## Across
- 3. Algorithm used to update weights during training
- 7. The term for adjusting weight values using gradients
- 10. When market or consumer behaviour changes, causing model accuracy to drop
- 13. The held-out dataset used for the final evaluation of generalization performance
- 14. The subset of data used for hyperparameter tuning and early stopping
- 16. One complete cycle through the training dataset
- 18. Neural network element that receives weighted inputs
- 19. A problem where a model fits the training data too closely but performs poorly on new data
- 21. The process of dividing data into training, validation, and testing subsets
- 23. A computational model inspired by the human brain used for prediction and learning
- 24. When a model is too simple and fails to capture business patterns
- 25. Number of samples processed before weights are updated
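Several of the Across answers (epoch, batch size, loss, gradient, weight updates) come together in an ordinary training loop. As a study aid, here is a minimal sketch in plain NumPy; all variable names and hyperparameter values are illustrative, not part of the puzzle:

```python
import numpy as np

# Minimal gradient-descent loop illustrating several clues above:
# epoch = one full pass over the training data,
# batch size = samples processed before each weight update,
# loss = function measuring how far predictions deviate from actuals,
# gradient = vector used to adjust the weights.

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))               # 100 samples, 3 features
true_w = np.array([2.0, -1.0, 0.5])         # weights the model should recover
y = X @ true_w + rng.normal(scale=0.1, size=100)

w = np.zeros(3)                             # weights to learn
lr, batch_size, epochs = 0.1, 20, 50        # illustrative hyperparameters

def mse(w):
    """Mean squared error of predictions X @ w against targets y."""
    return float(np.mean((X @ w - y) ** 2))

initial_loss = mse(w)
for epoch in range(epochs):                 # one epoch per outer iteration
    for start in range(0, len(X), batch_size):
        xb = X[start:start + batch_size]    # one batch of inputs
        yb = y[start:start + batch_size]
        grad = 2 * xb.T @ (xb @ w - yb) / len(xb)  # gradient of the batch loss
        w -= lr * grad                      # step against the gradient
final_loss = mse(w)
print(initial_loss, final_loss)
```

The loss drops from roughly the variance of the targets to roughly the noise floor, which is the behavior a learning curve would plot across epochs.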
## Down
- 1. When a trained model memorizes the training data too closely and performs poorly on the testing dataset
- 2. Error resulting from overly simplistic models
- 4. Function measuring how far predictions deviate from actuals
- 5. The phenomenon where model performance worsens due to changes in market or business conditions
- 6. Evaluation method that mitigates dependence on one specific train–test split
- 8. Provides nonlinearity inside a neural network
- 9. A matrix of pixel values fed into a neural model
- 11. Input and output example pair used during training
- 12. A plot comparing training and validation error across epochs
- 15. Opposite of overfitting
- 17. Error caused by overly complex models
- 20. Popular activation function used in modern neural networks
- 22. Vector giving the direction of steepest ascent of the loss; optimizers step in the opposite direction
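The clue about dividing data into training, validation, and testing subsets can also be sketched concretely. This is a hypothetical helper using only NumPy; the 70/15/15 ratios are illustrative:

```python
import numpy as np

# Hypothetical sketch of a train/validation/test split (a clue above).
# Shuffles indices, then carves them into three disjoint subsets.

def split_data(n_samples, train=0.7, val=0.15, seed=0):
    idx = np.random.default_rng(seed).permutation(n_samples)
    n_train = round(n_samples * train)      # training: fit the weights
    n_val = round(n_samples * val)          # validation: tune hyperparameters, early stopping
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:])          # testing: final generalization estimate

train_idx, val_idx, test_idx = split_data(1000)
print(len(train_idx), len(val_idx), len(test_idx))  # 700 150 150
```

Because the split depends on one shuffle, a single train/test partition can be unlucky; cross-validation (another clue above) repeats this process over several folds and averages the results.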