Machine Learning CW1
Across
- 2. A machine learning technique that iteratively combines a set of simple and not very accurate classifiers (referred to as "weak" classifiers) into a classifier with high accuracy (a "strong" classifier) by upweighting the examples that the model is currently misclassifying (a minimal sketch of this loop follows the Across list).
- 5. An N×N table that aggregates a classification model's correct and incorrect guesses.
- 7. One of a set of enumerated target values for a label.
- 9. Grouping related examples, particularly during unsupervised learning.
- 10. The primary algorithm for performing gradient descent on neural networks.
- 14. The fraction of predictions that a classification model got right.
- 15. Informally, a state reached during training in which training loss and validation loss change very little or not at all from iteration to iteration.
- 16. A technique for handling outliers.
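
For context on clue 2 above, here is a minimal NumPy sketch of the reweighting loop it describes, assuming decision stumps over a single 1-D feature and labels in {-1, +1}. The helper names (`fit_stump`, `train_ensemble`) are illustrative, not part of the coursework:

```python
import numpy as np

def fit_stump(x, y, w):
    # Hypothetical weak learner: pick the threshold/sign pair with the
    # lowest weighted error on a 1-D feature.
    best = None
    for t in np.unique(x):
        for sign in (1, -1):
            pred = np.where(x > t, sign, -sign)
            err = float(np.sum(w * (pred != y)))
            if best is None or err < best[0]:
                best = (err, t, sign)
    return best

def train_ensemble(x, y, rounds=10):
    n = len(x)
    w = np.full(n, 1.0 / n)                    # start from uniform example weights
    models = []
    for _ in range(rounds):
        err, t, sign = fit_stump(x, y, w)
        err = min(max(err, 1e-10), 1 - 1e-10)  # guard the log below
        alpha = 0.5 * np.log((1 - err) / err)  # this round's vote weight
        pred = np.where(x > t, sign, -sign)
        w *= np.exp(-alpha * y * pred)         # upweight misclassified examples
        w /= w.sum()                           # renormalize to a distribution
        models.append((alpha, t, sign))
    return models
```

Each round fits a weak classifier to the current weights, then increases the weight of every example that classifier got wrong, so the next round concentrates on the hard cases.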
Down
- 1. A mechanism for estimating how well a model would generalize to new data by testing the model against one or more non-overlapping data subsets withheld from the training set.
- 3. A method to train an ensemble where each constituent model trains on a random subset of training examples sampled with replacement.
- 4. Singular Value Decomposition.
- 6. A model used as a reference point for comparing how well another model (typically, a more complex one) is performing.
- 7. The center of a cluster as determined by a k-means or k-median algorithm.
- 8. Converting a (usually continuous) feature into multiple binary features called buckets or bins, typically based on value range.
- 10. The set of examples used in one iteration (that is, one gradient update) of model training.
- 11. Principal Component Analysis.
- 12. Abbreviation for augmented reality.
- 13. A sophisticated gradient descent algorithm that rescales the gradients of each parameter, effectively giving each parameter an independent learning rate (a minimal sketch follows this list).
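
For context on clue 13 above, a minimal NumPy sketch of the per-parameter rescaling it describes, assuming a toy quadratic objective. All names here are illustrative:

```python
import numpy as np

def adaptive_step(params, g, accum, lr=0.1, eps=1e-8):
    # Accumulate squared gradients, then shrink each parameter's step
    # by the root of its own accumulator: an independent learning rate.
    accum += g ** 2
    params -= lr * g / (np.sqrt(accum) + eps)
    return params, accum

# Toy run: minimize f(p) = sum(p ** 2), whose gradient is 2 * p.
params = np.array([3.0, -4.0])
accum = np.zeros_like(params)
for _ in range(200):
    params, accum = adaptive_step(params, 2 * params, accum)
print(params)  # moves toward [0, 0]
```

Because the accumulator only grows, parameters that have seen large gradients take smaller steps over time, while rarely updated parameters keep a comparatively large effective learning rate.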