Machine Learning CW1

[Crossword grid with squares numbered 1–16]
Across
  2. A machine learning technique that iteratively combines a set of simple, not very accurate classifiers (referred to as "weak" classifiers) into a classifier with high accuracy (a "strong" classifier) by upweighting the examples that the model is currently misclassifying.
  5. An N×N table that aggregates a classification model's correct and incorrect predictions (see the sketch after this list).
  7. One of a set of enumerated target values for a label.
  9. Grouping related examples, particularly during unsupervised learning.
  10. The primary algorithm for performing gradient descent on neural networks.
  14. The fraction of predictions that a classification model got right.
  15. Informally, a state reached during training in which training loss and validation loss change very little or not at all from iteration to iteration.
  16. A technique for handling outliers.
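
The clues about the N×N table (5 across) and the fraction of correct predictions (14 across) go hand in hand, and a few lines of Python make the connection concrete. This is a minimal sketch with made-up 3-class counts, not data from the coursework:

```python
import numpy as np

# Illustrative 3-class counts (assumed, not from the coursework):
# rows are actual classes, columns are predicted classes.
confusion = np.array([
    [50,  2,  3],   # actual class 0
    [ 4, 45,  6],   # actual class 1
    [ 1,  5, 40],   # actual class 2
])

# The fraction of predictions the model got right is the diagonal
# (correct predictions) divided by the total number of predictions.
accuracy = np.trace(confusion) / confusion.sum()
print(f"accuracy = {accuracy:.3f}")  # 0.865 for the numbers above
```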
Down
  1. A mechanism for estimating how well a model would generalize to new data, by testing the model against one or more non-overlapping data subsets withheld from the training set.
  3. A method of training an ensemble in which each constituent model trains on a random subset of training examples sampled with replacement (sketched after this list).
  4. Singular Value Decomposition.
  6. A model used as a reference point for comparing how well another model (typically a more complex one) is performing.
  7. The center of a cluster, as determined by a k-means or k-median algorithm.
  8. Converting a (usually continuous) feature into multiple binary features called buckets or bins, typically based on value ranges.
  10. The set of examples used in one iteration (that is, one gradient update) of model training.
  11. Principal Component Analysis.
  12. Abbreviation for augmented reality.
  13. A sophisticated gradient descent algorithm that rescales the gradients of each parameter, effectively giving each parameter an independent learning rate (sketched after this list).
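
The sampling-with-replacement described in 3 down fits in a few lines. Below is a hypothetical sketch assuming NumPy; the dataset size, model count, and training step are illustrative placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
n_examples, n_models = 1000, 5

# Each constituent model of the ensemble gets its own bootstrap
# sample: n_examples indices drawn *with* replacement, so some
# examples repeat and roughly 37% are left out of each subset.
for m in range(n_models):
    idx = rng.choice(n_examples, size=n_examples, replace=True)
    print(f"model {m}: {np.unique(idx).size} distinct examples")
    # train constituent model m on X[idx], y[idx] here
```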
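The per-parameter rescaling described in 13 down can also be captured compactly. This is an assumed minimal implementation of that idea; the function name, learning rate, and toy quadratic loss are illustrative choices, not from the coursework:

```python
import numpy as np

def adaptive_update(params, grads, accum, lr=1.0, eps=1e-8):
    # Accumulate the squared gradient of every parameter, then divide
    # each gradient by the root of its own running total, so each
    # parameter effectively gets an independent, decaying learning rate.
    accum += grads ** 2
    params -= lr * grads / (np.sqrt(accum) + eps)
    return params, accum

# Toy usage: minimize L(w) = w**2, whose gradient is 2 * w.
w = np.array([5.0])
accum = np.zeros_like(w)
for _ in range(100):
    w, accum = adaptive_update(w, 2 * w, accum)
print(w)  # w has moved from 5.0 to very near the minimum at 0.0
```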