July Newsletter

[Crossword grid with squares numbered 1–12]
Across
  3. A classification metric measuring the proportion of actual positives that the model correctly identifies.
  7. A technique for assessing how well a machine learning model generalizes to an independent dataset, often using multiple splits of the data.
  9. A configuration external to the model whose value cannot be estimated from the data, typically set before training to control the behavior of the learning algorithm.
  10. In anomaly detection, the process of identifying data points that deviate significantly from the norm.
  12. The number of features or variables in a dataset, which can affect the complexity and performance of machine learning models.
Down
  1. A graph showing a machine learning model's performance over time or as a function of the amount of training data, used to diagnose learning progress and potential overfitting or underfitting.
  2. A type of artificial neuron that serves as a building block of neural networks and can perform binary classification.
  4. A regularization method for neural networks in which randomly selected neurons are ignored during training to prevent overfitting.
  5. The process of scaling individual features to a standard range or distribution, often to improve the performance and convergence of machine learning models.
  6. An ensemble learning technique in which multiple versions of a predictor are trained on different subsets of the data and their predictions are aggregated to improve overall performance.
  8. A Python library designed for efficient out-of-core data processing and visualization, enabling the handling of large datasets that don't fit into memory.
  11. A mathematical function that measures the difference between predicted values and actual values, guiding the optimization process during model training.