
[Crossword grid with squares numbered 1–29 not shown]
Across
  9. Model is too simple and fails to capture underlying patterns
  11. Techniques to make model decisions understandable to humans
  13. Ensemble of decision trees that reduces overfitting
  17. Technique that transforms features into orthogonal components
  19. Harmonic mean of precision and recall
  20. Proportion of true positives correctly identified from actual positives
  21. Sequential ensemble method that builds models to correct previous errors
  24. Penalizing model complexity to prevent overfitting
  25. Model learns noise and performs poorly on new data
  26. Method to estimate model performance by splitting data into folds
  28. Visual that shows cumulative gain from model-based targeting
  29. Loss function that penalizes false classifications with probability estimates
Down
  1. Choosing the most relevant variables for model building
  2. Classifier that finds the best boundary between classes
  3. Tracking model performance over time after deployment
  4. Analyzing relationships and networks between entities
  5. Regularization that shrinks coefficients but doesn’t zero them out
  6. Table showing true vs predicted classification outcomes
  7. Practices for deploying, monitoring and maintaining machine learning in production
  8. Measure showing improvement of model versus random selection
  10. Combining multiple models to improve prediction accuracy
  11. Initial investigations to discover patterns, spot anomalies and test hypotheses
  12. Local approximation method for explaining black-box models
  14. Reducing the number of input variables while retaining information
  15. Regularization technique that can set some coefficients to zero
  16. Dividing data into training and testing subsets
  18. Process of releasing a model for real-world use
  22. Game-theoretic method to explain individual predictions
  23. Proportion of true positives among predicted positives
  27. Metric measuring classifier’s ability across all thresholds
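
Several of the clues above define related metrics: the confusion matrix, precision (true positives among predicted positives), recall (true positives among actual positives), and the F1 score (their harmonic mean). A minimal Python sketch, not part of the puzzle, showing how these definitions connect; the function names and sample labels here are illustrative, not from the source:

```python
# Sketch of the metrics defined in the clues: confusion matrix counts,
# precision, recall, and F1 (harmonic mean of precision and recall).

def confusion_counts(y_true, y_pred):
    """Tally the four cells of a binary confusion matrix."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def f1_score(y_true, y_pred):
    """F1 = 2 * precision * recall / (precision + recall)."""
    tp, fp, fn, _ = confusion_counts(y_true, y_pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example labels: 3 TP, 1 FP, 1 FN, 3 TN -> precision = recall = F1 = 0.75
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]
print(f1_score(y_true, y_pred))  # 0.75
```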