Deep Learning SH2024

[Crossword grid with entries numbered 1-20]
Across
  1. Type of recurrent neural network designed to remember long-term dependencies.
  4. Popular activation function known for its non-linearity and simplicity.
  9. Technique to standardize inputs to a layer, improving training speed and stability.
  14. Deep learning architecture that introduced the concept of residual connections to combat degradation.
  15. Basic building block of neural networks, introduced in 1958.
  16. Problem where gradients become too small, hindering training in deep networks.
  17. Neural network model designed for sequence prediction tasks like language modeling.
  18. Framework consisting of a generator and a discriminator, used for image generation.
  19. A key optimization technique used to minimize loss in neural networks.
  20. Output function used in classification problems, transforming logits into probabilities.
Down
  2. Activation function that maps input values to a range between 0 and 1.
  3. Type of autoencoder used to remove noise from input data.
  5. Mathematical operation used in CNNs to extract spatial features from images.
  6. A pioneering convolutional neural network architecture for digit classification.
  7. Regularization technique to prevent overfitting by randomly deactivating neurons during training.
  8. Algorithm used for training deep neural networks by adjusting weights.
  10. A commonly used loss function for classification problems.
  11. A popular variant of gradient descent known for combining momentum and adaptive learning rates.
  12. Problem where a model performs well on training data but poorly on unseen data.
  13. Neural network model used for unsupervised learning and dimensionality reduction.