Deep Learning Terminology
Across
- 7. Process of adjusting the parameters of a model to minimize the loss function, typically using algorithms like gradient descent.
- 10. A regularization technique commonly used in deep learning to prevent overfitting. Dropout randomly "drops out" (sets to zero) a fraction of the neurons in a layer during training, forcing the network to learn more robust and generalizable features (a minimal sketch follows this list).
- 11. Operation used in convolutional neural networks (CNNs) to reduce the spatial dimensions of feature maps by aggregating information from local regions (see the max-pooling sketch after this list).
- 13. An optimization algorithm used to minimize the loss function during the training of neural networks. It works by iteratively adjusting the parameters (weights and biases) of the network in the direction of steepest descent on the loss surface (a worked sketch follows this list).
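The dropout clue (10 Across) is easy to see in code. Below is a minimal NumPy sketch of "inverted" dropout, the variant most libraries implement; the layer shape and drop probability are illustrative choices, not anything prescribed by the clue.

```python
import numpy as np

def dropout(activations, drop_prob=0.5, training=True):
    """Inverted dropout: randomly zero a fraction of the units during
    training and rescale the survivors so the expected activation is
    unchanged; at inference time the input passes through untouched."""
    if not training or drop_prob == 0.0:
        return activations
    mask = np.random.rand(*activations.shape) >= drop_prob
    return activations * mask / (1.0 - drop_prob)

h = np.ones((2, 4))               # a toy layer of activations
print(dropout(h, drop_prob=0.5))  # about half the entries become 0.0, the rest 2.0
```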
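Pooling (11 Across) takes only a few lines as well. This sketch assumes the most common variant, 2x2 max pooling, applied to a single-channel feature map whose height and width are even.

```python
import numpy as np

def max_pool_2x2(feature_map):
    """Halve both spatial dimensions by keeping the maximum of each
    non-overlapping 2x2 window (height and width must be even)."""
    h, w = feature_map.shape
    windows = feature_map.reshape(h // 2, 2, w // 2, 2)
    return windows.max(axis=(1, 3))

fm = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool_2x2(fm))  # the 4x4 map shrinks to [[5, 7], [13, 15]]
```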
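Gradient descent (13 Across) can be demonstrated without any framework at all. In the sketch below, the data, the linear model y = w*x + b, and the learning rate of 0.1 are all illustrative; the update rule itself is the algorithm the clue describes.

```python
import numpy as np

# Illustrative data generated from y = 2x + 1.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])

w, b = 0.0, 0.0      # parameters to learn
learning_rate = 0.1  # step size (see 2 Down)

for step in range(200):
    error = (w * x + b) - y
    # Gradients of the mean squared error with respect to w and b.
    grad_w = 2 * np.mean(error * x)
    grad_b = 2 * np.mean(error)
    # Step each parameter in the direction of steepest descent.
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"w = {w:.3f}, b = {b:.3f}")  # converges toward w = 2, b = 1
```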
Down
- 1. A subset of machine learning where artificial neural networks with multiple layers (deep neural networks) are trained on large datasets to learn hierarchical representations of data, leading to more sophisticated learning and decision-making.
- 2. Hyperparameter that determines the size of the steps taken during optimization, influencing the speed and stability of the training process.
- 3. A phenomenon in which a machine learning model memorizes the training data instead of generalizing from it. Overfitting typically occurs when a model is excessively complex relative to the amount of training data available.
- 4. Technique used to prevent overfitting by adding a penalty term to the loss function that discourages overly large or complex model parameters (an L2 example appears after this list).
- 5. A method for training neural networks by computing the gradient of the loss function with respect to the network's parameters. Backpropagation allows for efficient adjustment of the network's weights and biases based on the error signal propagated backward through the network (a worked training loop follows this list).
- 6. Process of further training a pre-trained model on new data to adapt it to a specific task or domain.
- 8. Computing systems inspired by the biological neural networks of animal brains. ANNs consist of interconnected nodes, or "neurons," organized in layers. Deep learning models are a type of ANN with many layers.
- 9. A mathematical function applied to the output of each neuron in a neural network layer. Activation functions introduce non-linearities into the network, enabling it to learn complex mappings between inputs and outputs.
- 12. A function used to compute the similarity between pairs of data points in kernel methods, such as Support Vector Machines (SVMs); in convolutional neural networks, the same word names the small matrix of weights (the filter) that is slid across the input to produce a feature map.
- 14. One complete pass through the entire training dataset during the optimization process.
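Several of the Down clues (backpropagation, learning rate, activation function, epoch) come together in a single training loop. The sketch below trains a toy two-layer network on XOR with manually derived gradients; the architecture, random seed, learning rate, and epoch count are illustrative choices, not a recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy dataset: XOR, the classic problem a single linear layer cannot fit.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

# Illustrative sizes: 2 inputs, 8 hidden units, 1 output.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
lr = 0.5  # learning rate: the step-size hyperparameter (2 Down)

for epoch in range(5000):  # one epoch = one full pass over the dataset (14 Down)
    # Forward pass; tanh and sigmoid are the activation functions (9 Down).
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass (5 Down): propagate the error signal from the output
    # back through each layer using the chain rule.
    dp = 2 * (p - Y) / Y.size        # derivative of the mean squared error
    dz2 = dp * p * (1 - p)           # through the sigmoid
    dW2, db2 = h.T @ dz2, dz2.sum(axis=0)
    dh = dz2 @ W2.T
    dz1 = dh * (1 - h ** 2)          # through the tanh
    dW1, db1 = X.T @ dz1, dz1.sum(axis=0)

    # Gradient-descent update (13 Across).
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(p.ravel(), 2))  # should approach [0, 1, 1, 0]
```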
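Regularization (4 Down) is often just one extra term in that loss. Here is a minimal sketch of an L2 (weight-decay) penalty; the strength `lam` is an illustrative default and would in practice be tuned as a hyperparameter.

```python
import numpy as np

def l2_regularized_loss(predictions, targets, weights, lam=1e-2):
    """Mean squared error plus an L2 penalty: the penalty grows with the
    squared magnitude of the weights, so the optimizer is nudged toward
    simpler models; lam sets the strength of that trade-off."""
    mse = np.mean((predictions - targets) ** 2)
    penalty = lam * np.sum(weights ** 2)
    return mse + penalty
```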