Across
- 4. Model with high bias and low variance
- 7. Activation function used in the output layer for classification
- 8. Categorical loss function used in a CNN
- 11. Training algorithm for updating weights in a neural network
- 12. Google's pretrained ML model
- 15. Popular activation function introducing non-linearity
- 16. Technique for creating new training examples by applying transformations
- 19. Another name for a kernel used to reduce dimensions in CNNs
- 22. A type of recurrent neural network with a memory cell
- 23. A type of neural network that handles sequential data
- 27. Optimization algorithm using one example at a time
- 28. Issue where gradients become very small during training
- 29. Adapts a pre-trained model for a new task
- 32. Optimization algorithm that accumulates past gradients to accelerate learning
- 34. A set of data samples used in one iteration of training
- 35. Process of selecting and representing relevant information from input data
- 39. Controls the size of steps in gradient descent
- 40. Strategy for setting initial values in a neural network
- 42. Optimization algorithm for finding the minimum (see the sketch after this list)
- 43. Technique to prevent overfitting by dropping nodes in neural networks
- 44. Adaptive Linear Neuron, a single-layer neural network
- 45. Determines the output of a node in a neural network
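
As a brief aside, clues 27, 39 and 42 above all point at gradient-based optimization. The following is a minimal Python sketch, not part of the puzzle, of how gradient descent steps against the gradient, how the learning rate controls the step size, and how the stochastic variant updates on one example at a time; the toy 1-D linear model, the data, and the hyperparameter values are illustrative assumptions.

```python
import random

def sgd_linear_fit(data, lr=0.01, epochs=50):
    """Fit y = w*x + b by stochastic gradient descent on squared error."""
    w, b = 0.0, 0.0
    for _ in range(epochs):              # one epoch = one full pass over the data (30 down)
        random.shuffle(data)
        for x, y in data:                # one example at a time -> the "stochastic" variant (27 across)
            error = (w * x + b) - y
            w -= lr * 2 * error * x      # learning rate (39 across) scales each step
            b -= lr * 2 * error          # step against the gradient (42 across)
    return w, b

# Toy data drawn from y = 3x + 1; the fit should approach w ~ 3, b ~ 1.
print(sgd_linear_fit([(x, 3 * x + 1) for x in range(-5, 6)]))
```
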
Down
- 1. Configuration parameters set for the model before training
- 2. Non-linear transformation applied to a neuron's output
- 3. Key operation in Convolutional Neural Networks (CNNs)
- 5. Activation function similar to sigmoid, ranges from -1 to 1
- 6. A type of recurrent neural network cell with a memory-like operation
- 7. Number of trainable parameters in a GRU
- 9. Simplest form of a neural network, single-layer binary classifier
- 10. Also known as recall or TPR
- 13. Table used to assess the performance of a classification model
- 14. A pretrained model with 13 convolutional and 3 dense layers
- 17. Model performs well on training data but poorly on new data
- 18. Number of trainable parameters in an LSTM (see the sketch after this list)
- 20. A CNN layer where the dimensions of the input are reduced
- 21. Neural network designed for sequential data
- 22. Squared-error algorithm for updating weights in neural networks
- 24. Layer functioning as a fully connected network (FCNN) in VGG16
- 25. Basic building block of a neural network
- 26. Additional parameter representing an offset in neural networks
- 30. One complete pass through the entire training dataset
- 31. Technique to prevent exploding gradients in RNNs
- 33. Architecture where information flows in one direction
- 36. Adding extra pixels or values around the edges of an image in a CNN
- 37. Evaluation of a model on a separate dataset to tune hyperparameters
- 38. An unsupervised learning rule
- 41. Standard version of a neural network or algorithm
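
Clues 7 and 18 down ask for trainable-parameter counts. The minimal sketch below shows the arithmetic usually assumed for the classic cell formulations: a GRU bundles 3 gate/candidate weight sets and an LSTM bundles 4, each of size hidden×input + hidden×hidden plus a bias of size hidden; libraries that add an extra "reset-after" bias will report slightly larger GRU counts. The input and hidden sizes used here are illustrative assumptions.

```python
def rnn_cell_params(input_dim: int, hidden_dim: int, num_gates: int) -> int:
    """Per gate/candidate: W_x (hidden x input) + W_h (hidden x hidden) + bias (hidden)."""
    per_gate = hidden_dim * input_dim + hidden_dim * hidden_dim + hidden_dim
    return num_gates * per_gate

# Illustrative sizes only (assumed): 10 input features, 32 hidden units.
print("GRU :", rnn_cell_params(10, 32, num_gates=3))   # update, reset, candidate -> 4128
print("LSTM:", rnn_cell_params(10, 32, num_gates=4))   # input, forget, output, candidate -> 5504
```
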
