Across
- 2. A type of learning where the model learns from labeled data
- 4. Layer that functions as the fully connected network (FCNN) in VGG16
- 5. Technique of creating new training examples by applying transformations
- 7. Strategy for setting initial values in a neural network
- 9. Table used to assess the performance of a classification model
- 10. A hyperparameter tuning process
- 12. Number of sets of trainable parameters in a GRU
- 13. Neural network designed for sequential data
- 15. Google's pretrained ML model
- 16. Backpropagation algorithm for updating weights in an RNN
- 19. Another name for the true negative rate in a confusion matrix
- 20. One complete pass through the entire training dataset
- 24. Activation function used in the output layer for classification
- 29. A pooling operation that computes the average value in each region
- 30. Evaluation of a model on a separate dataset to tune hyperparameters
- 31. Activation function similar to sigmoid, ranges from -1 to 1
- 33. An algorithm used to update the weights in a neural network
- 34. Model performs well on training data but poorly on new data
- 36. Controls the size of steps in gradient descent
- 37. Training algorithm for updating weights in a neural network
- 38. Squared-error-based algorithm for updating weights in neural networks
- 40. A pooling operation that selects the maximum value in each region
- 42. A pretrained model with 13 convolutional and 3 dense layers
- 43. Issue where gradients become very small during training
- 44. A layer that converts a multi-dimensional tensor into a 1D vector
- 47. A classification problem where each sample can belong to one of two classes
- 48. Technique to prevent overfitting by randomly deactivating nodes in a neural network
- 50. Architecture where information flows in one direction
- 51. A technique to stop training when the validation loss stops improving
- 52. Optimization algorithm that accumulates past gradients to accelerate learning
- 54. Basic building block of a neural network
- 56. Adaptive Linear Neuron, a single-layer neural network
- 58. Determines the output of a node in a neural network
- 60. Key operation in Convolutional Neural Networks (CNNs)
- 61. A technique to fine-tune a pre-trained model for a new task
- 62. Technique to prevent exploding gradients in RNNs
Down
- 1. Optimization algorithm for finding the minimum
- 3. A type of neural network that handles sequential data
- 6. Optimization algorithm using one example at a time
- 8. Process of selecting and representing relevant information from input data
- 11. Standard version of a neural network or algorithm
- 14. Categorical loss function used in CNNs
- 17. Adapts a pre-trained model for a new task
- 18. Also known as recall or TPR
- 21. A CNN layer where the dimensions of the input are reduced
- 22. A configuration setting supplied to the model before training
- 23. Number of sets of trainable parameters in an LSTM
- 25. A set of data samples used in one iteration of training
- 26. Popular activation function introducing non-linearity
- 27. A type of learning where the model learns from unlabeled data
- 28. Simplest form of a neural network, single-layer binary classifier
- 32. Additional parameter representing an offset in neural networks
- 35. An unsupervised learning rule
- 39. A loss function used for regression problems in CNNs
- 41. Non-linear transformation applied to a neuron's output
- 45. Linearly inseparable logic gate
- 46. Model with high bias and low variance
- 49. A type of recurrent neural network cell with a memory-like operation
- 53. Adding extra pixels or values around the edges of an image in a CNN
- 55. Another name for the kernel used to reduce dimensions in CNNs
- 57. The number of pixels to slide the kernel across the input
- 59. A type of recurrent neural network with memory cell
