Gen AI and LLM - Set 2
Across
- 3. An input provided to a model that guides it to generate a response or output.
- 4. The primary output of language models, consisting of written words.
- 6. A conversational interaction, often facilitated by AI.
- 9. Further training a pre-trained model on a smaller dataset to adapt it to specific tasks.
- 10. A type of neural network architecture that underpins many modern LLMs.
- 11. A unit of text processed by language models, typically a word or subword.
- 13. A modeling error where a model learns noise in the training data instead of the underlying pattern.
- 15. The surrounding information that helps models understand the meaning of text.
Down
- 1. A learning approach where a model performs tasks without specific training examples for those tasks.
- 2. Information used to train AI models, including text and images.
- 5. The process of teaching a model using large datasets to improve its performance.
- 7. Referring to models that can create new content, such as text or images.
- 8. A numerical representation of words or phrases in a vector space, enabling models to process them effectively.
- 12. The act of generating predictions or outputs from a trained model.
- 14. A mechanism in neural networks that allows models to focus on relevant parts of the input data when generating output.