Across
- 1. Information used to train AI models, including text and images.
- 5. A numerical representation of words or phrases in a vector space, making them easier for models to process.
- 6. An input given to a model to generate a response or output.
- 8. A conversational interaction, often facilitated by AI.
- 9. Adjusting a pre-trained model on a smaller dataset for a specific task.
- 11. The process of teaching a model using large datasets to improve its performance.
- 12. Referring to models that can create new content, such as text or images.
- 13. A mechanism in neural networks that allows models to focus on relevant parts of the input data when generating output.
- 15. A learning approach in which a model performs tasks without having seen any training examples for them.
Down
- 2. A type of neural network architecture that underpins many modern LLMs.
- 3. A modeling error where a model learns noise in the training data instead of the underlying pattern.
- 4. The primary output of language models, consisting of written words.
- 7. The surrounding information that helps models understand the meaning of text.
- 10. A unit of text processed by language models, typically a word or subword.
- 14. The act of generating predictions or outputs from a trained model.
