Across
- 3. A well-known phenomenon in large language models in which the system produces an answer that is factually incorrect, irrelevant, or nonsensical because of limitations in its training data and architecture.
- 6. A scenario where a model can perform tasks it hasn't been explicitly trained on. In generative models, this can involve generating data from categories not present in the training data.
- 7. Neural networks designed to work with sequences of data, where information from previous steps is carried forward. They are often used for text generation, speech recognition, and time series prediction.
- 8. This LLM, jointly created by Carnegie Mellon University and Google, employs a unique technique called "permutation language modeling".
Down
- 1. The act of dissecting text or other forms of data into smaller components known as tokens. In the context of language models, tokens encompass words, subwords, or characters.
- 2. An approach developed by Ian Goodfellow that employs a pair of neural networks, a generator and a discriminator, trained in an adversarial fashion.
- 4. A neural network architecture used for data compression and reconstruction. It consists of an encoder that maps input data to a lower-dimensional latent representation and a decoder that reconstructs the original data from that representation.
- 5. SAP's new generative AI assistant.
