AI Fundamentals — Hard
Key points
- Positional encoding adds sequence order information to token embeddings
- Self-attention in transformers is permutation-equivariant: without positional information it has no built-in notion of token order
- Positional encodings therefore help the model distinguish tokens by their position in the sequence (see the sketch after this list)
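
A minimal sketch of the classic sinusoidal scheme from "Attention Is All You Need", assuming NumPy; the function name and the example sizes (`seq_len=16`, `d_model=64`) are illustrative, not from the original card:

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Return a (seq_len, d_model) matrix of sinusoidal position codes.

    PE[pos, 2i]   = sin(pos / 10000^(2i / d_model))
    PE[pos, 2i+1] = cos(pos / 10000^(2i / d_model))
    """
    positions = np.arange(seq_len)[:, np.newaxis]            # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[np.newaxis, :]           # (1, d_model/2)
    angle_rates = 1.0 / np.power(10000.0, dims / d_model)    # geometric frequency schedule
    angles = positions * angle_rates                         # (seq_len, d_model/2)

    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)   # even dimensions: sine
    pe[:, 1::2] = np.cos(angles)   # odd dimensions: cosine
    return pe

# Hypothetical usage: the codes are added to token embeddings
# before the first attention layer, injecting order information.
embeddings = np.random.randn(16, 64)   # dummy (seq_len, d_model) embeddings
embeddings = embeddings + sinusoidal_positional_encoding(16, 64)
```

Because each dimension oscillates at a different frequency, every position gets a distinct code, which is what lets the otherwise order-blind attention layers tell positions apart.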
