What is ‘in-context learning’ in large language models and how does it differ from fine-tuning?

AI Fundamentals — Hard

Key points

  • In-context learning lets an LLM adapt to a new task at inference time by conditioning on instructions and examples placed in the prompt; the model's weights are never updated
  • Fine-tuning permanently updates model weights through further gradient-based training on task-specific data
  • In-context learning is faster and cheaper for quick adaptation, since it needs no training run — but its effect lasts only as long as the examples remain in the context window
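The contrast above can be made concrete with a few-shot prompt. The sketch below (illustrative names; a hypothetical sentiment-labeling task) assembles labeled examples directly into the prompt text — this string is all the "learning" the model receives, which is why no weights change:

```python
def build_few_shot_prompt(examples, query):
    """Assemble a few-shot prompt for in-context learning.

    The task is conveyed entirely by the labeled examples in the
    context window; nothing about the model itself is modified.
    """
    blocks = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    # The final block leaves the label blank for the model to complete.
    blocks.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(blocks)

examples = [
    ("The plot was gripping from start to finish.", "positive"),
    ("I walked out halfway through.", "negative"),
]
prompt = build_few_shot_prompt(examples, "A forgettable, by-the-numbers sequel.")
print(prompt)
```

Fine-tuning, by contrast, would train on those same example pairs with gradient descent, baking the task into the weights so that no demonstrations are needed at inference time.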
