Machine Learning — Hard
Key points
- LoRA freezes the pretrained weights and injects trainable low-rank decomposition matrices, typically into the attention projection layers (see the sketch after this list)
- Full fine-tuning, by contrast, updates every model parameter across all layers, including the attention weights
- Because only the small adapter matrices receive gradients and optimizer states, LoRA sharply reduces GPU memory requirements during training
- Despite training far fewer parameters, LoRA achieves performance comparable to full fine-tuning on many downstream tasks
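Below is a minimal sketch of how an injected low-rank layer might look, assuming PyTorch; the class name LoRALinear and the hyperparameters r and alpha are illustrative choices, not a reference implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update.

    The effective weight is W + (alpha / r) * B @ A, where A and B are
    the low-rank decomposition matrices; only A and B are trained.
    (Illustrative sketch, not the reference LoRA implementation.)
    """

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pretrained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # A: (r, in_features), B: (out_features, r). B starts at zero so
        # the adapted model initially matches the pretrained one.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen pretrained path plus the scaled low-rank update.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

# Example: wrapping a 768-wide projection leaves only the adapter trainable.
proj = LoRALinear(nn.Linear(768, 768), r=8)
trainable = sum(p.numel() for p in proj.parameters() if p.requires_grad)
print(trainable)  # 12288 trainable vs. 589824 frozen weights
```

Wrapping, say, the query and value projections this way leaves 2·r·d trainable parameters per projection instead of d², which is where the memory savings in the list above come from: gradients and optimizer states are kept only for the small A and B matrices.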
