What is the difference between LIME and SHAP for model interpretability?

Machine Learning — Hard

Key points

  • LIME fits a simple surrogate model (typically a weighted linear model) on perturbed samples around one prediction; SHAP attributes the prediction to features using Shapley values from cooperative game theory
  • LIME's explanation is only a local approximation and can change with the sampling and kernel settings; SHAP's attributions satisfy local accuracy (they sum to the difference between the model's prediction and its expected output) and consistency
  • LIME explains individual predictions only; SHAP's per-prediction attributions can also be aggregated across a dataset into global feature importance
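To make the Shapley idea concrete, here is a minimal sketch that computes exact Shapley values for a hypothetical two-feature toy model (the model, baseline, and instance are illustrative assumptions, not part of any library): each feature's attribution is its average marginal contribution over all orderings, and the attributions sum to the prediction minus the baseline output (SHAP's local accuracy property).

```python
from itertools import combinations
from math import factorial

# Hypothetical toy model with an interaction term: f(x0, x1) = 2*x0 + 3*x1 + x0*x1
def model(x0, x1):
    return 2 * x0 + 3 * x1 + x0 * x1

# Baseline (reference) input and the instance to explain — illustrative values.
baseline = {0: 0.0, 1: 0.0}
instance = {0: 1.0, 1: 2.0}

def value(subset):
    # Features in `subset` take the instance's values; the rest stay at baseline.
    x = {i: (instance[i] if i in subset else baseline[i]) for i in baseline}
    return model(x[0], x[1])

def shapley(feature, features):
    # Exact Shapley value: weighted average of the feature's marginal
    # contribution over every subset of the remaining features.
    n = len(features)
    others = [f for f in features if f != feature]
    total = 0.0
    for r in range(len(others) + 1):
        for subset in combinations(others, r):
            s = set(subset)
            weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
            total += weight * (value(s | {feature}) - value(s))
    return total

phis = {f: shapley(f, [0, 1]) for f in (0, 1)}
# Local accuracy: attributions sum to f(instance) - f(baseline) = 10 - 0.
print(phis, sum(phis.values()))
```

Exact enumeration is exponential in the number of features, which is why the SHAP library relies on approximations (e.g. KernelSHAP, or TreeSHAP for tree ensembles). LIME, by contrast, would fit a proximity-weighted linear model on random perturbations around `instance` and report its coefficients.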
