Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


LIME

Local Interpretable Model-Agnostic Explanations

General · Introduced 2016 · 378 papers
Source Paper

Description

LIME, or Local Interpretable Model-Agnostic Explanations, is an algorithm that can explain the predictions of any classifier or regressor in a faithful way by approximating it locally with an interpretable model. It perturbs a single data sample, tweaking its feature values and observing the resulting impact on the output, thereby acting as an "explainer" for that prediction. The output of LIME is a set of explanations representing the contribution of each feature to the prediction for a single sample, which is a form of local interpretability.

The interpretable model in LIME can be, for instance, a linear regression or a decision tree, trained on small perturbations of the original input (e.g. adding noise, removing words, hiding parts of the image) to provide a good local approximation of the black-box model's behaviour around that sample.
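The perturb-query-fit loop described above can be sketched in a few lines of NumPy. This is a minimal illustration of the idea for tabular regression, not the official `lime` package API; the black-box function, noise scale, and kernel width below are all illustrative assumptions.

```python
import numpy as np

def lime_explain(predict_fn, x, n_samples=1000, noise_scale=0.5,
                 kernel_width=0.75, seed=0):
    """Explain predict_fn's output at x with a locally weighted linear model.

    Illustrative sketch of the LIME idea, not the official `lime` API.
    """
    rng = np.random.default_rng(seed)
    # 1. Perturb the sample: draw points in a Gaussian neighbourhood of x.
    X_pert = x + rng.normal(scale=noise_scale, size=(n_samples, x.shape[0]))
    # 2. Query the black-box model on the perturbed points.
    y_pert = predict_fn(X_pert)
    # 3. Weight each perturbation by its proximity to x (exponential kernel).
    dists = np.linalg.norm(X_pert - x, axis=1)
    weights = np.exp(-dists**2 / kernel_width**2)
    # 4. Fit a weighted linear surrogate (sqrt-weight trick + least squares).
    sw = np.sqrt(weights)[:, None]
    A = np.hstack([X_pert, np.ones((n_samples, 1))])  # intercept column
    coef, *_ = np.linalg.lstsq(A * sw, y_pert * sw[:, 0], rcond=None)
    return coef[:-1]  # per-feature local contributions (intercept dropped)

# Hypothetical black box that in truth depends only on feature 0.
black_box = lambda X: 3.0 * X[:, 0]
coefs = lime_explain(black_box, np.array([1.0, 2.0]))
# Feature 0 dominates the local explanation; feature 1 is near zero.
```

The proximity weighting is what makes the surrogate local: far-away perturbations barely influence the fit, so the linear coefficients describe the black-box model's behaviour only in the neighbourhood of the explained sample.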

Papers Using This Method

- Robustness of Misinformation Classification Systems to Adversarial Examples Through BeamAttack (2025-06-30)
- IXAII: An Interactive Explainable Artificial Intelligence Interface for Decision Support Systems (2025-06-26)
- Explainable AI for Radar Resource Management: Modified LIME in Deep Reinforcement Learning (2025-06-26)
- Analyzing Emotions in Bangla Social Media Comments Using Machine Learning and LIME (2025-06-11)
- Local MDI+: Local Feature Importances for Tree-Based Models (2025-06-10)
- A Comprehensive Analysis of COVID-19 Detection Using Bangladeshi Data and Explainable AI (2025-06-08)
- Explainable-AI powered stock price prediction using time series transformers: A Case Study on BIST100 (2025-06-01)
- Multi-criteria Rank-based Aggregation for Explainable AI (2025-05-30)
- Interpretable phenotyping of Heart Failure patients with Dutch discharge letters (2025-05-30)
- DiffLIME: Enhancing Explainability with a Diffusion-Based LIME Algorithm for Fault Diagnosis (2025-05-30)
- MLRan: A Behavioural Dataset for Ransomware Analysis and Detection (2025-05-24)
- Towards Trustworthy Keylogger detection: A Comprehensive Analysis of Ensemble Techniques and Feature Selections through Explainable AI (2025-05-22)
- Comprehensive Lung Disease Detection Using Deep Learning Models and Hybrid Chest X-ray Data with Explainable AI (2025-05-21)
- CSAGC-IDS: A Dual-Module Deep Learning Network Intrusion Detection Model for Complex and Imbalanced Data (2025-05-20)
- Explainable AI for Securing Healthcare in IoT-Integrated 6G Wireless Networks (2025-05-20)
- Minimizing False-Positive Attributions in Explanations of Non-Linear Models (2025-05-16)
- Enhanced Photonic Chip Design via Interpretable Machine Learning Techniques (2025-05-14)
- Deeply Explainable Artificial Neural Network (2025-05-10)
- Interactive Diabetes Risk Prediction Using Explainable Machine Learning: A Dash-Based Approach with SHAP, LIME, and Comorbidity Insights (2025-05-08)
- Exploring Convolutional Neural Networks for Rice Grain Classification: An Explainable AI Approach (2025-05-07)