OptiLIME: Optimized LIME Explanations for Diagnostic Computer Algorithms
- URL: http://arxiv.org/abs/2006.05714v3
- Date: Mon, 7 Feb 2022 19:21:23 GMT
- Title: OptiLIME: Optimized LIME Explanations for Diagnostic Computer Algorithms
- Authors: Giorgio Visani, Enrico Bagli, Federico Chesani
- Abstract summary: Local Interpretable Model-Agnostic Explanations (LIME) is a popular method for interpreting any kind of Machine Learning (ML) model.
LIME is widespread across different domains, although its instability is one of the major shortcomings.
We propose a framework to maximise stability, while retaining a predefined level of adherence.
- Score: 2.570261777174546
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Local Interpretable Model-Agnostic Explanations (LIME) is a popular
method for interpreting any kind of Machine Learning (ML) model. It
explains one ML prediction at a time, by learning a simple linear model around
the prediction. The model is trained on randomly generated data points, sampled
from the training dataset distribution and weighted according to the distance
from the reference point - the one being explained by LIME. Feature selection
is applied to keep only the most important variables. LIME is widespread across
different domains, although its instability - a single prediction may obtain
different explanations - is one of the major shortcomings. This is due to the
randomness in the sampling step, as well as to the flexibility in tuning the
weights; together, these cause a lack of reliability in the retrieved
explanations, making LIME adoption problematic. In Medicine especially,
clinical professionals' trust is mandatory for the acceptance of an explainable
algorithm, considering the importance of the decisions at stake and the related
legal issues. In this paper, we highlight a trade-off between an explanation's
stability and its adherence, namely how closely it resembles the ML model. Exploiting
our innovative discovery, we propose a framework to maximise stability, while
retaining a predefined level of adherence. OptiLIME provides freedom to choose
the best adherence-stability trade-off level and, more importantly, it clearly
highlights the mathematical properties of the retrieved explanation. As a
result, the practitioner is provided with tools to decide whether the
explanation is reliable, according to the problem at hand. We extensively test
OptiLIME on a toy dataset - to present visually the geometrical findings - and
a medical dataset. In the latter, we show how the method comes up with
meaningful explanations both from a medical and mathematical standpoint.
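To make the mechanism described in the abstract concrete, the following is a minimal sketch of a LIME-style local surrogate and of how the kernel width drives the adherence-stability trade-off. It is not the authors' OptiLIME implementation: the Gaussian sampling scheme, the ridge surrogate, the weighted R² used as the adherence score, the coefficient-variance measure of instability, and the function names (explain_instance, adherence_and_stability) are illustrative assumptions, and the feature-selection step of full LIME is omitted.

```python
# Minimal sketch (not the authors' code): a LIME-style weighted linear
# surrogate around one reference point, plus a crude adherence/stability probe.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score


def explain_instance(black_box, x_ref, data_mean, data_std, kernel_width,
                     n_samples=1000, rng=None):
    """Fit a weighted linear surrogate around x_ref.

    Returns (coefficients, adherence), where adherence is the weighted R^2
    of the surrogate on the sampled points (an assumed proxy for how closely
    the explanation resembles the ML model locally).
    """
    rng = np.random.default_rng(rng)
    # 1. Sample points from (an approximation of) the training distribution.
    samples = rng.normal(data_mean, data_std, size=(n_samples, x_ref.size))
    # 2. Weight each sample by its distance from the reference point,
    #    using an RBF kernel governed by kernel_width.
    dists = np.linalg.norm((samples - x_ref) / data_std, axis=1)
    weights = np.exp(-(dists ** 2) / (kernel_width ** 2))
    # 3. Fit a simple linear model to the black-box predictions.
    y = black_box(samples)                      # black_box: (n, d) -> (n,)
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(samples, y, sample_weight=weights)
    adherence = r2_score(y, surrogate.predict(samples), sample_weight=weights)
    return surrogate.coef_, adherence


def adherence_and_stability(black_box, x_ref, data_mean, data_std,
                            kernel_width, n_repeats=20):
    """Repeat the explanation with different random seeds to expose
    instability: small kernel widths tend to give high adherence but noisy
    coefficients, large widths the opposite."""
    runs = [explain_instance(black_box, x_ref, data_mean, data_std,
                             kernel_width, rng=seed)
            for seed in range(n_repeats)]
    coefs = np.array([c for c, _ in runs])
    mean_adherence = float(np.mean([a for _, a in runs]))
    instability = float(np.mean(np.std(coefs, axis=0)))  # lower = more stable
    return mean_adherence, instability
```

Under these assumptions, a practitioner could scan a grid of kernel_width values and keep the largest width (typically the most stable explanations) whose mean adherence still exceeds a chosen threshold; that search over the kernel width is, in spirit, the trade-off that OptiLIME turns into an explicit optimisation.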
Related papers
- Towards Large Language Models with Self-Consistent Natural Language Explanations [11.085839471231552]
Large language models (LLMs) seem to offer an easy path to interpretability.
Yet, studies show that these post-hoc explanations often misrepresent the true decision process.
arXiv Detail & Related papers (2025-06-09T08:06:33Z)
- LLM-based Agent Simulation for Maternal Health Interventions: Uncertainty Estimation and Decision-focused Evaluation [30.334268991701727]
Agent-based simulation is crucial for modeling complex human behavior.
Traditional approaches require extensive domain knowledge and large datasets.
Large language models (LLMs) offer a promising alternative by leveraging broad world knowledge.
arXiv Detail & Related papers (2025-03-25T20:24:47Z)
- MindfulLIME: A Stable Solution for Explanations of Machine Learning Models with Enhanced Localization Precision -- A Medical Image Case Study [0.7373617024876725]
We propose MindfulLIME, a novel algorithm that generates visual explanations using a graph-based pruning algorithm and uncertainty sampling.
Our experimental evaluation, conducted on a widely recognized chest X-ray dataset, confirms MindfulLIME's stability with a 100% success rate.
MindfulLIME improves the localization precision of visual explanations by reducing the distance between the generated explanations and the actual local annotations.
arXiv Detail & Related papers (2025-03-25T14:48:14Z)
- Model-free Methods for Event History Analysis and Efficient Adjustment (PhD Thesis) [55.2480439325792]
This thesis is a series of independent contributions to statistics unified by a model-free perspective.
The first chapter elaborates on how a model-free perspective can be used to formulate flexible methods that leverage prediction techniques from machine learning.
The second chapter studies the concept of local independence, which describes whether the evolution of one process is directly influenced by another.
arXiv Detail & Related papers (2025-02-11T19:24:09Z)
- Using Large Language Models for Expert Prior Elicitation in Predictive Modelling [53.54623137152208]
This study proposes using large language models (LLMs) to elicit expert prior distributions for predictive models.
We compare LLM-elicited and uninformative priors, evaluate whether LLMs truthfully generate parameter distributions, and propose a model selection strategy for in-context learning and prior elicitation.
Our findings show that LLM-elicited prior parameter distributions significantly reduce predictive error compared to uninformative priors in low-data settings.
arXiv Detail & Related papers (2024-11-26T10:13:39Z)
- Uncertainty Estimation of Large Language Models in Medical Question Answering [60.72223137560633]
Large Language Models (LLMs) show promise for natural language generation in healthcare, but risk hallucinating factually incorrect information.
We benchmark popular uncertainty estimation (UE) methods with different model sizes on medical question-answering datasets.
Our results show that current approaches generally perform poorly in this domain, highlighting the challenge of UE for medical applications.
arXiv Detail & Related papers (2024-07-11T16:51:33Z)
- Cycles of Thought: Measuring LLM Confidence through Stable Explanations [53.15438489398938]
Large language models (LLMs) can reach and even surpass human-level accuracy on a variety of benchmarks, but their overconfidence in incorrect responses is still a well-documented failure mode.
We propose a framework for measuring an LLM's uncertainty with respect to the distribution of generated explanations for an answer.
arXiv Detail & Related papers (2024-06-05T16:35:30Z)
- LaPLACE: Probabilistic Local Model-Agnostic Causal Explanations [1.0370398945228227]
We introduce LaPLACE-explainer, designed to provide probabilistic cause-and-effect explanations for machine learning models.
The LaPLACE-Explainer component leverages the concept of a Markov blanket to establish statistical boundaries between relevant and non-relevant features.
Our approach offers causal explanations and outperforms LIME and SHAP in terms of local accuracy and consistency of explained features.
arXiv Detail & Related papers (2023-10-01T04:09:59Z)
- Interpretable Medical Diagnostics with Structured Data Extraction by Large Language Models [59.89454513692417]
Tabular data is often hidden in text, particularly in medical diagnostic reports.
We propose a novel, simple, and effective methodology for extracting structured tabular data from textual medical reports, called TEMED-LLM.
We demonstrate that our approach significantly outperforms state-of-the-art text classification models in medical diagnostics.
arXiv Detail & Related papers (2023-06-08T09:12:28Z)
- Topological Interpretability for Deep-Learning [0.30806551485143496]
Deep learning (DL) models cannot quantify the certainty of their predictions.
This work presents a method to infer prominent features in two DL classification models trained on clinical and non-clinical text.
arXiv Detail & Related papers (2023-05-15T13:38:13Z)
- Near-optimal Offline Reinforcement Learning with Linear Representation: Leveraging Variance Information with Pessimism [65.46524775457928]
Offline reinforcement learning seeks to utilize offline/historical data to optimize sequential decision-making strategies.
We study the statistical limits of offline reinforcement learning with linear model representations.
arXiv Detail & Related papers (2022-03-11T09:00:12Z)
- Scrutinizing XAI using linear ground-truth data with suppressor variables [0.8602553195689513]
Saliency methods rank input features according to some measure of 'importance'.
It has been demonstrated that some saliency methods can highlight features that have no statistical association with the prediction target (suppressor variables).
arXiv Detail & Related papers (2021-11-14T23:02:02Z)
- Locally Interpretable Model Agnostic Explanations using Gaussian Processes [2.9189409618561966]
Local Interpretable Model-Agnostic Explanations (LIME) is a popular technique for explaining the prediction of a single instance.
We propose a Gaussian Process (GP) based variation of locally interpretable models.
We demonstrate that the proposed technique is able to generate faithful explanations using far fewer samples than LIME.
arXiv Detail & Related papers (2021-08-16T05:49:01Z)
- An Information-Theoretic Approach to Personalized Explainable Machine Learning [92.53970625312665]
We propose a simple probabilistic model for the predictions and user knowledge.
We quantify the effect of an explanation by the conditional mutual information between the explanation and prediction.
arXiv Detail & Related papers (2020-03-01T13:06:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.