Metric Tools for Sensitivity Analysis with Applications to Neural
Networks
- URL: http://arxiv.org/abs/2305.02368v1
- Date: Wed, 3 May 2023 18:10:21 GMT
- Title: Metric Tools for Sensitivity Analysis with Applications to Neural
Networks
- Authors: Jaime Pizarroso, David Alfaya, José Portela, and Antonio Muñoz
- Abstract summary: Explainable Artificial Intelligence (XAI) aims to provide interpretations for predictions made by Machine Learning models.
In this paper, a theoretical framework is proposed to study sensitivities of ML models using metric techniques.
A complete family of new quantitative metrics called $\alpha$-curves is extracted.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As Machine Learning models are considered for autonomous decisions with
significant social impact, the need for understanding how these models work
rises rapidly. Explainable Artificial Intelligence (XAI) aims to provide
interpretations for predictions made by Machine Learning models, in order to
make the model trustworthy and more transparent for the user. For example,
selecting relevant input variables for the problem directly impacts the model's
ability to learn and make accurate predictions, so obtaining information about
input importance plays a crucial role when training the model. One of the main
XAI techniques to obtain input variable importance is the sensitivity analysis
based on partial derivatives. However, the existing literature on this method
provides no justification for the aggregation metrics used to retrieve
information from the partial derivatives.
In this paper, a theoretical framework is proposed to study sensitivities of
ML models using metric techniques. From this metric interpretation, a complete
family of new quantitative metrics called $\alpha$-curves is extracted. These
$\alpha$-curves provide more detailed information on the importance of the
input variables of a machine learning model than existing XAI methods in the
literature. We demonstrate the effectiveness of the $\alpha$-curves using
synthetic and real datasets, comparing the results against other XAI methods
for variable importance and validating the analysis results with the ground
truth or literature information.
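For illustration, the sketch below shows one plausible way to compute partial-derivative sensitivities of a small neural network and aggregate them into $\alpha$-curve-style summaries. It assumes the $\alpha$-curve of an input variable is the generalized $\alpha$-mean of its absolute partial derivatives over the sample, which is a reading of the abstract rather than the paper's exact definition; the model, data, and $\alpha$ grid are placeholders.

```python
# Hedged sketch: partial-derivative sensitivities and alpha-mean aggregation.
# Assumes the alpha-curve of input j is ( mean_i |df/dx_j(x_i)|^alpha )^(1/alpha);
# the paper's exact definition of alpha-curves may differ.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy regression model and synthetic data (placeholders, not from the paper).
model = nn.Sequential(nn.Linear(3, 16), nn.Tanh(), nn.Linear(16, 1))
X = torch.randn(512, 3)

# Per-sample partial derivatives df/dx via autograd.
X.requires_grad_(True)
y = model(X).sum()                     # summing decouples per-sample scalar outputs
grads = torch.autograd.grad(y, X)[0]   # shape: (n_samples, n_inputs)
sens = grads.abs()

# Alpha-curves: generalized alpha-means of |df/dx_j| over the sample.
alphas = [1.0, 2.0, 4.0, 8.0, 16.0]
for alpha in alphas:
    curve = sens.pow(alpha).mean(dim=0).pow(1.0 / alpha)
    print(f"alpha={alpha:5.1f}  ", [round(v, 4) for v in curve.tolist()])

# As alpha grows, the alpha-mean approaches the maximum sensitivity, so the
# curves interpolate between average-case and worst-case importance.
print("max |df/dx_j|:", [round(v, 4) for v in sens.max(dim=0).values.tolist()])
```

Plotting each input's aggregated values against $\alpha$, rather than printing them, would give curves in the spirit of the $\alpha$-curves described in the abstract.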
Related papers
- Causal Inference Tools for a Better Evaluation of Machine Learning [0.0]
We introduce key statistical methods such as Ordinary Least Squares (OLS) regression, Analysis of Variance (ANOVA) and logistic regression.
The document serves as a guide for researchers and practitioners, detailing how these techniques can provide deeper insights into model behavior, performance, and fairness.
arXiv Detail & Related papers (2024-10-02T10:03:29Z)
- Explainability of Machine Learning Models under Missing Data [2.880748930766428]
Missing data is a prevalent issue that can significantly impair model performance and interpretability.
This paper briefly summarizes the development of the field of missing data and investigates the effects of various imputation methods on the calculation of Shapley values.
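As a rough illustration of the kind of experiment this summary describes, the hedged sketch below (not the paper's code) injects missing values into synthetic data, applies two common imputers, and compares the resulting mean absolute Shapley values. It assumes the shap and scikit-learn packages are available; the data, model, and imputer choices are placeholders.

```python
# Hedged sketch: how imputation choice can shift Shapley-value attributions.
# Synthetic data, model, and imputers are placeholders, not the paper's setup.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import SimpleImputer, KNNImputer

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = X[:, 0] + 2.0 * X[:, 1] + rng.normal(scale=0.1, size=300)

# Inject 20% missingness completely at random.
X_missing = X.copy()
X_missing[rng.random(X.shape) < 0.2] = np.nan

for name, imputer in [("mean", SimpleImputer(strategy="mean")),
                      ("knn", KNNImputer(n_neighbors=5))]:
    X_imp = imputer.fit_transform(X_missing)
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_imp, y)

    # KernelExplainer on a small background sample keeps the sketch fast.
    background = X_imp[:50]
    explainer = shap.KernelExplainer(model.predict, background)
    shap_values = explainer.shap_values(X_imp[:50], nsamples=100)

    print(name, "mean |SHAP| per feature:",
          np.abs(shap_values).mean(axis=0).round(3))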
arXiv Detail & Related papers (2024-06-29T11:31:09Z)
- Unified Explanations in Machine Learning Models: A Perturbation Approach [0.0]
Inconsistencies between XAI and modeling techniques can have the undesirable effect of casting doubt upon the efficacy of these explainability approaches.
We propose a systematic, perturbation-based analysis against a popular, model-agnostic method in XAI, SHapley Additive exPlanations (SHAP).
We devise algorithms to generate relative feature importance in settings of dynamic inference amongst a suite of popular machine learning and deep learning methods, and metrics that allow us to quantify how well explanations generated under the static case hold.
arXiv Detail & Related papers (2024-05-30T16:04:35Z)
- The Frontier of Data Erasure: Machine Unlearning for Large Language Models [56.26002631481726]
Large Language Models (LLMs) are foundational to AI advancements.
LLMs pose risks by potentially memorizing and disseminating sensitive, biased, or copyrighted information.
Machine unlearning emerges as a cutting-edge solution to mitigate these concerns.
arXiv Detail & Related papers (2024-03-23T09:26:15Z)
- Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective [106.92016199403042]
We empirically investigate knowledge transfer from larger to smaller models through a parametric perspective.
We employ sensitivity-based techniques to extract and align knowledge-specific parameters between different large language models.
Our findings highlight the critical factors contributing to the process of parametric knowledge transfer.
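To give a flavor of sensitivity-based parameter selection, the hedged sketch below scores each parameter of a toy network by a simple first-order sensitivity, |theta * dL/dtheta|, and keeps the most sensitive fraction. The paper's extraction and alignment procedure between large language models is substantially more involved; the model, data, and threshold here are placeholders.

```python
# Hedged sketch: first-order parameter sensitivity as a crude proxy for
# "knowledge-specific" parameter selection. Toy model and data only; the
# paper's extraction and alignment between LLMs is far more elaborate.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 4))
x = torch.randn(64, 8)
target = torch.randint(0, 4, (64,))

# One backward pass to populate parameter gradients.
loss = F.cross_entropy(model(x), target)
loss.backward()

# Sensitivity score per parameter: |theta * dL/dtheta| (a common saliency proxy).
scores = torch.cat([(p * p.grad).abs().flatten() for p in model.parameters()])

# Keep the top 5% most sensitive parameters as the "knowledge-specific" subset.
k = max(1, int(0.05 * scores.numel()))
threshold = scores.topk(k).values.min()
mask = scores >= threshold
print(f"selected {mask.sum().item()} of {scores.numel()} parameters "
      f"(threshold={threshold.item():.2e})")
```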
arXiv Detail & Related papers (2023-10-17T17:58:34Z)
- Characterizing the contribution of dependent features in XAI methods [6.990173577370281]
We propose a proxy that modifies the outcome of any XAI feature ranking method, allowing it to account for the dependency among the predictors.
The proposed approach has the advantage of being model-agnostic and makes it simple to calculate the impact of each predictor on the model in the presence of collinearity.
arXiv Detail & Related papers (2023-04-04T11:25:57Z)
- Evaluating Explainability in Machine Learning Predictions through Explainer-Agnostic Metrics [0.0]
We develop six distinct model-agnostic metrics designed to quantify the extent to which model predictions can be explained.
These metrics measure different aspects of model explainability, ranging from local importance and global importance to surrogate predictions.
We demonstrate the practical utility of these metrics on classification and regression tasks, and integrate these metrics into an existing Python package for public use.
arXiv Detail & Related papers (2023-02-23T15:28:36Z)
- Hessian-based toolbox for reliable and interpretable machine learning in physics [58.720142291102135]
We present a toolbox for interpretability and reliability, agnostic of the model architecture.
It provides a notion of the influence of the input data on the prediction at a given test point, an estimation of the uncertainty of the model predictions, and an extrapolation score for the model predictions.
Our work opens the road to the systematic use of interpretability and reliability methods in ML applied to physics and, more generally, science.
arXiv Detail & Related papers (2021-08-04T16:32:59Z)
- ALT-MAS: A Data-Efficient Framework for Active Testing of Machine Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model-under-test using a Bayesian neural network (BNN).
arXiv Detail & Related papers (2021-04-11T12:14:04Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
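As a minimal, hedged sketch of the idea of diversity-constrained perturbations (not the paper's method or architecture), the snippet below optimizes several latent perturbations so that a frozen toy classifier changes its prediction, while a pairwise-similarity penalty keeps the perturbations distinct; the classifier, latent space, and loss weights are all placeholders.

```python
# Hedged sketch: several perturbations optimized to change a frozen toy
# classifier's prediction while a diversity penalty keeps them dissimilar.
# Everything here (model, latent space, weights) is a placeholder.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
clf = nn.Linear(6, 2)                       # frozen stand-in for a classifier head
for p in clf.parameters():
    p.requires_grad_(False)

z = torch.randn(6)                          # "latent code" of one example
target_class = 1                            # class the counterfactuals should reach
K = 4                                       # number of diverse perturbations
deltas = nn.Parameter(0.01 * torch.randn(K, 6))
opt = torch.optim.Adam([deltas], lr=0.05)

for step in range(200):
    opt.zero_grad()
    logits = clf(z + deltas)                # (K, 2)
    cf_loss = F.cross_entropy(logits, torch.full((K,), target_class))
    # Diversity-enforcing term: penalize pairwise cosine similarity.
    d = F.normalize(deltas, dim=1)
    sim = d @ d.t()
    div_loss = (sim - torch.eye(K)).abs().sum() / (K * (K - 1))
    (cf_loss + 0.5 * div_loss).backward()
    opt.step()

print("predicted classes:", clf(z + deltas).argmax(dim=1).tolist())
```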
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- Interpretable Multi-dataset Evaluation for Named Entity Recognition [110.64368106131062]
We present a general methodology for interpretable evaluation for the named entity recognition (NER) task.
The proposed evaluation method enables us to interpret the differences in models and datasets, as well as the interplay between them.
By making our analysis tool available, we make it easy for future researchers to run similar analyses and drive progress in this area.
arXiv Detail & Related papers (2020-11-13T10:53:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.