Context-aware feature attribution through argumentation
- URL: http://arxiv.org/abs/2310.16157v1
- Date: Tue, 24 Oct 2023 20:02:02 GMT
- Title: Context-aware feature attribution through argumentation
- Authors: Jinfeng Zhong, Elsa Negre
- Abstract summary: We define a novel feature attribution framework called Context-Aware Feature Attribution Through Argumentation (CA-FATA).
Our framework harnesses the power of argumentation by treating each feature as an argument that can either support, attack or neutralize a prediction.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Feature attribution is a fundamental task in both machine learning and data
analysis; it involves determining the contribution of individual features or
variables to a model's output. This process helps identify the most important
features for predicting an outcome. The history of feature attribution methods
can be traced back to Generalized Additive Models (GAMs), which extend linear
regression models by incorporating non-linear relationships between dependent
and independent variables. In recent years, gradient-based methods and
surrogate models have been applied to unravel complex Artificial Intelligence
(AI) systems, but these methods have limitations. GAMs tend to achieve lower
accuracy, gradient-based methods can be difficult to interpret, and surrogate
models often suffer from stability and fidelity issues. Furthermore, most
existing methods do not consider users' contexts, which can significantly
influence their preferences. To address these limitations and advance the
current state-of-the-art, we define a novel feature attribution framework
called Context-Aware Feature Attribution Through Argumentation (CA-FATA). Our
framework harnesses the power of argumentation by treating each feature as an
argument that can either support, attack or neutralize a prediction.
Additionally, CA-FATA formulates feature attribution as an argumentation
procedure, and each computation has explicit semantics, which makes it
inherently interpretable. CA-FATA also easily integrates side information, such
as users' contexts, resulting in more accurate predictions.
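To make the tripolar idea concrete, here is a minimal, hypothetical sketch of argumentation-style feature attribution: each feature becomes an argument whose stance toward a prediction (support, attack, or neutral) is derived from its signed contribution, and a user context re-weights the arguments. The polarity rule, the aggregation, and every name below (`FeatureArgument`, `contextualize`, the threshold) are illustrative assumptions, not the paper's actual argumentation semantics.

```python
# Illustrative sketch only: the stance rule and aggregation below are
# our own assumptions, not CA-FATA's formal semantics.
from dataclasses import dataclass

@dataclass
class FeatureArgument:
    name: str
    value: float   # signed feature contribution
    weight: float  # importance; context may re-scale it

def polarity(arg: FeatureArgument, threshold: float = 0.1) -> int:
    """Tripolar stance: +1 supports, -1 attacks, 0 neutral."""
    signed = arg.value * arg.weight
    return 1 if signed > threshold else (-1 if signed < -threshold else 0)

def contextualize(args, context):
    """Re-weight arguments with side information such as a user context."""
    return [FeatureArgument(a.name, a.value, a.weight * context.get(a.name, 1.0))
            for a in args]

def score(args) -> float:
    """Aggregate argument strengths into a single prediction score."""
    return sum(polarity(a) * abs(a.value) * a.weight for a in args)

# Usage: attribute a hypothetical recommendation score to feature-arguments.
features = [FeatureArgument("genre_match", 0.8, 0.9),
            FeatureArgument("release_age", -0.5, 0.4),
            FeatureArgument("runtime", 0.05, 0.7)]
user_context = {"genre_match": 1.2}  # this user cares more about genre
adjusted = contextualize(features, user_context)
for a in adjusted:
    print(a.name, {1: "supports", -1: "attacks", 0: "neutral"}[polarity(a)])
print("aggregated score:", round(score(adjusted), 3))
```

Boosting the weight of `genre_match` through the context dictionary strengthens that supporting argument, which is the sense in which side information can shift both the prediction and its explanation.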
Related papers
- DISCO: DISCovering Overfittings as Causal Rules for Text Classification Models [6.369258625916601]
Post-hoc interpretability methods fail to fully capture a model's decision-making process.
Our paper introduces DISCO, a novel method for discovering global, rule-based explanations.
DISCO supports interactive explanations, enabling human inspectors to distinguish spurious causes in the rule-based output.
arXiv Detail & Related papers (2024-11-07T12:12:44Z)
- Influence Functions for Scalable Data Attribution in Diffusion Models [52.92223039302037]
Diffusion models have led to significant advancements in generative modelling.
Yet their widespread adoption poses challenges regarding data attribution and interpretability.
In this paper, we aim to help address such challenges by developing an influence functions framework.
arXiv Detail & Related papers (2024-10-17T17:59:02Z) - When factorization meets argumentation: towards argumentative explanations [0.0]
We propose a novel model that combines factorization-based methods with argumentation frameworks (AFs)
Our framework seamlessly incorporates side information, such as user contexts, leading to more accurate predictions.
arXiv Detail & Related papers (2024-05-13T19:16:28Z) - IGANN Sparse: Bridging Sparsity and Interpretability with Non-linear Insight [4.010646933005848]
IGANN Sparse is a novel machine learning model from the family of generalized additive models.
It promotes sparsity through a non-linear feature selection process during training.
This ensures interpretability through improved model sparsity without sacrificing predictive performance.
arXiv Detail & Related papers (2024-03-17T22:44:36Z) - Causal Feature Selection via Transfer Entropy [59.999594949050596]
Causal discovery aims to identify causal relationships between features with observational data.
We introduce a new causal feature selection approach that relies on forward and backward feature selection procedures.
We provide theoretical guarantees on the regression and classification errors for both the exact and the finite-sample cases (a minimal sketch of the forward step appears after this list).
arXiv Detail & Related papers (2023-10-17T08:04:45Z) - MACE: An Efficient Model-Agnostic Framework for Counterfactual
Explanation [132.77005365032468]
We propose a novel framework, Model-Agnostic Counterfactual Explanation (MACE).
In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
Experiments on public datasets validate its effectiveness, showing better validity, sparsity, and proximity.
arXiv Detail & Related papers (2022-05-31T04:57:06Z) - Path Integrals for the Attribution of Model Uncertainties [0.18899300124593643]
We present a novel algorithm that relies on in-distribution curves connecting a feature vector to some counterfactual counterpart.
We validate our approach on benchmark image datasets of varying resolution and show that it significantly simplifies interpretability (a simplified path-attribution sketch appears after this list).
arXiv Detail & Related papers (2021-07-19T11:07:34Z) - Model-agnostic and Scalable Counterfactual Explanations via
Reinforcement Learning [0.5729426778193398]
We propose a deep reinforcement learning approach that transforms the optimization procedure into an end-to-end learnable process.
Our experiments on real-world data show that our method is model-agnostic, relying only on feedback from model predictions.
arXiv Detail & Related papers (2021-06-04T16:54:36Z) - Generative Counterfactuals for Neural Networks via Attribute-Informed
Perturbation [51.29486247405601]
We design a framework to generate counterfactuals for raw data instances with the proposed Attribute-Informed Perturbation (AIP).
By utilizing generative models conditioned with different attributes, counterfactuals with desired labels can be obtained effectively and efficiently.
Experimental results on real-world texts and images demonstrate the effectiveness, sample quality, and efficiency of our framework.
arXiv Detail & Related papers (2021-01-18T08:37:13Z) - Interpretable Multi-dataset Evaluation for Named Entity Recognition [110.64368106131062]
We present a general methodology for interpretable evaluation for the named entity recognition (NER) task.
The proposed evaluation method enables us to interpret the differences in models and datasets, as well as the interplay between them.
By making our analysis tool available, we make it easy for future researchers to run similar analyses and drive progress in this area.
arXiv Detail & Related papers (2020-11-13T10:53:27Z) - Probabilistic Case-based Reasoning for Open-World Knowledge Graph
Completion [59.549664231655726]
A case-based reasoning (CBR) system solves a new problem by retrieving "cases" that are similar to the given problem.
In this paper, we demonstrate that such a system is achievable for reasoning in knowledge bases (KBs).
Our approach predicts attributes for an entity by gathering reasoning paths from similar entities in the KB.
arXiv Detail & Related papers (2020-10-07T17:48:12Z)
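For the transfer-entropy entry above, the following is a minimal sketch of the forward step under assumptions of our own: lag-1 dynamics, a Gaussian approximation of transfer entropy computed from a partial correlation, and marginal screening against a fixed threshold. The paper's actual estimator, its conditioning on already-selected features, and its backward pass are not reproduced here.

```python
# Hedged sketch: Gaussian, lag-1 transfer entropy used for marginal
# forward screening. The threshold and all names are our assumptions.
import numpy as np

def gaussian_te(x: np.ndarray, y: np.ndarray) -> float:
    """Lag-1 transfer entropy X -> Y under a Gaussian assumption:
    TE = -0.5 * ln(1 - partial_corr(Y_t, X_{t-1} | Y_{t-1})**2)."""
    a, b, c = y[1:], x[:-1], y[:-1]              # Y_t, X_{t-1}, Y_{t-1}
    r = np.corrcoef(np.vstack([a, b, c]))
    r_ab, r_ac, r_bc = r[0, 1], r[0, 2], r[1, 2]
    denom = np.sqrt((1 - r_ac**2) * (1 - r_bc**2))
    p = (r_ab - r_ac * r_bc) / denom if denom > 1e-12 else 0.0
    p = float(np.clip(p, -0.999999, 0.999999))
    return -0.5 * np.log(1 - p**2)

def forward_select(X: np.ndarray, y: np.ndarray, min_te: float = 0.01):
    """Marginal screening: keep features whose TE into the target exceeds
    min_te, in decreasing order. (A fuller method would re-estimate TE
    conditioned on the already-selected set.)"""
    te = {j: gaussian_te(X[:, j], y) for j in range(X.shape[1])}
    return [j for j in sorted(te, key=te.get, reverse=True) if te[j] >= min_te]

# Usage on synthetic data where only feature 0 drives the target (lag 1).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = np.zeros(500)
y[1:] = 0.8 * X[:-1, 0] + 0.1 * rng.normal(size=499)
print("selected features:", forward_select(X, y))  # expected: [0]
```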
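For the path-integral entry above, the sketch below integrates model gradients along a curve from a counterfactual baseline to the input. It uses a straight-line path and a toy logistic model, so it reduces to integrated-gradients-style attribution; the in-distribution curves that are the paper's key ingredient are not reproduced, and all names here are illustrative.

```python
# Hedged sketch: path attribution along a straight line from a baseline
# to the input; the toy model and step count are our own assumptions.
import numpy as np

W = np.array([1.5, -2.0, 0.5])  # toy weights standing in for any model

def model(x: np.ndarray) -> float:
    """Toy logistic predictor; any differentiable model could be used."""
    return 1.0 / (1.0 + np.exp(-x @ W))

def grad(x: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Central-difference gradient of the model output w.r.t. x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (model(x + d) - model(x - d)) / (2 * eps)
    return g

def path_attribution(x, baseline, steps: int = 64) -> np.ndarray:
    """Midpoint Riemann sum approximating the path integral of gradients."""
    total = np.zeros_like(x)
    for k in range(steps):
        alpha = (k + 0.5) / steps
        total += grad(baseline + alpha * (x - baseline))  # straight path
    return (x - baseline) * total / steps

x = np.array([1.0, 0.5, -1.0])
baseline = np.zeros(3)  # stands in for a counterfactual counterpart
attr = path_attribution(x, baseline)
print("attributions:", np.round(attr, 4))
# Completeness sanity check: attributions should sum to f(x) - f(baseline).
print(round(float(attr.sum()), 3), "vs", round(model(x) - model(baseline), 3))
```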
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.