Score-Based Explanations in Data Management and Machine Learning: An
Answer-Set Programming Approach to Counterfactual Analysis
- URL: http://arxiv.org/abs/2106.10562v1
- Date: Sat, 19 Jun 2021 19:21:48 GMT
- Title: Score-Based Explanations in Data Management and Machine Learning: An
Answer-Set Programming Approach to Counterfactual Analysis
- Authors: Leopoldo Bertossi
- Abstract summary: We describe some recent approaches to score-based explanations for query answers in databases and outcomes from classification models in machine learning.
Special emphasis is placed on declarative approaches, based on answer-set programming, to the use of counterfactual reasoning for score specification and computation.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We describe some recent approaches to score-based explanations for query
answers in databases and outcomes from classification models in machine
learning. The focus is on work done by the author and collaborators. Special
emphasis is placed on declarative approaches, based on answer-set programming, to
the use of counterfactual reasoning for score specification and computation.
Several examples that illustrate the flexibility of these methods are shown.
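As a rough illustration of the counterfactual-reasoning idea behind these scores (a sketch of the general notion only, not the paper's answer-set programs), the Python snippet below computes a responsibility-style score for a feature value of a classified entity: the value gets score 1/(1+k), where k is the size of the smallest additional set of feature changes (a contingency) after which changing that value flips the classifier's label. The toy classifier and feature domains are assumptions made purely for the example.

from itertools import combinations, product

# Toy binary classifier over categorical features; an assumption for illustration only.
DOMAINS = {"salary": ["low", "high"], "age": ["junior", "senior"], "dept": ["hr", "it"]}

def classify(e):
    return 1 if e["salary"] == "high" and e["age"] == "senior" else 0

def responsibility(entity, feature, classifier=classify, domains=DOMAINS):
    # Score 1/(1+k): k is the size of the smallest contingency set of other
    # features whose values must change first so that some change of `feature`
    # flips the label; 0 if no such contingency exists.
    original = classifier(entity)
    others = [f for f in domains if f != feature]
    for k in range(len(others) + 1):                      # grow the contingency size
        for contingency in combinations(others, k):
            choices = [[v for v in domains[f] if v != entity[f]] for f in contingency]
            for assignment in product(*choices):
                ctx = dict(entity, **dict(zip(contingency, assignment)))
                if classifier(ctx) != original:           # contingency alone flips: skip
                    continue
                if any(classifier(dict(ctx, **{feature: v})) != original
                       for v in domains[feature] if v != entity[feature]):
                    return 1.0 / (1 + k)
    return 0.0

e = {"salary": "high", "age": "senior", "dept": "it"}
print({f: responsibility(e, f) for f in DOMAINS})
# -> {'salary': 1.0, 'age': 1.0, 'dept': 0.0}

Here "salary" and "age" each suffice on their own to flip the outcome (score 1.0), while no combination of changes makes "dept" a counterfactual cause (score 0.0).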
Related papers
- Attribution-Scores in Data Management and Explainable Machine Learning [0.0]
We describe recent research on the use of actual causality in the definition of responsibility scores in databases.
In the case of databases, useful connections with database repairs are illustrated and exploited.
For classification models, the responsibility score is properly extended and illustrated.
arXiv Detail & Related papers (2023-07-31T22:41:17Z)
- From Database Repairs to Causality in Databases and Beyond [0.0]
We describe some recent approaches to score-based explanations for query answers in databases.
Special emphasis is placed on the use of counterfactual reasoning for score specification and computation.
arXiv Detail & Related papers (2023-06-15T04:08:23Z)
- A Mechanistic Interpretation of Arithmetic Reasoning in Language Models using Causal Mediation Analysis [128.0532113800092]
We present a mechanistic interpretation of Transformer-based LMs on arithmetic questions.
This provides insights into how information related to arithmetic is processed by LMs.
arXiv Detail & Related papers (2023-05-24T11:43:47Z)
- Explainable Data-Driven Optimization: From Context to Decision and Back Again [76.84947521482631]
Data-driven optimization uses contextual information and machine learning algorithms to find solutions to decision problems with uncertain parameters.
We introduce a counterfactual explanation methodology tailored to explain solutions to data-driven problems.
We demonstrate our approach by explaining key problems in operations management such as inventory management and routing.
arXiv Detail & Related papers (2023-01-24T15:25:16Z)
- Reasoning about Counterfactuals and Explanations: Problems, Results and Directions [0.0]
These approaches are flexible and modular in that they allow the seamless addition of domain knowledge.
The programs can be used to specify and compute responsibility-based numerical scores as attributive explanations for classification results.
arXiv Detail & Related papers (2021-08-25T01:04:49Z)
- A Framework and Benchmarking Study for Counterfactual Generating Methods on Tabular Data [0.0]
Counterfactual explanations are viewed as an effective way to explain machine learning predictions.
There are already dozens of algorithms aiming to generate such explanations.
A benchmarking study and framework can help practitioners determine which techniques and building blocks best suit their context; a minimal counterfactual-search sketch follows this list.
arXiv Detail & Related papers (2021-07-09T21:06:03Z)
- Interpretable Multi-dataset Evaluation for Named Entity Recognition [110.64368106131062]
We present a general methodology for interpretable evaluation for the named entity recognition (NER) task.
The proposed evaluation method enables us to interpret the differences in models and datasets, as well as the interplay between them.
By making our analysis tool available, we make it easy for future researchers to run similar analyses and drive progress in this area.
arXiv Detail & Related papers (2020-11-13T10:53:27Z)
- A Diagnostic Study of Explainability Techniques for Text Classification [52.879658637466605]
We develop a list of diagnostic properties for evaluating existing explainability techniques.
We compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model's performance and the agreement of its rationales with human ones.
arXiv Detail & Related papers (2020-09-25T12:01:53Z) - Score-Based Explanations in Data Management and Machine Learning [0.0]
We consider explanations for query answers in databases, and for results from classification models.
The described approaches are mostly of a causal and counterfactual nature.
arXiv Detail & Related papers (2020-07-24T23:13:27Z)
- Instance-Based Learning of Span Representations: A Case Study through Named Entity Recognition [48.06319154279427]
We present a method of instance-based learning that learns similarities between spans.
Our method makes it possible to build models with high interpretability without sacrificing performance.
arXiv Detail & Related papers (2020-04-29T23:32:42Z)
- A Revised Generative Evaluation of Visual Dialogue [80.17353102854405]
We propose a revised evaluation scheme for the VisDial dataset.
We measure consensus between answers generated by the model and a set of relevant answers.
We release these sets and code for the revised evaluation scheme as DenseVisDial.
arXiv Detail & Related papers (2020-04-20T13:26:45Z)
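The counterfactual-generation idea referred to in the benchmarking entry above can be sketched in the same toy setting. The Python snippet below runs a naive exhaustive search for a nearest counterfactual, i.e., an instance that changes as few feature values as possible while flipping the prediction; it is not one of the benchmarked algorithms, and the classifier and feature domains are again illustrative assumptions.

from itertools import combinations, product

# Same toy setting as in the earlier sketch, repeated so the snippet runs on its own.
DOMAINS = {"salary": ["low", "high"], "age": ["junior", "senior"], "dept": ["hr", "it"]}

def classify(e):
    return 1 if e["salary"] == "high" and e["age"] == "senior" else 0

def nearest_counterfactual(entity, classifier=classify, domains=DOMAINS):
    # Exhaustively search for an instance that changes as few feature values as
    # possible while flipping the classifier's output; None if no such instance exists.
    original = classifier(entity)
    features = list(domains)
    for k in range(1, len(features) + 1):                 # try 1 change, then 2, ...
        for changed in combinations(features, k):
            options = [[v for v in domains[f] if v != entity[f]] for f in changed]
            for values in product(*options):
                candidate = dict(entity, **dict(zip(changed, values)))
                if classifier(candidate) != original:
                    return candidate
    return None

e = {"salary": "high", "age": "senior", "dept": "it"}
print(nearest_counterfactual(e))
# -> {'salary': 'low', 'age': 'senior', 'dept': 'it'}: one change flips the outcome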