On Logic-Based Explainability with Partially Specified Inputs
- URL: http://arxiv.org/abs/2306.15803v1
- Date: Tue, 27 Jun 2023 21:09:25 GMT
- Title: On Logic-Based Explainability with Partially Specified Inputs
- Authors: Ramón Béjar, António Morgado, Jordi Planes, and Joao Marques-Silva
- Abstract summary: Missing data is often addressed when training machine learning (ML) models.
But missing data also needs to be addressed when making predictions and when explaining them.
This paper studies the computation of logic-based explanations in the presence of partially specified inputs.
- Score: 1.7587442088965224
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the practical deployment of machine learning (ML) models, missing data
represents a recurring challenge. Missing data is often addressed when training
ML models, but it must also be addressed when making predictions and when
explaining those predictions. Moreover, missing data represents an opportunity:
the inputs of the prediction to be explained are only partially specified.
This paper studies the computation of logic-based explanations in the presence
of partially specified inputs. The paper shows that most of the algorithms
proposed in recent years for computing logic-based explanations can be
generalized to compute explanations for partially specified inputs.
One related result is that the complexity of computing logic-based explanations
remains unchanged. A similar result is proved in the case of logic-based
explainability subject to input constraints. Furthermore, the proposed solution
for computing explanations given partially specified inputs is applied to
classifiers obtained from well-known public datasets, thereby illustrating a
number of novel explainability use cases.
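To make the setting concrete, below is a minimal sketch of the classic deletion-based algorithm for computing one abductive explanation (AXp), adapted so that missing features start out free. This is an illustration under stated assumptions, not the paper's implementation: the `predict`/`domains`/`instance` interface is hypothetical, and prediction invariance is checked by brute-force enumeration over a tiny discrete feature space, whereas in practice each check would be an oracle call (e.g. a SAT/SMT query), which is where the complexity results apply.

```python
from itertools import product

def axp_with_missing(predict, domains, instance):
    """Deletion-based computation of one abductive explanation (AXp)
    for a partially specified input (None marks a missing feature).

    predict  : function mapping a complete feature tuple to a class.
    domains  : list of finite domains, one per feature.
    instance : tuple of feature values, with None for missing ones.
    """
    fixed = {i for i, v in enumerate(instance) if v is not None}

    def invariant(fixed_set, target):
        # Does the prediction stay 'target' for every completion of
        # the features outside fixed_set? (Brute force: toy-sized only.)
        free = [i for i in range(len(instance)) if i not in fixed_set]
        for combo in product(*(domains[i] for i in free)):
            point = list(instance)
            for i, v in zip(free, combo):
                point[i] = v
            if predict(tuple(point)) != target:
                return False
        return True

    # The specified features must already force a single class.
    free0 = [i for i in range(len(instance)) if i not in fixed]
    target = None
    for combo in product(*(domains[i] for i in free0)):
        point = list(instance)
        for i, v in zip(free0, combo):
            point[i] = v
        label = predict(tuple(point))
        if target is None:
            target = label
        elif label != target:
            raise ValueError("specified features do not determine the class")

    # Greedily try to drop each specified feature; keep it only if
    # dropping it would break prediction invariance.
    for i in sorted(fixed):
        if invariant(fixed - {i}, target):
            fixed = fixed - {i}
    return fixed  # subset-minimal set of specified features

# Example: feature 2 is missing, so the explanation can only use
# the specified features 0 and 1 (all names and values illustrative).
clf = lambda x: int(x[0] and x[1])
print(axp_with_missing(clf, [[0, 1]] * 3, (1, 1, None)))  # {0, 1}
```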
Related papers
- Selective Explanations [14.312717332216073]
Amortized explainers train a machine learning model to predict feature attribution scores in a single inference.
Despite their efficiency, amortized explainers can produce inaccurate predictions and misleading explanations.
We propose selective explanations, a novel feature attribution method that detects when amortized explainers generate low-quality explanations.
arXiv Detail & Related papers (2024-05-29T23:08:31Z)
- A Mechanistic Interpretation of Arithmetic Reasoning in Language Models using Causal Mediation Analysis [128.0532113800092]
We present a mechanistic interpretation of Transformer-based LMs on arithmetic questions.
This provides insights into how information related to arithmetic is processed by LMs.
arXiv Detail & Related papers (2023-05-24T11:43:47Z)
- Rethinking Complex Queries on Knowledge Graphs with Neural Link Predictors [58.340159346749964]
We propose a new neural-symbolic method to support end-to-end learning using complex queries with provable reasoning capability.
We develop a new dataset containing ten new types of queries with features that have never been considered.
Our method significantly outperforms previous methods on the new dataset and also surpasses them on the existing dataset.
arXiv Detail & Related papers (2023-04-14T11:35:35Z)
- Learning to Bound Counterfactual Inference in Structural Causal Models from Observational and Randomised Data [64.96984404868411]
We derive a likelihood characterisation for the overall data that leads us to extend a previous EM-based algorithm.
The new algorithm learns to approximate the (unidentifiability) region of model parameters from such mixed data sources.
It delivers interval approximations to counterfactual results, which collapse to points in the identifiable case.
arXiv Detail & Related papers (2022-12-06T12:42:11Z)
- Feature Necessity & Relevancy in ML Classifier Explanations [5.232306238197686]
Given a machine learning (ML) model and a prediction, explanations can be defined as sets of features which are sufficient for the prediction.
It is also critical to understand whether sensitive features can occur in some explanation, or whether a non-interesting feature must occur in all explanations.
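As a hedged illustration of these two queries, the sketch below (not the paper's algorithm; exhaustive enumeration is only workable for toy boolean classifiers) lists all subset-minimal sufficient feature sets; a feature is then relevant if it occurs in some explanation, and necessary if it occurs in every explanation.

```python
from itertools import combinations, product

def minimal_sufficient_sets(predict, n, instance):
    """All subset-minimal feature sets sufficient for the prediction
    on an n-feature boolean classifier (brute force, toy-sized only)."""
    target = predict(instance)

    def sufficient(S):
        # Prediction must be invariant over all completions outside S.
        free = [i for i in range(n) if i not in S]
        for bits in product([0, 1], repeat=len(free)):
            point = list(instance)
            for i, b in zip(free, bits):
                point[i] = b
            if predict(tuple(point)) != target:
                return False
        return True

    minimal = []
    for k in range(n + 1):  # ascending size => supersets get filtered out
        for S in map(set, combinations(range(n), k)):
            if sufficient(S) and not any(m <= S for m in minimal):
                minimal.append(S)
    return minimal

def relevant(i, expls):   # feature occurs in some explanation
    return any(i in S for S in expls)

def necessary(i, expls):  # feature occurs in every explanation
    return all(i in S for S in expls)

# Toy classifier x0 AND x1: features 0 and 1 are necessary, 2 is irrelevant.
expls = minimal_sufficient_sets(lambda x: int(x[0] and x[1]), 3, (1, 1, 0))
print(expls, necessary(0, expls), relevant(2, expls))  # [{0, 1}] True False
```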
arXiv Detail & Related papers (2022-10-27T12:12:45Z)
- Explainability in Process Outcome Prediction: Guidelines to Obtain Interpretable and Faithful Models [77.34726150561087]
In the field of process outcome prediction, we define explainability through the interpretability of the explanations and the faithfulness of the explainability model.
This paper contributes a set of guidelines, named X-MOP, that helps select the appropriate model based on the event log specifications.
arXiv Detail & Related papers (2022-03-30T05:59:50Z)
- LogicInference: A New Dataset for Teaching Logical Inference to seq2seq Models [4.186923466475792]
This paper presents LogicInference, a new dataset to evaluate the ability of models to perform logical inference.
The dataset focuses on inference using propositional logic and a small subset of first-order logic.
We also report results from a collection of machine learning models, establishing an initial baseline on this dataset.
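For a flavor of the inference problems such a dataset targets, here is a tiny truth-table entailment checker for propositional logic; the function-based formula encoding is an illustrative choice, not the dataset's format.

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """Check premises |= conclusion by enumerating all truth
    assignments (formulas are predicates over an assignment dict)."""
    for values in product([False, True], repeat=len(atoms)):
        a = dict(zip(atoms, values))
        if all(p(a) for p in premises) and not conclusion(a):
            return False  # counter-model found
    return True

# Modus ponens: {p -> q, p} |= q
p_implies_q = lambda a: (not a["p"]) or a["q"]
print(entails([p_implies_q, lambda a: a["p"]],
              lambda a: a["q"], ["p", "q"]))  # True
```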
arXiv Detail & Related papers (2022-03-28T21:13:22Z)
- Explaining Reject Options of Learning Vector Quantization Classifiers [6.125017875330933]
We propose to use counterfactual explanations for explaining rejects in machine learning models.
We investigate how to efficiently compute counterfactual explanations of different reject options for an important class of models.
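As a rough sketch of the idea (the reject rule, threshold, and greedy search below are illustrative assumptions, not the paper's efficient method), one can model an LVQ-style reject option via relative similarity to the two nearest prototypes of different classes, and search for a nearby input that is no longer rejected:

```python
import numpy as np

def reject(x, prototypes, labels, theta=0.2):
    """LVQ-style reject option: reject x when the nearest prototypes of
    two different classes are almost equidistant (assumes >= 2 classes)."""
    d = np.linalg.norm(prototypes - x, axis=1)
    order = np.argsort(d)
    best = order[0]
    rival = next(i for i in order[1:] if labels[i] != labels[best])
    rel = (d[rival] - d[best]) / (d[rival] + d[best])  # relative similarity
    return rel < theta

def counterfactual_for_reject(x, prototypes, labels, step=0.05, iters=200):
    """Naive greedy search: nudge x toward its nearest prototype until
    the reject option no longer fires (illustration, not an exact method)."""
    x_cf = x.astype(float)
    for _ in range(iters):
        if not reject(x_cf, prototypes, labels):
            return x_cf  # small change that lifts the reject
        d = np.linalg.norm(prototypes - x_cf, axis=1)
        x_cf = x_cf + step * (prototypes[np.argmin(d)] - x_cf)
    return None  # no counterfactual found within the budget

# An ambiguous point halfway between two prototypes gets rejected;
# the counterfactual moves it just far enough toward one class.
protos = np.array([[0.0, 0.0], [2.0, 0.0]])
x = np.array([1.0, 0.0])
print(reject(x, protos, [0, 1]))                     # True
print(counterfactual_for_reject(x, protos, [0, 1]))  # e.g. [0.77 0.]
```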
arXiv Detail & Related papers (2022-02-15T08:16:10Z)
- Structural Learning of Probabilistic Sentential Decision Diagrams under Partial Closed-World Assumption [127.439030701253]
Probabilistic sentential decision diagrams are a class of structured-decomposable circuits.
We propose a new scheme based on a partial closed-world assumption: data implicitly provide the logical base of the circuit.
Preliminary experiments show that the proposed approach can properly fit training data and generalize well to test data, provided these remain consistent with the underlying logical base.
arXiv Detail & Related papers (2021-07-26T12:01:56Z)
- Quantum Algorithms for Data Representation and Analysis [68.754953879193]
We provide quantum procedures that speed-up the solution of eigenproblems for data representation in machine learning.
The power and practical use of these subroutines are demonstrated through new quantum algorithms, sublinear in the input matrix's size, for principal component analysis, correspondence analysis, and latent semantic analysis.
Results show that the run-time parameters that do not depend on the input's size are reasonable and that the error on the computed model is small, allowing for competitive classification performances.
arXiv Detail & Related papers (2021-04-19T00:41:43Z)
- Score-Based Explanations in Data Management and Machine Learning [0.0]
We consider explanations for query answers in databases, and for results from classification models.
The described approaches are mostly of a causal and counterfactual nature.
arXiv Detail & Related papers (2020-07-24T23:13:27Z)