On Interpretability and Similarity in Concept-Based Machine Learning
- URL: http://arxiv.org/abs/2102.12723v1
- Date: Thu, 25 Feb 2021 07:57:28 GMT
- Title: On Interpretability and Similarity in Concept-Based Machine Learning
- Authors: Léonard Kwuida and Dmitry I. Ignatov
- Abstract summary: We discuss how notions from cooperative game theory can be used to assess the contribution of individual attributes in classification and clustering processes in concept-based machine learning.
To address the third question, we present some ideas on how to reduce the number of attributes using similarities in large contexts.
- Score: 2.3986080077861787
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine Learning (ML) provides important techniques for classification and
prediction. Most of these are black-box models for their users and do not provide
decision-makers with an explanation. For the sake of transparency and more valid
decisions, the need to develop explainable/interpretable ML methods is gaining
importance. Several questions need to be addressed: How does an ML procedure
derive the class of a particular entity? Why does a particular clustering emerge
from a particular unsupervised ML procedure? What can we do if the number of
attributes is very large? What are the possible reasons for the mistakes made on
concrete cases and by concrete models?
For binary attributes, Formal Concept Analysis (FCA) offers techniques in
terms of intents of formal concepts, and thus provides plausible reasons for
model predictions. However, from the interpretable machine learning viewpoint,
we still need to provide decision-makers with the importance of individual
attributes for the classification of a particular object, which may facilitate
explanations by experts in domains with high-cost errors, such as medicine and
finance.
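
As a rough illustration of the FCA machinery referred to above (not code from the paper), the two derivation operators on a binary formal context can be written in a few lines; the toy medical context and its object and attribute names below are purely hypothetical.

```python
# Sketch of the two FCA derivation operators on a binary formal context.
# The toy context (objects x attributes) and its names are hypothetical.

context = {
    "patient1": {"fever", "cough"},
    "patient2": {"fever", "headache"},
    "patient3": {"fever", "cough", "headache"},
}

def intent(objects, ctx):
    """Attributes shared by all objects in the given set (A -> A')."""
    rows = [ctx[g] for g in objects]
    return set.intersection(*rows) if rows else set()

def extent(attributes, ctx):
    """Objects that possess every attribute in the given set (B -> B')."""
    return {g for g, attrs in ctx.items() if attributes <= attrs}

# A formal concept is a pair (A, B) with intent(A) = B and extent(B) = A;
# its intent B can serve as a plausible reason for grouping the objects in A.
A = extent({"fever", "cough"}, context)
B = intent(A, context)
print(sorted(A), sorted(B))  # ['patient1', 'patient3'] ['cough', 'fever']
```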
We discuss how notions from cooperative game theory can be used to assess the
contribution of individual attributes in classification and clustering
processes in concept-based machine learning. To address the third question, we
present some ideas on how to reduce the number of attributes using similarities
in large contexts.
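
A minimal sketch of the game-theoretic idea, assuming attribute importance is measured by a Shapley value over coalitions of attributes; the characteristic function `v` below is a hypothetical stand-in (e.g. the quality of a classifier restricted to a subset of attributes), not the paper's own definition.

```python
# Sketch: exact Shapley values for attributes, given a characteristic function
# v(coalition). Exponential enumeration, so only suitable for a handful of
# attributes; v below is a placeholder, not the paper's formulation.
from itertools import combinations
from math import factorial

def shapley_values(attributes, v):
    n = len(attributes)
    phi = {a: 0.0 for a in attributes}
    for a in attributes:
        others = [b for b in attributes if b != a]
        for k in range(len(others) + 1):
            for coalition in combinations(others, k):
                S = frozenset(coalition)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[a] += weight * (v(S | {a}) - v(S))
    return phi

# Hypothetical characteristic function: the "worth" of an attribute subset,
# e.g. the accuracy of a classifier restricted to those attributes.
def v(S):
    worth = {"fever": 0.4, "cough": 0.3, "headache": 0.1}
    return sum(worth[a] for a in S)

print(shapley_values(["fever", "cough", "headache"], v))
# For this additive v, each attribute's Shapley value equals its own worth.
```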
Related papers
- Even-if Explanations: Formal Foundations, Priorities and Complexity [18.126159829450028]
We show that both linear and tree-based models are strictly more interpretable than neural networks.
We introduce a preference-based framework that enables users to personalize explanations based on their preferences.
arXiv Detail & Related papers (2024-01-17T11:38:58Z)
- Pyreal: A Framework for Interpretable ML Explanations [51.14710806705126]
Pyreal is a system for generating a variety of interpretable machine learning explanations.
Pyreal converts data and explanations between the feature spaces expected by the model, relevant explanation algorithms, and human users.
Our studies demonstrate that Pyreal generates more useful explanations than existing systems.
arXiv Detail & Related papers (2023-12-20T15:04:52Z)
- What Makes a Good Explanation?: A Harmonized View of Properties of Explanations [22.752085594102777]
Interpretability provides a means for humans to verify aspects of machine learning (ML) models.
Different contexts require explanations with different properties.
There is a lack of standardization when it comes to properties of explanations.
arXiv Detail & Related papers (2022-11-10T16:04:28Z)
- Feature Necessity & Relevancy in ML Classifier Explanations [5.232306238197686]
Given a machine learning (ML) model and a prediction, explanations can be defined as sets of features which are sufficient for the prediction.
It is also critical to understand whether sensitive features can occur in some explanation, or whether a non-interesting feature must occur in all explanations.
arXiv Detail & Related papers (2022-10-27T12:12:45Z)
- Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering [124.16250115608604]
We present Science Question Answering (SQA), a new benchmark that consists of 21k multimodal multiple choice questions with a diverse set of science topics and annotations of their answers with corresponding lectures and explanations.
We show that SQA improves the question answering performance by 1.20% in few-shot GPT-3 and 3.99% in fine-tuned UnifiedQA.
Our analysis further shows that language models, similar to humans, benefit from explanations to learn from fewer data and achieve the same performance with just 40% of the data.
arXiv Detail & Related papers (2022-09-20T07:04:24Z)
- An Additive Instance-Wise Approach to Multi-class Model Interpretation [53.87578024052922]
Interpretable machine learning offers insights into what factors drive a certain prediction of a black-box system.
Existing methods mainly focus on selecting explanatory input features, which follow either locally additive or instance-wise approaches.
This work exploits the strengths of both methods and proposes a global framework for learning local explanations simultaneously for multiple target classes.
arXiv Detail & Related papers (2022-07-07T06:50:27Z)
- The Need for Interpretable Features: Motivation and Taxonomy [69.07189753428553]
We claim that the term "interpretable feature" is not specific nor detailed enough to capture the full extent to which features impact the usefulness of machine learning explanations.
In this paper, we motivate and discuss three key lessons: 1) more attention should be given to what we refer to as the interpretable feature space, or the state of features that are useful to domain experts taking real-world actions.
arXiv Detail & Related papers (2022-02-23T19:19:14Z)
- Contrastive Explanations for Model Interpretability [77.92370750072831]
We propose a methodology to produce contrastive explanations for classification models.
Our method is based on projecting model representation to a latent space.
Our findings shed light on the ability of label-contrastive explanations to provide a more accurate and finer-grained interpretability of a model's decision.
arXiv Detail & Related papers (2021-03-02T00:36:45Z)
- Explainable Image Classification with Evidence Counterfactual [0.0]
We introduce SEDC as a model-agnostic instance-level explanation method for image classification.
For a given image, SEDC searches a small set of segments that, in case of removal, alters the classification.
We compare SEDC(-T) with popular feature importance methods such as LRP, LIME and SHAP, and we describe how the mentioned importance ranking issues are addressed.
arXiv Detail & Related papers (2020-04-16T08:02:48Z)
- An Information-Theoretic Approach to Personalized Explainable Machine Learning [92.53970625312665]
We propose a simple probabilistic model for the predictions and user knowledge.
We quantify the effect of an explanation by the conditional mutual information between the explanation and prediction.
arXiv Detail & Related papers (2020-03-01T13:06:29Z)