Explaining Recommendation System Using Counterfactual Textual
Explanations
- URL: http://arxiv.org/abs/2303.11160v2
- Date: Thu, 1 Jun 2023 07:40:26 GMT
- Title: Explaining Recommendation System Using Counterfactual Textual
Explanations
- Authors: Niloofar Ranjbar and Saeedeh Momtazi and MohammadMehdi Homayounpour
- Abstract summary: End-users find it easier to trust a system when they understand why it produced a given output.
One method for producing a more explainable output is using counterfactual reasoning.
- Score: 4.318555434063274
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Currently, there is a significant amount of research being conducted in the
field of artificial intelligence to improve the explainability and
interpretability of deep learning models. It has been found that when end-users
understand why a system produced a particular output, they find it easier to
trust that system. Recommender systems are one example of systems for which
great efforts have been made to make the output more explainable. One method for
producing a more explainable output is counterfactual reasoning, which
involves altering minimal features to generate a counterfactual item that
results in changing the output of the system. This process allows the
identification of input features that have a significant impact on the desired
output, leading to effective explanations. In this paper, we present a method
for generating counterfactual explanations for both tabular and textual
features. We evaluated the performance of our proposed method on three
real-world datasets and demonstrated a +5% improvement in finding effective
features (based on model-based measures) compared to the baseline method.
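The abstract does not include code, but the counterfactual idea it describes can be sketched roughly as follows. The snippet below greedily swaps one feature at a time from a contrasting reference item and stops once the model's output flips; the sklearn-style `model` interface, the `reference` item, and the greedy search itself are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

def counterfactual_explanation(model, x, reference, max_changes=5):
    """Greedy sketch: copy feature values from a contrasting reference item
    into x until the model's prediction changes. The features that had to
    change are reported as the explanation (illustrative only)."""
    original_pred = model.predict(x.reshape(1, -1))[0]
    x_cf = x.copy()
    changed = []
    for _ in range(max_changes):
        best_j, best_conf = None, None
        for j in range(len(x)):
            if j in changed:
                continue
            candidate = x_cf.copy()
            candidate[j] = reference[j]  # minimal single-feature edit
            conf = model.predict_proba(candidate.reshape(1, -1))[0][original_pred]
            if best_conf is None or conf < best_conf:
                best_j, best_conf = j, conf
        if best_j is None:
            break  # no features left to change
        x_cf[best_j] = reference[best_j]
        changed.append(best_j)
        if model.predict(x_cf.reshape(1, -1))[0] != original_pred:
            break  # output flipped: minimal change set found
    return changed, x_cf
```

The indices in `changed` play the role of the "effective features" the abstract refers to; handling textual features would additionally require a way to perturb token-level inputs.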
Related papers
- Likelihood as a Performance Gauge for Retrieval-Augmented Generation [78.28197013467157]
We show that likelihoods serve as an effective gauge for language model performance.
We propose two methods that use question likelihood as a gauge for selecting and constructing prompts that lead to better performance.
arXiv Detail & Related papers (2024-11-12T13:14:09Z) - An AI Architecture with the Capability to Explain Recognition Results [0.0]
- An AI Architecture with the Capability to Explain Recognition Results [0.0]
This research focuses on the importance of metrics to explainability and contributes two methods yielding performance gains.
The first method introduces a combination of explainable and unexplainable flows, proposing a metric to characterize explainability of a decision.
The second method compares classic metrics for estimating the effectiveness of neural networks in the system, proposing a new metric that emerges as the leading performer.
arXiv Detail & Related papers (2024-06-13T02:00:13Z) - Explainability for Machine Learning Models: From Data Adaptability to
User Perception [0.8702432681310401]
This thesis explores the generation of local explanations for already deployed machine learning models.
It aims to identify optimal conditions for producing meaningful explanations considering both data and user requirements.
arXiv Detail & Related papers (2024-02-16T18:44:37Z) - Unlocking the Potential of Large Language Models for Explainable
Recommendations [55.29843710657637]
It remains uncertain what impact replacing the explanation generator with the recently emerging large language models (LLMs) would have.
In this study, we propose LLMXRec, a simple yet effective two-stage explainable recommendation framework.
By adopting several key fine-tuning techniques, controllable and fluent explanations can be generated effectively.
arXiv Detail & Related papers (2023-12-25T09:09:54Z) - Complementary Explanations for Effective In-Context Learning [77.83124315634386]
Large language models (LLMs) have exhibited remarkable capabilities in learning from explanations in prompts.
This work aims to better understand the mechanisms by which explanations are used for in-context learning.
arXiv Detail & Related papers (2022-11-25T04:40:47Z) - Robustness and Usefulness in AI Explanation Methods [0.0]
This work summarizes, compares, and contrasts three popular explanation methods: LIME, SmoothGrad, and SHAP.
We evaluate these methods with respect to: robustness, in the sense of sample complexity and stability; understandability, in the sense that provided explanations are consistent with user expectations.
This work concludes that current explanation methods are insufficient, and that putting faith in and adopting these methods may actually be worse than simply not using them.
arXiv Detail & Related papers (2022-03-07T21:30:48Z) - DisCERN:Discovering Counterfactual Explanations using Relevance Features
from Neighbourhoods [1.9706200133168679]
We show how widely adopted feature relevance-based explainers can inform DisCERN to identify the minimum subset of "actionable features".
Our results demonstrate that DisCERN is an effective strategy to minimise actionable changes necessary to create good counterfactual explanations.
arXiv Detail & Related papers (2021-09-13T09:25:25Z) - Search Methods for Sufficient, Socially-Aligned Feature Importance
- Search Methods for Sufficient, Socially-Aligned Feature Importance Explanations with In-Distribution Counterfactuals [72.00815192668193]
Feature importance (FI) estimates are a popular form of explanation, and they are commonly created and evaluated by computing the change in model confidence caused by removing certain input features at test time.
We study several under-explored dimensions of FI-based explanations, providing conceptual and empirical improvements for this form of explanation.
arXiv Detail & Related papers (2021-06-01T20:36:48Z) - Explain and Predict, and then Predict Again [6.865156063241553]
- Explain and Predict, and then Predict Again [6.865156063241553]
We propose ExPred, which uses multi-task learning in the explanation generation phase, effectively trading off explanation and prediction losses.
We conduct an extensive evaluation of our approach on three diverse language datasets.
arXiv Detail & Related papers (2021-01-11T19:36:52Z) - This is not the Texture you are looking for! Introducing Novel
- This is not the Texture you are looking for! Introducing Novel Counterfactual Explanations for Non-Experts using Generative Adversarial Learning [59.17685450892182]
Counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach leads to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems.
arXiv Detail & Related papers (2020-12-22T10:08:05Z) - Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations could help improve user experience and discover system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.