Evaluating graph-based explanations for AI-based recommender systems
- URL: http://arxiv.org/abs/2407.12357v1
- Date: Wed, 17 Jul 2024 07:28:49 GMT
- Title: Evaluating graph-based explanations for AI-based recommender systems
- Authors: Simon Delarue, Astrid Bertrand, Tiphaine Viard
- Abstract summary: We study the effectiveness of graph-based explanations in improving users' perception of AI-based recommendations.
We find that users perceive graph-based explanations as more usable than designs involving feature importance.
- Score: 1.2499537119440245
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent years have witnessed a rapid growth of recommender systems, providing suggestions in numerous applications with potentially high social impact, such as health or justice. Meanwhile, in Europe, the upcoming AI Act mentions *transparency* as a requirement for critical AI systems in order to "mitigate the risks to fundamental rights". Post-hoc explanations seamlessly align with this goal, and the extensive literature on the subject has produced several forms of such explanations, graphs being one of them. Early studies in visualization demonstrated graphs' ability to improve user understanding, positioning them as potentially ideal explanations. However, it remains unclear how graph-based explanations compare to other explanation designs. In this work, we aim to determine the effectiveness of graph-based explanations in improving users' perception of AI-based recommendations using a mixed-methods approach. We first conduct a qualitative study to collect users' requirements for graph explanations. We then run a larger quantitative study in which we evaluate the influence of various explanation designs, including enhanced graph-based ones, on aspects such as understanding, usability, and curiosity toward the AI system. We find that users perceive graph-based explanations as more usable than designs involving feature importance. However, we also reveal that textual explanations lead to higher objective understanding than graph-based designs. Most importantly, we highlight the strong contrast between participants' expressed preferences for the graph design and their actual ratings of it, which are lower than for the textual design. These findings imply that meeting stakeholders' expressed preferences alone might not guarantee "good" explanations. Therefore, crafting hybrid designs that successfully balance social expectations with downstream performance emerges as a significant challenge.
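To make the compared designs concrete, the following minimal sketch contrasts what a graph-based, a feature-importance, and a textual explanation payload might look like for a single recommendation. It is illustrative only: the entities, relations, and weights are invented for this example, not taken from the study.

```python
# Illustrative sketch: three explanation designs for one recommendation.
# All names (user, movie, features, weights) are hypothetical.
import networkx as nx

# Graph-based design: a small graph linking the user, the recommended item,
# and the shared attributes that connect them.
explanation_graph = nx.Graph()
explanation_graph.add_edge("user:alice", "movie:Inception", relation="recommended")
explanation_graph.add_edge("user:alice", "genre:sci-fi", relation="likes")
explanation_graph.add_edge("movie:Inception", "genre:sci-fi", relation="has_genre")
explanation_graph.add_edge("user:alice", "director:Nolan", relation="watched_before")
explanation_graph.add_edge("movie:Inception", "director:Nolan", relation="directed_by")

# Feature-importance design: the same evidence flattened into scored features.
feature_importance = {"genre:sci-fi": 0.62, "director:Nolan": 0.31, "year:2010": 0.07}

# Textual design: the evidence verbalized as a sentence.
textual = ("Recommended because you like sci-fi films "
           "and have watched other films by this director.")

# Render the graph explanation as edge triples.
for u, v, data in explanation_graph.edges(data=True):
    print(f"{u} --{data['relation']}--> {v}")
```

The paper's finding is that users say they prefer the first payload but objectively understand the third one better, which is why the abstract argues for hybrid designs.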
Related papers
- Narrating Causal Graphs with Large Language Models [1.437446768735628]
This work explores the capability of large pretrained language models to generate text from causal graphs.
The causal reasoning encoded in these graphs can support applications as diverse as healthcare or marketing.
Results suggest that users of generative AI can deploy future applications faster, since similar performance is obtained when training a model with only a few examples.
arXiv Detail & Related papers (2024-03-11T19:19:59Z)
- Explanation Graph Generation via Generative Pre-training over Synthetic Graphs [6.25568933262682]
Explanation graph generation is a significant task that aims to produce a structured explanation graph in response to user input.
Current research commonly fine-tunes a text-based pre-trained language model on a small downstream dataset that is annotated with labeled graphs.
We propose EG3P (Explanation Graph Generation via Generative Pre-training over synthetic graphs), a novel pre-training framework for the explanation graph generation task.
arXiv Detail & Related papers (2023-06-01T13:20:22Z)
- State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z)
- Towards Explanation for Unsupervised Graph-Level Representation Learning [108.31036962735911]
Existing explanation methods focus on supervised settings, e.g., node classification and graph classification, while explanation for unsupervised graph-level representation learning remains unexplored.
In this paper, we advance the Information Bottleneck (IB) principle to tackle the proposed explanation problem for unsupervised graph representations, which leads to a novel principle, *Unsupervised Subgraph Information Bottleneck* (USIB); a generic form of this kind of objective is sketched after this list.
We also theoretically analyze the connection between graph representations and explanatory subgraphs on the label space, which reveals that the robustness of representations benefits the fidelity of explanatory subgraphs.
arXiv Detail & Related papers (2022-05-20T02:50:15Z)
- Unbiased Graph Embedding with Biased Graph Observations [52.82841737832561]
We propose a principled new way for obtaining unbiased representations by learning from an underlying bias-free graph.
Based on this new perspective, we propose two complementary methods for uncovering such an underlying graph.
arXiv Detail & Related papers (2021-10-26T18:44:37Z)
- Graphing else matters: exploiting aspect opinions and ratings in explainable graph-based recommendations [66.83527496838937]
We propose to exploit embeddings extracted from graphs that combine information from ratings and aspect-based opinions expressed in textual reviews.
We then adapt and evaluate state-of-the-art graph embedding techniques over graphs generated from Amazon and Yelp reviews on six domains.
Our approach has the advantage of providing explanations which leverage aspect-based opinions given by users about recommended items.
arXiv Detail & Related papers (2021-07-07T13:57:28Z)
- Graph Learning based Recommender Systems: A Review [111.43249652335555]
Graph Learning based Recommender Systems (GLRS) employ advanced graph learning approaches to model users' preferences and intentions as well as items' characteristics for recommendations.
We provide a systematic review of GLRS by discussing how they extract important knowledge from graph-based representations to improve the accuracy, reliability, and explainability of recommendations.
arXiv Detail & Related papers (2021-05-13T14:50:45Z)
- ExplaGraphs: An Explanation Graph Generation Task for Structured Commonsense Reasoning [65.15423587105472]
We present a new generative and structured commonsense-reasoning task (and an associated dataset) of explanation graph generation for stance prediction.
Specifically, given a belief and an argument, a model has to predict whether the argument supports or counters the belief and also generate a commonsense-augmented graph that serves as a non-trivial, complete, and unambiguous explanation for the predicted stance.
A significant 83% of our graphs contain external commonsense nodes with diverse structures and reasoning depths.
arXiv Detail & Related papers (2021-04-15T17:51:36Z)
- EAGER: Embedding-Assisted Entity Resolution for Knowledge Graphs [4.363674669941547]
We propose a more comprehensive ER approach for knowledge graphs called EAGER (Embedding-Assisted Knowledge Graph Entity Resolution).
We evaluate our approach on 23 benchmark datasets with differently sized and structured knowledge graphs.
arXiv Detail & Related papers (2021-01-15T14:12:10Z)
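For readers unfamiliar with the Information Bottleneck principle that the Unsupervised Subgraph Information Bottleneck (USIB) entry above builds on, the following is a rough sketch of what such an objective typically looks like. It shows the general IB form with assumed symbols, not a quotation from that paper: an explanatory subgraph should retain the information in the input graph that is relevant to the learned representation while compressing away the rest.

```latex
% Generic Information-Bottleneck-style objective for explanatory subgraphs.
% Symbols are assumed for illustration: G is the input graph, H its learned
% representation, G_s a candidate explanatory subgraph, I(.;.) denotes
% mutual information, and beta > 0 trades informativeness off against
% compression.
\[
  \max_{G_s \subseteq G} \; I(H; G_s) \;-\; \beta \, I(G; G_s)
\]
```

Maximizing the first term keeps the subgraph faithful to the representation; the second term penalizes subgraphs that copy too much of the input, so the edges that survive act as the explanation.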
This list is automatically generated from the titles and abstracts of the papers on this site.