Explainability in Music Recommender Systems
- URL: http://arxiv.org/abs/2201.10528v1
- Date: Tue, 25 Jan 2022 18:32:11 GMT
- Title: Explainability in Music Recommender Systems
- Authors: Darius Afchar, Alessandro B. Melchiorre, Markus Schedl, Romain
Hennequin, Elena V. Epure, Manuel Moussallam
- Abstract summary: We discuss how explainability can be addressed in the context of Music Recommender Systems (MRSs).
MRSs are often quite complex and optimized for recommendation accuracy.
We show how explainability components can be integrated within an MRS and in what form explanations can be provided.
- Score: 69.0506502017444
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The most common way to listen to recorded music nowadays is via streaming
platforms which provide access to tens of millions of tracks. To assist users
in effectively browsing these large catalogs, the integration of Music
Recommender Systems (MRSs) has become essential. Current real-world MRSs are
often quite complex and optimized for recommendation accuracy. They combine
several building blocks based on collaborative filtering and content-based
recommendation. This complexity can hinder the ability to explain
recommendations to end users, which is particularly important for
recommendations perceived as unexpected or inappropriate. While pure
recommendation performance often correlates with user satisfaction,
explainability has a positive impact on other factors such as trust and
forgiveness, which are ultimately essential to maintain user loyalty.
In this article, we discuss how explainability can be addressed in the
context of MRSs. We provide perspectives on how explainability could improve
music recommendation algorithms and enhance user experience. First, we review
common dimensions and goals of recommenders' explainability and in general of
eXplainable Artificial Intelligence (XAI), and elaborate on the extent to which
these apply -- or need to be adapted -- to the specific characteristics of
music consumption and recommendation. Then, we show how explainability
components can be integrated within an MRS and in what form explanations can be
provided. Since the evaluation of explanation quality is decoupled from pure
accuracy-based evaluation criteria, we also discuss requirements and strategies
for evaluating explanations of music recommendations. Finally, we describe the
current challenges for introducing explainability within a large-scale
industrial music recommender system and provide research perspectives.
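To make the idea of an explainability component concrete, below is a minimal, hypothetical sketch (our illustration, not the authors' architecture): a toy content-based recommender wrapped with a post-hoc component that justifies each recommendation by the most similar track in the user's listening history. The function names and the cosine-similarity heuristic are assumptions for illustration only.

```python
import numpy as np

def recommend(user_history, item_embeddings, k=3):
    """Toy recommender: rank items by similarity to the user's mean profile."""
    profile = item_embeddings[user_history].mean(axis=0)
    scores = item_embeddings @ profile
    scores[user_history] = -np.inf            # never re-recommend known tracks
    return np.argsort(-scores)[:k]

def explain(rec_item, user_history, item_embeddings):
    """Post-hoc component: justify a recommendation by the closest history track."""
    sims = item_embeddings[user_history] @ item_embeddings[rec_item]
    anchor = user_history[int(np.argmax(sims))]
    return f"Track {rec_item} was recommended because you listened to track {anchor}."

rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 16))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)  # unit norm: dot = cosine
history = [3, 17, 42]
for item in recommend(history, emb):
    print(explain(item, history, emb))
```

Note that the explanation component here is decoupled from the recommender, which mirrors the post-hoc integration option discussed in the article; explanations can also be produced by interpretable-by-design models.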
Related papers
- User-Controllable Recommendation via Counterfactual Retrospective and Prospective Explanations [96.45414741693119]
We present a user-controllable recommender system that seamlessly integrates explainability and controllability.
By providing both retrospective and prospective explanations through counterfactual reasoning, the system lets users customize their control over it.
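As a rough illustration of retrospective and prospective counterfactual explanations (a sketch under our own assumptions, not the paper's method), the snippet below searches for a single past interaction whose removal flips the top recommendation: its presence explains the current result retrospectively, and its removal predicts the alternative prospectively. The co-occurrence recommender is a made-up stand-in.

```python
# Made-up co-occurrence recommender (stand-in only).
CO_OCCUR = {1: [10, 11], 2: [11, 12], 3: [13, 10]}

def recommend_fn(history):
    votes = {}
    for h in history:
        for cand in CO_OCCUR.get(h, []):
            votes[cand] = votes.get(cand, 0) + 1
    return sorted(votes, key=lambda c: (-votes[c], c))

def counterfactual_explanation(history):
    """Retrospective: 'you got R because of h'. Prospective: 'drop h, get R_new'."""
    current = recommend_fn(history)[0]
    for h in history:
        alternative = recommend_fn([x for x in history if x != h])[0]
        if alternative != current:
            return (f"You got {current} because you interacted with {h}; "
                    f"without {h} you would get {alternative} instead.")
    return f"No single interaction explains {current} on its own."

print(counterfactual_explanation([1, 2, 3]))
```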
arXiv Detail & Related papers (2023-08-02T01:13:36Z)
- DRIFT: A Federated Recommender System with Implicit Feedback on the Items [0.0]
DRIFT is a federated architecture for recommender systems that uses implicit feedback.
Our learning model is based on SAROS, a recent algorithm for recommendation with implicit feedback.
Our algorithm is secure, and participants in our federated system cannot guess the interactions made by the user.
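A minimal sketch of the federated pattern implied here, assuming a plain FedAvg-style round rather than the authors' SAROS-based protocol or their security mechanism: each client computes an update from its private clicks, and only parameter deltas (never the interactions themselves) reach the server.

```python
import numpy as np

def local_update(global_items, private_clicks, lr=0.1):
    """One client step: nudge clicked-item vectors toward the local user vector.

    Only the parameter delta leaves the device; the raw clicks stay local.
    """
    user = global_items[private_clicks].mean(axis=0)
    delta = np.zeros_like(global_items)
    for i in private_clicks:
        delta[i] += lr * (user - global_items[i])
    return delta

rng = np.random.default_rng(1)
items = rng.normal(size=(50, 8))
clients = [[1, 2, 3], [2, 4], [5, 6, 7]]   # private histories, never transmitted
deltas = [local_update(items, c) for c in clients]
items += np.mean(deltas, axis=0)           # server aggregates updates only
```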
arXiv Detail & Related papers (2023-04-17T13:12:33Z)
- Multimodal Recommender Systems: A Survey [50.23505070348051]
Multimodal Recommender System (MRS) has attracted much attention from both academia and industry recently.
In this paper, we give a comprehensive survey of MRS models, mainly from a technical perspective.
To provide access to more details of the surveyed papers, such as implementation code, we open-source a repository.
arXiv Detail & Related papers (2023-02-08T05:12:54Z)
- Reinforced Path Reasoning for Counterfactual Explainable Recommendation [10.36395995374108]
We propose a novel Counterfactual Explainable Recommendation (CERec) model to generate item attribute-based counterfactual explanations.
We reduce the huge search space with an adaptive path sampler that exploits the rich context information of a given knowledge graph.
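For intuition only, here is a toy path sampler over a hand-made knowledge graph; it walks uniformly at random, whereas CERec's sampler is adaptive (learned), so treat this purely as a sketch of the search-space-reduction idea.

```python
import random

# Tiny knowledge graph: head -> [(relation, tail), ...] (illustration only).
KG = {
    "song_A": [("genre", "rock"), ("artist", "band_X")],
    "band_X": [("origin", "UK"), ("era", "1990s")],
    "rock":   [("parent_genre", "popular_music")],
}

def sample_paths(start, n_paths=5, max_hops=2, seed=0):
    """Uniform random walks; CERec instead learns where to walk."""
    random.seed(seed)
    paths = []
    for _ in range(n_paths):
        node, path = start, []
        for _ in range(max_hops):
            edges = KG.get(node)
            if not edges:
                break
            rel, tail = random.choice(edges)
            path.append((node, rel, tail))
            node = tail
        if path:
            paths.append(path)
    return paths   # candidate attribute chains to perturb in a counterfactual search

for p in sample_paths("song_A"):
    print(p)
```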
arXiv Detail & Related papers (2022-07-14T05:59:58Z)
- Counterfactual Explainable Recommendation [22.590877963169103]
We propose Counterfactual Explainable Recommendation (CountER), which takes the insights of counterfactual reasoning from causal inference for explainable recommendation.
CountER seeks simple (low complexity) and effective (high strength) explanations for the model decision.
Results show that our model generates more accurate and effective explanations than state-of-the-art explainable recommendation models.
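A hedged sketch of the low-complexity / high-strength trade-off described above, using brute-force search instead of CountER's relaxed optimization: find the smallest set of item aspects whose removal drops the recommendation score by a margin; that set is the explanation. The aspect names and the linear scorer are our assumptions.

```python
from itertools import combinations
import numpy as np

def counterfactual_explanation(item_aspects, user_weights, margin=0.5, max_size=2):
    """Smallest set of aspect removals whose score drop exceeds `margin`."""
    base = float(item_aspects @ user_weights)
    for size in range(1, max_size + 1):          # prefer simpler explanations
        for subset in combinations(range(len(item_aspects)), size):
            perturbed = item_aspects.copy()
            perturbed[list(subset)] = 0.0        # "remove" these aspects
            if base - float(perturbed @ user_weights) >= margin:
                return subset                    # "recommended for these aspects"
    return None

aspects = np.array([0.9, 0.1, 0.7])   # item quality on [sound, lyrics, tempo]
weights = np.array([0.8, 0.2, 0.4])   # user's aspect preferences
print(counterfactual_explanation(aspects, weights))
```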
arXiv Detail & Related papers (2021-08-24T06:37:57Z)
- Learning to Ask Appropriate Questions in Conversational Recommendation [49.31942688227828]
We propose the Knowledge-Based Question Generation System (KBQG), a novel framework for conversational recommendation.
KBQG models a user's preference at a finer granularity by identifying the most relevant relations from a structured knowledge graph.
Finally, accurate recommendations can be generated in fewer conversational turns.
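As a simplified illustration of picking "the most relevant relation" (an entropy heuristic of our own, not KBQG's learned relevance), the sketch below asks about the relation whose values best split the remaining candidate items.

```python
from collections import Counter
from math import log2

# Candidate items and their knowledge-graph attributes (illustration only).
CANDIDATES = {
    "t1": {"genre": "rock", "mood": "calm"},
    "t2": {"genre": "rock", "mood": "happy"},
    "t3": {"genre": "jazz", "mood": "calm"},
    "t4": {"genre": "pop",  "mood": "happy"},
}

def best_relation(candidates):
    """Ask about the relation whose value distribution has maximal entropy."""
    def entropy(relation):
        counts = Counter(attrs[relation] for attrs in candidates.values())
        n = sum(counts.values())
        return -sum(c / n * log2(c / n) for c in counts.values())
    return max({r for attrs in candidates.values() for r in attrs}, key=entropy)

relation = best_relation(CANDIDATES)
print(f"System asks: 'Which {relation} do you prefer?'")   # here: genre
```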
arXiv Detail & Related papers (2021-05-11T03:58:10Z)
- ELIXIR: Learning from User Feedback on Explanations to Improve Recommender Models [26.11434743591804]
We devise a human-in-the-loop framework, called ELIXIR, where user feedback on explanations is leveraged for pairwise learning of user preferences.
ELIXIR leverages feedback on pairs of recommendations and explanations to learn user-specific latent preference vectors.
Our framework is instantiated using generalized graph recommendation via Random Walk with Restart.
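Since the framework is instantiated on Random Walk with Restart (RWR), here is a minimal RWR scorer on a toy graph; the user-specific preference vectors that ELIXIR learns from explanation feedback are omitted, and all data is made up.

```python
import numpy as np

def rwr_scores(adj, seed_nodes, restart=0.15, iters=50):
    """Random Walk with Restart: visit probabilities starting from seed nodes."""
    P = adj / adj.sum(axis=0, keepdims=True)   # column-stochastic transitions
    r = np.zeros(adj.shape[0])
    r[seed_nodes] = 1.0 / len(seed_nodes)
    s = r.copy()
    for _ in range(iters):
        s = (1 - restart) * P @ s + restart * r
    return s                                   # rank unseen items by s

# 5-node user-item graph (symmetric adjacency, illustration only)
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 0, 1, 0],
              [1, 0, 0, 1, 1],
              [0, 1, 1, 0, 0],
              [0, 0, 1, 0, 0]], dtype=float)
print(rwr_scores(A, seed_nodes=[0]).round(3))
```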
arXiv Detail & Related papers (2021-02-15T13:43:49Z)
- Fairness-Aware Explainable Recommendation over Knowledge Graphs [73.81994676695346]
We analyze different groups of users according to their level of activity, and find that bias exists in recommendation performance between different groups.
We show that inactive users may be more susceptible to receiving unsatisfactory recommendations, due to insufficient training data for the inactive users.
We propose a fairness constrained approach via re-ranking to mitigate this problem in the context of explainable recommendation over knowledge graphs.
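The paper's constraint balances recommendation quality between active and inactive user groups; as a deliberately generic stand-in for the re-ranking mechanism, here is a sketch that reserves top-k slots for a protected group while otherwise keeping relevance order. The group labels and quota are illustrative assumptions, not the authors' formulation.

```python
def fair_rerank(ranked, groups, min_protected=2, k=5):
    """Greedy re-rank: keep relevance order, but guarantee that at least
    `min_protected` of the top-k items come from the protected group."""
    protected = [i for i in ranked if groups[i] == "protected"]
    out = []
    for slot in range(k):
        taken = sum(1 for i in out if groups[i] == "protected")
        if min_protected - taken >= k - slot:          # must take protected now
            pick = next(i for i in protected if i not in out)
        else:                                          # take best remaining item
            pick = next(i for i in ranked if i not in out)
        out.append(pick)
    return out

ranked = ["a", "b", "c", "d", "e", "f"]                # by relevance, best first
groups = {"a": "maj", "b": "maj", "c": "maj", "d": "protected",
          "e": "maj", "f": "protected"}
print(fair_rerank(ranked, groups))                     # ['a', 'b', 'c', 'd', 'f']
```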
arXiv Detail & Related papers (2020-06-03T05:04:38Z)
- Interacting with Explanations through Critiquing [40.69540222716043]
We present a technique that learns to generate personalized explanations of recommendations from review texts.
We show that human users significantly prefer these explanations over those produced by state-of-the-art techniques.
Our work's most important innovation is that it allows users to react to a recommendation by critiquing the textual explanation.
arXiv Detail & Related papers (2020-05-22T09:03:06Z)
- Hybrid Deep Embedding for Recommendations with Dynamic Aspect-Level Explanations [60.78696727039764]
We propose a novel model called Hybrid Deep Embedding (HDE) for aspect-based explainable recommendation.
The main idea of HDE is to learn dynamic embeddings of users and items for rating prediction.
As the aspect preference/quality of users/items is learned automatically, HDE is able to capture the impact of aspects that are not mentioned in reviews of a user or an item.
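A toy rendering of this idea, with made-up dimensions and a plain linear scorer instead of HDE's actual architecture: the rating combines a latent match with an aspect-level match, and because the aspect vectors span all aspects, even aspects unmentioned in reviews can influence the score and serve as explanations.

```python
import numpy as np

def predict_rating(user_emb, item_emb, user_aspect_pref, item_aspect_quality):
    """Toy HDE-style scorer: latent match plus aspect-level match.

    Since the aspect vectors cover ALL aspects, an aspect can influence the
    score even if no review by this user (or of this item) mentions it.
    """
    latent = user_emb @ item_emb
    aspect = user_aspect_pref @ item_aspect_quality
    top_aspect = int(np.argmax(user_aspect_pref * item_aspect_quality))
    return latent + aspect, top_aspect

rng = np.random.default_rng(2)
u, v = rng.normal(size=8), rng.normal(size=8)
pref = np.array([0.7, 0.1, 0.2])   # user's inferred aspect preferences
qual = np.array([0.6, 0.9, 0.3])   # item's inferred aspect qualities
score, top_aspect = predict_rating(u, v, pref, qual)
print(f"predicted rating {score:.2f}; explain via aspect #{top_aspect}")
```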
arXiv Detail & Related papers (2020-01-18T13:16:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.