A systematic review and taxonomy of explanations in decision support and
recommender systems
- URL: http://arxiv.org/abs/2006.08672v1
- Date: Mon, 15 Jun 2020 18:19:20 GMT
- Title: A systematic review and taxonomy of explanations in decision support and
recommender systems
- Authors: Ingrid Nunes and Dietmar Jannach
- Abstract summary: We systematically review the literature on explanations in advice-giving systems.
We derive a novel comprehensive taxonomy of aspects to be considered when designing explanation facilities.
- Score: 13.224071661974596
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the recent advances in the field of artificial intelligence, an
increasing number of decision-making tasks are delegated to software systems. A
key requirement for the success and adoption of such systems is that users must
trust system choices or even fully automated decisions. To achieve this,
explanation facilities have been widely investigated as a means of establishing
trust in these systems since the early years of expert systems. With today's
increasingly sophisticated machine learning algorithms, new challenges in the
context of explanations, accountability, and trust towards such systems
constantly arise. In this work, we systematically review the literature on
explanations in advice-giving systems. This is a family of systems that
includes recommender systems, one of the most successful classes of
advice-giving software in practice. We investigate the purposes of explanations
as well as how they are generated, presented to users, and evaluated. As a
result, we derive a novel comprehensive taxonomy of aspects to be considered
when designing explanation facilities for current and future decision support
systems. The taxonomy includes a variety of different facets, such as
explanation objective, responsiveness, content and presentation. Moreover, we
identify several challenges that remain unaddressed, for example fine-grained
issues in how explanations are presented and how explanation facilities are
evaluated.
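The taxonomy's facets can be read as dimensions of a design space for explanation facilities. As a minimal sketch, the facets named in the abstract (objective, responsiveness, content, presentation) could be modeled as a small data structure; the specific enum values and field names below are illustrative assumptions, not the paper's actual categories:

```python
from dataclasses import dataclass
from enum import Enum


class Objective(Enum):
    """Hypothetical explanation objectives (illustrative, not from the paper)."""
    TRANSPARENCY = "transparency"
    TRUST = "trust"
    PERSUASION = "persuasion"
    EFFICIENCY = "efficiency"


class Responsiveness(Enum):
    """Whether the explanation is fixed or adapts to user interaction."""
    STATIC = "static"            # same explanation shown to every user
    INTERACTIVE = "interactive"  # user can drill down or ask follow-ups


@dataclass
class ExplanationDesign:
    """One point in the design space spanned by the taxonomy's facets."""
    objective: Objective
    responsiveness: Responsiveness
    content: str       # e.g. "similar users liked this item"
    presentation: str  # e.g. "natural language", "visualization"


# Example design decision for a recommender system's explanation facility:
design = ExplanationDesign(
    objective=Objective.TRUST,
    responsiveness=Responsiveness.INTERACTIVE,
    content="similar users liked this item",
    presentation="natural language",
)
print(design.objective.value)  # -> trust
```

Structuring the facets this way makes each design decision explicit and comparable across systems, which is the practical use a taxonomy like this enables.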
Related papers
- Recommender Systems in the Era of Large Language Models (LLMs) [62.0129013439038]
Large Language Models (LLMs) have revolutionized the fields of Natural Language Processing (NLP) and Artificial Intelligence (AI)
We conduct a comprehensive review of LLM-empowered recommender systems from various aspects including Pre-training, Fine-tuning, and Prompting.
arXiv Detail & Related papers (2023-07-05T06:03:40Z)
- A Domain-Agnostic Approach for Characterization of Lifelong Learning Systems [128.63953314853327]
"Lifelong Learning" systems are capable of 1) Continuous Learning, 2) Transfer and Adaptation, and 3) Scalability.
We show that this suite of metrics can inform the development of varied and complex Lifelong Learning systems.
arXiv Detail & Related papers (2023-01-18T21:58:54Z)
- Video Surveillance System Incorporating Expert Decision-making Process: A Case Study on Detecting Calving Signs in Cattle [5.80793470875286]
In this study, we examine the framework of a video surveillance AI system that presents the reasoning behind predictions by incorporating experts' decision-making processes with rich domain knowledge of the notification target.
In our case study, we designed a system for detecting signs of calving in cattle based on the proposed framework and evaluated the system through a user study with people involved in livestock farming.
arXiv Detail & Related papers (2023-01-10T12:06:49Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted toward more pragmatic explanation approaches that support better user understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Learning Classifier Systems for Self-Explaining Socio-Technical-Systems [0.0]
In socio-technical settings, operators are increasingly assisted by decision support systems.
To be accepted by and engage efficiently with operators, decision support systems need to be able to provide explanations regarding the reasoning behind specific decisions.
We present a template of seven questions to assess application-specific explainability needs and demonstrate their usage in an interview-based case study for a manufacturing scenario.
arXiv Detail & Related papers (2022-07-01T08:13:34Z)
- Learning Physical Concepts in Cyber-Physical Systems: A Case Study [72.74318982275052]
We provide an overview of the current state of research regarding methods for learning physical concepts in time series data.
We also analyze the most important methods from the current state of the art using the example of a three-tank system.
arXiv Detail & Related papers (2021-11-28T14:24:52Z)
- Fairness On The Ground: Applying Algorithmic Fairness Approaches to Production Systems [4.288137349392433]
This paper presents an example of applying algorithmic fairness approaches to complex production systems within the context of a large technology company.
We discuss how we disentangle normative questions of product and policy design from empirical questions of system implementation.
We also present an approach for answering questions of the latter sort, which allows us to measure how machine learning systems and human labelers are making these tradeoffs.
arXiv Detail & Related papers (2021-03-10T16:42:20Z)
- Recommender Systems for Configuration Knowledge Engineering [55.41644538483948]
We show how recommender systems can support knowledge base development and maintenance processes.
We report the results of empirical studies which show the importance of user-centered configuration knowledge organization.
arXiv Detail & Related papers (2021-02-16T12:29:54Z)
- An Overview of Recommender Systems and Machine Learning in Feature Modeling and Configuration [55.67505546330206]
We give an overview of a potential new line of research on applying recommender systems and machine learning techniques to feature modeling and configuration.
In this paper, we give examples of such applications and discuss future research issues.
arXiv Detail & Related papers (2021-02-12T17:21:36Z)
- Explanation Ontology: A Model of Explanations for User-Centered AI [3.1783442097247345]
Explanations have often been added to AI systems in a non-principled, post-hoc manner.
With greater adoption of these systems and emphasis on user-centric explainability, there is a need for a structured representation that treats explainability as a primary consideration.
We design an explanation ontology to model both the role of explanations, accounting for the system and user attributes in the process, and the range of different literature-derived explanation types.
arXiv Detail & Related papers (2020-10-04T03:53:35Z)
- Foundations of Explainable Knowledge-Enabled Systems [3.7250420821969827]
We present a historical overview of explainable artificial intelligence systems.
We focus on knowledge-enabled systems, spanning the expert systems, cognitive assistants, semantic applications, and machine learning domains.
We propose new definitions for explanations and explainable knowledge-enabled systems.
arXiv Detail & Related papers (2020-03-17T04:18:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.