Do you comply with AI? -- Personalized explanations of learning
algorithms and their impact on employees' compliance behavior
- URL: http://arxiv.org/abs/2002.08777v1
- Date: Thu, 20 Feb 2020 14:55:20 GMT
- Title: Do you comply with AI? -- Personalized explanations of learning
algorithms and their impact on employees' compliance behavior
- Authors: Niklas Kühl, Jodie Lobana, and Christian Meske
- Abstract summary: Personalization of AI explanations may be an instrument to affect compliance and therefore employee task performance.
Our preliminary results already indicate the importance of personalized explanations in industry settings.
- Score: 0.11470070927586014
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Machine learning algorithms are key technological enablers of artificial intelligence (AI). Due to their inherent complexity, these learning algorithms are black boxes that are difficult to comprehend, which in turn influences compliance behavior. Compliance with the recommendations of such artifacts, which can significantly impact employees' task performance, is therefore still subject to research, and personalization of AI explanations seems to be a promising concept in this regard. In our work, we hypothesize that, based on varying backgrounds such as training, domain knowledge, and demographic characteristics, individuals have different understandings of the learning algorithm and hence different mental models of it. Personalization of AI explanations, aligned with individuals' mental models, may thus be an instrument to affect compliance and thereby employee task performance. Our preliminary results already indicate the importance of personalized explanations in industry settings and underline the relevance of this research endeavor.
Related papers
- An Information Bottleneck Characterization of the Understanding-Workload
Tradeoff [15.90243405031747]
Consideration of human factors that impact explanation efficacy is central to explainable AI (XAI) design.
Existing work in XAI has demonstrated a tradeoff between understanding and workload induced by different types of explanations.
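For reference, the classical information bottleneck objective that such a characterization builds on is sketched below; how "understanding" and "workload" map onto its terms is specific to that paper and only assumed here.
```latex
% Classical information bottleneck Lagrangian (Tishby et al.):
% compress the input X into a representation T while keeping T
% predictive of the target Y; \beta trades compression against prediction.
\min_{p(t \mid x)} \; I(X;T) - \beta \, I(T;Y)
```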
arXiv Detail & Related papers (2023-10-11T18:35:26Z)
- Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z)
- Helpful, Misleading or Confusing: How Humans Perceive Fundamental Building Blocks of Artificial Intelligence Explanations [11.667611038005552]
We take a step back from sophisticated predictive algorithms and look into explainability of simple decision-making models.
We aim to assess how people perceive the comprehensibility of their different representations.
This allows us to capture how diverse stakeholders judge the intelligibility of the fundamental concepts from which more elaborate artificial intelligence explanations are built.
arXiv Detail & Related papers (2023-03-02T03:15:35Z)
- Adaptive cognitive fit: Artificial intelligence augmented management of information facets and representations [62.997667081978825]
Explosive growth in big data technologies and artificial intelligence (AI) applications has led to the increasing pervasiveness of information facets.
Information facets, such as equivocality and veracity, can dominate and significantly influence human perceptions of information.
We suggest that artificially intelligent technologies that can adapt information representations to overcome cognitive limitations are necessary.
arXiv Detail & Related papers (2022-04-25T02:47:25Z)
- Human-Algorithm Collaboration: Achieving Complementarity and Avoiding Unfairness [92.26039686430204]
We show that even in carefully designed systems, complementary performance can be elusive.
First, we provide a theoretical framework for modeling simple human-algorithm systems.
Next, we use this model to prove conditions under which complementarity is impossible.
arXiv Detail & Related papers (2022-02-17T18:44:41Z)
- Machine Explanations and Human Understanding [31.047297225560566]
Explanations are hypothesized to improve human understanding of machine learning models.
However, empirical studies have found mixed and even negative results.
We show how human intuitions play a central role in enabling human understanding.
arXiv Detail & Related papers (2022-02-08T19:00:38Z)
- Towards Explainable Artificial Intelligence in Banking and Financial Services [0.0]
We study and analyze the recent work done in Explainable Artificial Intelligence (XAI) methods and tools.
We introduce a novel XAI process, which facilitates producing explainable models while maintaining a high level of learning performance.
We develop a digital dashboard to facilitate interacting with the algorithm results.
arXiv Detail & Related papers (2021-12-14T08:02:13Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
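As a rough illustration of the underlying idea (not the CEILS latent-space method itself), the sketch below greedily searches for a counterfactual of a toy linear classifier; the model, step size, and iteration budget are illustrative assumptions.
```python
# Generic counterfactual-explanation sketch (NOT CEILS): greedily nudge the most
# influential feature until the classifier's prediction flips to the desired outcome.
# The linear "black box", step size, and iteration budget are toy assumptions.
import numpy as np

weights, bias = np.array([1.5, -2.0, 0.5]), -0.2
predict = lambda x: int(weights @ x + bias > 0)   # stand-in for any trained model

def counterfactual(x, target, step=0.1, max_iters=200):
    """Return a nearby input that the model classifies as `target`, or None."""
    x = x.astype(float).copy()
    direction = 1.0 if target == 1 else -1.0      # push decision score up or down
    for _ in range(max_iters):
        if predict(x) == target:
            return x                               # desired outcome reached
        best = int(np.argmax(np.abs(weights)))     # most influential feature
        x[best] += direction * np.sign(weights[best]) * step
    return None                                    # no counterfactual within budget

x0 = np.array([0.2, 0.8, -0.1])
cf = counterfactual(x0, target=1)
print("original:", x0, "->", predict(x0))
print("counterfactual:", cf, "->", predict(cf))
```
Unlike this sketch, which ignores feasibility entirely, CEILS generates counterfactuals as interventions in a latent space so that the suggested changes remain actionable, addressing the feasibility gap the abstract points out.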
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- Ethical behavior in humans and machines -- Evaluating training data quality for beneficial machine learning [0.0]
This study describes new dimensions of data quality for supervised machine learning applications.
The specific objective of this study is to describe how training data can be selected according to ethical assessments of the behavior it originates from.
arXiv Detail & Related papers (2020-08-26T09:48:38Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
- Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience [76.9910678786031]
We propose a novel paradigm of explainable active learning (XAL), by introducing techniques from the recently surging field of explainable AI (XAI) into an Active Learning setting.
Our study shows the benefits of AI explanations as interfaces for machine teaching, supporting trust calibration and enabling rich forms of teaching feedback, as well as potential drawbacks, such as anchoring on the model's judgment and increased cognitive workload.
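A minimal sketch of what such a setup could look like is given below, assuming uncertainty sampling and per-feature contributions of a linear model as the "explanation"; the data, model, and labeling oracle are synthetic placeholders, not the authors' implementation.
```python
# Hedged sketch of an explainable-active-learning loop: each queried instance is
# shown together with a simple local explanation (per-feature contributions of a
# linear model). Data, model, and the labeling oracle are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = (X @ np.array([2.0, -1.0, 0.5, 0.0]) + rng.normal(scale=0.3, size=200) > 0).astype(int)

# Seed set containing both classes; the remainder forms the unlabeled pool.
labeled = list(np.where(y == 1)[0][:5]) + list(np.where(y == 0)[0][:5])
pool = [i for i in range(200) if i not in labeled]
model = LogisticRegression()

for round_ in range(5):
    model.fit(X[labeled], y[labeled])
    probs = model.predict_proba(X[pool])[:, 1]
    idx = int(np.argmin(np.abs(probs - 0.5)))      # uncertainty sampling
    query = pool[idx]
    contrib = model.coef_[0] * X[query]            # local explanation for the annotator
    print(f"round {round_}: query #{query}, p(y=1)={probs[idx]:.2f}, "
          f"contributions={np.round(contrib, 2)}")
    labeled.append(query)                          # oracle label (here: ground truth)
    pool.remove(query)
```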
arXiv Detail & Related papers (2020-01-24T22:52:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.