The Influence of Explainable Artificial Intelligence: Nudging Behaviour or Boosting Capability?
- URL: http://arxiv.org/abs/2210.02407v1
- Date: Wed, 5 Oct 2022 17:28:52 GMT
- Title: The Influence of Explainable Artificial Intelligence: Nudging Behaviour or Boosting Capability?
- Authors: Matija Franklin
- Abstract summary: This article aims to provide a theoretical account and corresponding paradigm for analysing how explainable artificial intelligence (XAI) influences people's behaviour and cognition.
Two notable frameworks for thinking about behaviour change techniques are nudges - aimed at influencing behaviour - and boosts - aimed at fostering capability.
It outlines a method for measuring XAI influence and argues for the benefits of understanding it for optimal, safe and ethical human-AI collaboration.
- Score: 0.456877715768796
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This article aims to provide a theoretical account and corresponding paradigm
for analysing how explainable artificial intelligence (XAI) influences people's
behaviour and cognition. It uses insights from research on behaviour change.
Two notable frameworks for thinking about behaviour change techniques are
nudges - aimed at influencing behaviour - and boosts - aimed at fostering
capability. It proposes that local and concept-based explanations are more
adjacent to nudges, while global and counterfactual explanations are more
adjacent to boosts. It outlines a method for measuring XAI influence and argues
for the benefits of understanding it for optimal, safe and ethical human-AI
collaboration.
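
To make the nudge/boost distinction concrete, here is a minimal, hypothetical sketch (not from the paper): for a toy linear model, a local explanation attributes a single prediction to its features, while a global explanation summarises feature importance over a dataset. All names and numbers are illustrative assumptions.

```python
# Hypothetical sketch (not from the paper): for a toy linear model,
# a "local" explanation attributes one prediction to its features,
# while a "global" explanation summarises behaviour over a dataset.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))      # 200 instances, 3 features
w = np.array([2.0, -1.0, 0.5])     # assumed model weights
baseline = X.mean(axis=0)          # reference point for attributions

def local_explanation(x):
    """Per-instance attribution relative to a baseline: in the paper's
    taxonomy, the explanation type argued to be nudge-adjacent."""
    return w * (x - baseline)

def global_explanation():
    """Dataset-level feature importance: closer to a boost, since it
    conveys how the model behaves overall."""
    return np.mean(np.abs(X * w), axis=0)

print("local :", local_explanation(X[0]))
print("global:", global_explanation())
```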
Related papers
- Bayesian Reinforcement Learning with Limited Cognitive Load [43.19983737333797]
A theory of adaptive behavior should account for the complex interactions between an agent's learning history, decisions, and capacity constraints.
Recent work in computer science has begun to clarify the principles that shape these dynamics by bridging ideas from reinforcement learning, Bayesian decision-making, and rate-distortion theory.
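
As a rough illustration of the rate-distortion bridge mentioned above (an assumption on my part, not the survey's code): a capacity-limited agent can be modelled with a KL-regularised policy proportional to prior(a) * exp(beta * Q(s, a)), where a smaller beta yields a cheaper, more prior-like policy.

```python
# Assumed illustration of the rate-distortion idea, not the survey's code:
# a capacity-limited policy p(a|s) ~ prior(a) * exp(beta * Q(s, a)).
import numpy as np

def capacity_limited_policy(Q, prior, beta):
    """KL-regularised softmax: beta trades expected return against the
    information cost of deviating from a default action prior."""
    logits = np.log(prior) + beta * Q
    p = np.exp(logits - logits.max())   # subtract max for stability
    return p / p.sum()

Q = np.array([1.0, 0.8, 0.1])           # action values in some state
prior = np.full(3, 1.0 / 3.0)           # uniform default policy
for beta in (0.1, 1.0, 10.0):           # increasing cognitive capacity
    print(beta, capacity_limited_policy(Q, prior, beta))
```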
arXiv Detail & Related papers (2023-05-05T03:29:34Z)
- Impact Of Explainable AI On Cognitive Load: Insights From An Empirical Study [0.0]
This study measures cognitive load, task performance, and task time for implementation-independent XAI explanation types using a COVID-19 use case.
We found that these explanation types strongly influence end-users' cognitive load, task performance, and task time.
arXiv Detail & Related papers (2023-04-18T09:52:09Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted toward a more pragmatic approach to explanation, aimed at better user understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Adaptive cognitive fit: Artificial intelligence augmented management of information facets and representations [62.997667081978825]
Explosive growth in big data technologies and artificial intelligence (AI) applications has led to the increasing pervasiveness of information facets.
Information facets, such as equivocality and veracity, can dominate and significantly influence human perceptions of information.
We suggest that artificially intelligent technologies that can adapt information representations to overcome cognitive limitations are necessary.
arXiv Detail & Related papers (2022-04-25T02:47:25Z)
- Towards a Shapley Value Graph Framework for Medical peer-influence [0.9449650062296824]
This paper introduces a new framework that looks deeper into explanations by using a graph representation of feature-to-feature interactions.
It aims to improve the interpretability of black-box Machine Learning (ML) models and to inform interventions.
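
For intuition about what such a framework builds on, here is a hedged sketch of exact Shapley values computed by enumerating coalitions, with a toy value function containing a pairwise interaction term of the kind the graph representation would capture; this is a generic illustration, not the paper's method.

```python
# Generic, hedged sketch of exact Shapley values by coalition enumeration;
# the toy value function has a pairwise interaction term of the kind a
# feature-to-feature graph would represent. Not the paper's framework.
from itertools import combinations
from math import factorial

def shapley_values(n, value):
    """Exact Shapley value of each of n features for a set function `value`."""
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                S = set(S)
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += weight * (value(S | {i}) - value(S))
    return phi

def value(S):                      # toy model output for a feature coalition
    v = 1.0 * (0 in S) + 2.0 * (1 in S)
    if {0, 1} <= S:
        v += 0.5                   # interaction between features 0 and 1
    return v

print(shapley_values(3, value))    # [1.25, 2.25, 0.0]; sums to value({0,1,2})
```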
arXiv Detail & Related papers (2021-12-29T16:24:50Z)
- Active Inference in Robotics and Artificial Agents: Survey and Challenges [51.29077770446286]
We review the state-of-the-art theory and implementations of active inference for state-estimation, control, planning and learning.
We showcase relevant experiments that illustrate its potential in terms of adaptation, generalization and robustness.
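
As a minimal illustration of the state-estimation component reviewed there, the sketch below performs discrete Bayesian belief updating (perception as inference); a full active-inference agent would additionally select actions that minimise expected free energy, which is omitted here. The matrix and observations are made up.

```python
# Made-up toy of the state-estimation piece: discrete Bayesian belief
# updating (perception as inference). A full active-inference agent would
# also choose actions minimising expected free energy, omitted here.
import numpy as np

A = np.array([[0.9, 0.2],          # p(observation | hidden state);
              [0.1, 0.8]])         # rows = observations, columns = states
belief = np.array([0.5, 0.5])      # prior over two hidden states

for obs in [0, 0, 1]:              # an assumed observation sequence
    belief = A[obs] * belief       # likelihood * prior
    belief /= belief.sum()         # normalise to a posterior
    print(belief)
```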
arXiv Detail & Related papers (2021-12-03T12:10:26Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two groups, people with and without an AI background, perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
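
The generic counterfactual idea can be sketched as a search for a minimally perturbed input that flips the model's outcome; note that CEILS itself generates counterfactuals as interventions in a causally structured latent space, which this deliberately simplified example does not model.

```python
# Deliberately simplified counterfactual search (an assumption, not CEILS:
# CEILS intervenes in a causally structured latent space instead).
import numpy as np

w = np.array([0.6, 0.4])               # toy linear score weights

def predict(x):
    """Toy binary classifier: positive outcome when the score clears 1.0."""
    return float(x @ w > 1.0)

def counterfactual(x, step=0.05, max_iter=200):
    """Greedily nudge features along the score gradient until the
    desired outcome is reached; returns None if the search fails."""
    x_cf = x.copy()
    for _ in range(max_iter):
        if predict(x_cf) == 1.0:
            return x_cf
        x_cf = x_cf + step * w         # gradient of the linear score is w
    return None

x = np.array([0.5, 0.5])               # instance with the undesired outcome
print("counterfactual:", counterfactual(x))
```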
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- Cognitive Perspectives on Context-based Decisions and Explanations [0.0]
We show that the Contextual Importance and Utility method for XAI shares an overlap with the current wave of action-oriented predictive representational structures.
This has implications for explainable AI, where the goal is to provide explanations of computer decision-making to a human audience.
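
A rough sketch of Contextual Importance and Utility (CIU) under my reading of the method: contextual importance measures how much of the output range one feature can span as it sweeps its values in the current context, and contextual utility locates the current output within that span. The model and grids are hypothetical.

```python
# Rough sketch of Contextual Importance and Utility (CIU) under my reading:
# importance = share of the output range one feature can span in context;
# utility = where the current output sits within that span. Toy model.
import numpy as np

def model(x):
    return 0.7 * x[0] + 0.3 * x[1] ** 2    # assumed black box, outputs in [0, 1]

def ciu(x, i, grid, out_min=0.0, out_max=1.0):
    outs = []
    for v in grid:                          # sweep feature i, others fixed
        x2 = x.copy()
        x2[i] = v
        outs.append(model(x2))
    cmin, cmax = min(outs), max(outs)
    ci = (cmax - cmin) / (out_max - out_min)
    cu = (model(x) - cmin) / (cmax - cmin) if cmax > cmin else 0.5
    return ci, cu

x = np.array([0.4, 0.6])
print(ciu(x, 0, np.linspace(0.0, 1.0, 11)))  # approximately (0.7, 0.4)
```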
arXiv Detail & Related papers (2021-01-25T15:49:52Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent work aimed at attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.