Designing explainable artificial intelligence with active inference: A
framework for transparent introspection and decision-making
- URL: http://arxiv.org/abs/2306.04025v1
- Date: Tue, 6 Jun 2023 21:38:09 GMT
- Title: Designing explainable artificial intelligence with active inference: A
framework for transparent introspection and decision-making
- Authors: Mahault Albarracin, Inês Hipólito, Safae Essafi Tremblay, Jason G.
Fox, Gabriel René, Karl Friston, Maxwell J. D. Ramstead
- Abstract summary: We discuss how active inference can be leveraged to design explainable AI systems.
We propose an architecture for explainable AI systems using active inference.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper investigates the prospect of developing human-interpretable,
explainable artificial intelligence (AI) systems based on active inference and
the free energy principle. We first provide a brief overview of active
inference, and in particular, of how it applies to the modeling of
decision-making, introspection, as well as the generation of overt and covert
actions. We then discuss how active inference can be leveraged to design
explainable AI systems, namely, by allowing us to model core features of
"introspective" processes and by generating useful, human-interpretable
models of the processes involved in decision-making. We propose an architecture
for explainable AI systems using active inference. This architecture
foregrounds the role of an explicit hierarchical generative model, the
operation of which enables the AI system to track and explain the factors that
contribute to its own decisions, and whose structure is designed to be
interpretable and auditable by human users. We outline how this architecture
can integrate diverse sources of information to make informed decisions in an
auditable manner, mimicking or reproducing aspects of human-like consciousness
and introspection. Finally, we discuss the implications of our findings for
future research in AI, and the potential ethical considerations of developing
AI systems with (the appearance of) introspective capabilities.
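The paper itself contains no code, but the flavor of the proposed architecture can be sketched. Below is a minimal, illustrative single-step active inference agent in Python (the model matrices, preferences, and observation are assumptions for the example, not taken from the paper): it maintains an explicit generative model, scores candidate actions by expected free energy, and records the pragmatic and epistemic contributions to each score as a human-readable explanation trace.

```python
import numpy as np

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum()

# Illustrative generative model (all values are assumptions for the example):
# A[o, s]     likelihood P(o | s)
# B[a][s2, s] transition P(s2 | s, a)
# C[o]        log-preferences over observations
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])
B = [np.eye(2),                                # action 0: stay
     np.array([[0.2, 0.8], [0.8, 0.2]])]       # action 1: switch
C = np.log(np.array([0.75, 0.25]))             # the agent prefers observation 0

# Perception: Bayesian posterior over hidden states given an observation.
prior = np.array([0.5, 0.5])
obs = 1
posterior = A[obs] * prior
posterior /= posterior.sum()

# Action selection: score each action by expected free energy (EFE) and log
# its pragmatic (preference-seeking) and epistemic (information-seeking)
# components, so a human can audit why the action was chosen.
trace = []
G = np.zeros(len(B))
for a, Ba in enumerate(B):
    qs = Ba @ posterior            # predicted state distribution
    qo = A @ qs                    # predicted observation distribution
    pragmatic = qo @ C             # expected log-preference
    ambiguity = sum(qs[s] * entropy(A[:, s]) for s in range(len(qs)))
    epistemic = entropy(qo) - ambiguity   # expected information gain
    G[a] = -(pragmatic + epistemic)
    trace.append(f"action {a}: pragmatic {pragmatic:+.3f}, "
                 f"epistemic {epistemic:+.3f}, EFE {G[a]:+.3f}")

action = int(np.argmin(G))
print("\n".join(trace))
print(f"chosen action: {action} (lowest expected free energy)")
```

In a full system of the kind the paper describes, the generative model would be hierarchical, and the logged decomposition at each level would serve as the auditable record of why a decision was made.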
Related papers
- A Mechanistic Explanatory Strategy for XAI
This paper outlines a mechanistic strategy for explaining the functional organization of deep learning systems.
According to the mechanistic approach, the explanation of opaque AI systems involves identifying mechanisms that drive decision-making.
This research suggests that a systematic approach to studying model organization can reveal elements that simpler (or "more modest") explainability techniques might miss.
arXiv Detail & Related papers (2024-11-02T18:30:32Z)
- Data Analysis in the Era of Generative AI
This paper explores the potential of AI-powered tools to reshape data analysis, focusing on design considerations and challenges.
We explore how the emergence of large language and multimodal models offers new opportunities to enhance various stages of data analysis workflow.
We then examine human-centered design principles that facilitate intuitive interactions, build user trust, and streamline the AI-assisted analysis workflow across multiple apps.
arXiv Detail & Related papers (2024-09-27T06:31:03Z)
- The Phenomenology of Machine: A Comprehensive Analysis of the Sentience of the OpenAI-o1 Model Integrating Functionalism, Consciousness Theories, Active Inference, and AI Architectures
The OpenAI-o1 model is a transformer-based AI trained with reinforcement learning from human feedback (RLHF).
We investigate how RLHF influences the model's internal reasoning processes, potentially giving rise to consciousness-like experiences.
Our findings suggest that the OpenAI-o1 model shows aspects of consciousness, while acknowledging the ongoing debates surrounding AI sentience.
arXiv Detail & Related papers (2024-09-18T06:06:13Z)
- Position Paper: Agent AI Towards a Holistic Intelligence
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z)
- Emergent Explainability: Adding a causal chain to neural network inference
This position paper presents a theoretical framework for enhancing explainable artificial intelligence (xAI) through emergent communication (EmCom).
We explore the novel integration of EmCom into AI systems, offering a paradigm shift from conventional associative relationships between inputs and outputs to a more nuanced, causal interpretation.
The paper discusses the theoretical underpinnings of this approach, its potential broad applications, and its alignment with the growing need for responsible and transparent AI systems.
arXiv Detail & Related papers (2024-01-29T02:28:39Z)
- From DDMs to DNNs: Using process data and models of decision-making to improve human-AI interactions
We argue that artificial intelligence (AI) research would benefit from a stronger focus on insights about how decisions emerge over time.
First, we introduce a well-established computational framework, the drift-diffusion model (DDM), in which decisions emerge from the noisy accumulation of evidence.
Next, we discuss to what extent current approaches in multi-agent AI do or do not incorporate process data and models of decision making.
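For concreteness, the evidence-accumulation idea can be reduced to a few lines of simulation. The following sketch uses illustrative parameter values, not values from the paper: evidence accumulates noisily until it crosses a decision threshold, jointly producing a choice and a reaction time.

```python
import numpy as np

def simulate_ddm(drift, threshold, noise=1.0, dt=1e-3, max_t=5.0, rng=None):
    """Simulate one drift-diffusion trial: evidence accumulates noisily
    until it crosses +threshold (choice 1) or -threshold (choice 0).
    Returns (choice, reaction_time); choice is None on timeout."""
    if rng is None:
        rng = np.random.default_rng()
    x, t = 0.0, 0.0
    while t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if abs(x) >= threshold:
            return (1 if x > 0 else 0), t
    return None, t

rng = np.random.default_rng(42)
trials = [simulate_ddm(drift=0.8, threshold=1.0, rng=rng) for _ in range(1000)]
choices = [c for c, _ in trials if c is not None]
rts = [t for c, t in trials if c is not None]
print(f"P(choice=1) = {np.mean(choices):.2f}, mean RT = {np.mean(rts):.2f}s")
```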
arXiv Detail & Related papers (2023-08-29T11:27:22Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI)
The emphasis of XAI research appears to have shifted toward more pragmatic approaches to explanation that support better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies
We show how to develop interpretable representations of how agents make decisions.
To understand the decision-making processes underlying a set of observed trajectories, we cast policy inference as the inverse of the agents' own online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
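The paper's retrospective estimation procedure is more sophisticated than this, but the basic idea of recovering the weights an agent appears to place on decision factors from its observed accept/reject choices can be illustrated with a toy maximum-likelihood sketch (the data and model here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each observed decision has a feature vector
# (e.g., factors describing an offer) and a binary accept/reject outcome.
n, d = 500, 3
X = rng.standard_normal((n, d))
true_w = np.array([1.5, -0.8, 0.3])          # the agent's (unknown) weights
y = (rng.random(n) < 1 / (1 + np.exp(-X @ true_w))).astype(float)

# Maximum-likelihood estimate of the weights via gradient ascent on the
# Bernoulli log-likelihood of the observed choices.
w = np.zeros(d)
lr = 0.1
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))             # predicted accept probability
    w += lr * X.T @ (y - p) / n              # log-likelihood gradient step

print("recovered weights:", np.round(w, 2))  # approximately true_w
```

Unlike this static sketch, the paper's approach tracks how such perceived effects are updated over time, which is what makes non-stationary and reactionary policies interpretable.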
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
- Counterfactual Explanations as Interventions in Latent Space
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
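CEILS itself generates counterfactuals as interventions in the latent space of a causal generative model; as a simpler sketch of the underlying idea of counterfactual search, here is a generic gradient-based counterfactual against a fixed logistic classifier (the weights, data, and hyperparameters are illustrative assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Illustrative fixed classifier: f(x) = sigmoid(w @ x + b)
w = np.array([2.0, -1.0, 0.5])
b = -0.2

def counterfactual(x, target=1.0, lam=0.1, lr=0.05, steps=2000):
    """Search for x_cf near x such that the classifier output moves toward
    `target`, minimizing (f(x_cf) - target)^2 + lam * ||x_cf - x||^2."""
    x_cf = x.copy()
    for _ in range(steps):
        p = sigmoid(w @ x_cf + b)
        # Gradient of the squared prediction loss plus proximity penalty.
        grad = 2 * (p - target) * p * (1 - p) * w + 2 * lam * (x_cf - x)
        x_cf -= lr * grad
    return x_cf

x = np.array([-1.0, 0.5, 0.0])               # instance classified as 0
print("before:", sigmoid(w @ x + b))         # low probability of class 1
x_cf = counterfactual(x)
print("after :", sigmoid(w @ x_cf + b))      # pushed toward class 1
print("feature changes:", np.round(x_cf - x, 3))
```

CEILS's contribution is to constrain such searches so that the suggested feature changes correspond to feasible interventions, respecting the causal relations among features.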
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- A general framework for scientifically inspired explanations in AI
We instantiate the concept of the structure of scientific explanations as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental model" of any AI system, so that interaction with the user can provide information on demand and come closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.