Consciousness in Artificial Intelligence: Insights from the Science of
Consciousness
- URL: http://arxiv.org/abs/2308.08708v3
- Date: Tue, 22 Aug 2023 17:33:15 GMT
- Title: Consciousness in Artificial Intelligence: Insights from the Science of
Consciousness
- Authors: Patrick Butlin, Robert Long, Eric Elmoznino, Yoshua Bengio, Jonathan
Birch, Axel Constant, George Deane, Stephen M. Fleming, Chris Frith, Xu Ji,
Ryota Kanai, Colin Klein, Grace Lindsay, Matthias Michel, Liad Mudrik, Megan
A. K. Peters, Eric Schwitzgebel, Jonathan Simon, Rufin VanRullen
- Abstract summary: This report argues for, and exemplifies, a rigorous and empirically grounded approach to AI consciousness.
We survey several prominent scientific theories of consciousness, including recurrent processing theory, global workspace theory, higher-order theories, predictive processing, and attention schema theory.
Our analysis suggests that no current AI systems are conscious, but also suggests that there are no obvious technical barriers to building AI systems which satisfy these indicators.
- Score: 31.991243430962054
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Whether current or near-term AI systems could be conscious is a topic of
scientific interest and increasing public concern. This report argues for, and
exemplifies, a rigorous and empirically grounded approach to AI consciousness:
assessing existing AI systems in detail, in light of our best-supported
neuroscientific theories of consciousness. We survey several prominent
scientific theories of consciousness, including recurrent processing theory,
global workspace theory, higher-order theories, predictive processing, and
attention schema theory. From these theories we derive "indicator properties"
of consciousness, elucidated in computational terms that allow us to assess AI
systems for these properties. We use these indicator properties to assess
several recent AI systems, and we discuss how future systems might implement
them. Our analysis suggests that no current AI systems are conscious, but also
suggests that there are no obvious technical barriers to building AI systems
which satisfy these indicators.
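The methodology the abstract describes — deriving indicator properties from scientific theories of consciousness and checking AI systems against them — can be pictured as a simple rubric. The sketch below is purely illustrative and not from the paper: the property descriptions are paraphrased, and the per-theory tally is a hypothetical simplification (the report itself treats the indicators qualitatively, not as a numeric score).

```python
# Illustrative sketch (not the paper's method): theory-derived "indicator
# properties" represented as a rubric, tallied per source theory for a
# hypothetical AI system under assessment.

from dataclasses import dataclass


@dataclass
class Indicator:
    theory: str        # source theory, e.g. "Global workspace theory"
    description: str   # computational property to check the system for
    satisfied: bool = False


def assess(indicators):
    """Return, for each theory, the fraction of its indicators satisfied."""
    by_theory = {}
    for ind in indicators:
        by_theory.setdefault(ind.theory, []).append(ind)
    return {
        theory: sum(i.satisfied for i in inds) / len(inds)
        for theory, inds in by_theory.items()
    }


# Paraphrased example indicators; the satisfied flags are invented.
indicators = [
    Indicator("Recurrent processing theory",
              "input modules using algorithmic recurrence", satisfied=True),
    Indicator("Global workspace theory",
              "limited-capacity workspace with global broadcast"),
    Indicator("Higher-order theories",
              "metacognitive monitoring of perceptual states"),
]

print(assess(indicators))
```

This kind of per-theory breakdown mirrors the report's structure, in which each theory contributes its own cluster of indicators rather than a single pass/fail verdict.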
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
  We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
  While AI research has focused on task-level strategies, metacognition remains underdeveloped in AI systems.
  We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
  arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- A Case for AI Consciousness: Language Agents and Global Workspace Theory [0.0]
  We argue that instances of one widely implemented AI architecture, the artificial language agent, might easily be made phenomenally conscious if they are not already.
  Along the way, we articulate an explicit methodology for thinking about how to apply scientific theories of consciousness to artificial systems.
  arXiv Detail & Related papers (2024-10-15T08:50:45Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
  This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act), using insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
  As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
  arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Is artificial consciousness achievable? Lessons from the human brain [0.0]
  We analyse the question of developing artificial consciousness from an evolutionary perspective, taking the evolution of the human brain and its relation to consciousness as a reference model.
  We propose to specify clearly what is common and what differs between AI conscious processing and full human conscious experience.
  arXiv Detail & Related papers (2024-04-18T12:59:44Z)
- AI for Mathematics: A Cognitive Science Perspective [86.02346372284292]
  Mathematics is one of the most powerful conceptual systems developed and used by the human species.
  Rapid progress in AI, particularly propelled by advances in large language models (LLMs), has sparked renewed, widespread interest in building such systems.
  arXiv Detail & Related papers (2023-10-19T02:00:31Z)
- Advancing Perception in Artificial Intelligence through Principles of Cognitive Science [6.637438611344584]
  We focus on the cognitive function of perception: the process of taking in signals from one's surroundings and processing them to understand the environment.
  We present a collection of methods in AI for researchers to build AI systems inspired by cognitive science.
  arXiv Detail & Related papers (2023-10-13T01:21:55Z)
- Suffering Toasters -- A New Self-Awareness Test for AI [0.0]
  We argue that all current intelligence tests are insufficient to establish the existence or lack of intelligence.
  We propose a new approach to testing for artificial self-awareness and outline a possible implementation.
  arXiv Detail & Related papers (2023-06-29T18:58:01Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
  The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
  An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
  We propose a framework for generating and evaluating explanations on the grounds of different cognitive levels of understanding.
  arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- BIASeD: Bringing Irrationality into Automated System Design [12.754146668390828]
  We claim that the future of human-machine collaboration will entail the development of AI systems that model, understand, and possibly replicate human cognitive biases.
  We categorize existing cognitive biases from the perspective of AI systems, identify three broad areas of interest, and outline research directions for the design of AI systems that better understand our own biases.
  arXiv Detail & Related papers (2022-10-01T02:52:38Z)
- Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
  A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
  This work considers a larger list of such principles, focusing on those that concern mostly higher-level and sequential conscious processing.
  The objective of clarifying these particular principles is that they could potentially help us build AI systems benefiting from humans' abilities.
  arXiv Detail & Related papers (2020-11-30T18:29:25Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
  We instantiate the concept of a structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
  This framework aims to provide the tools to build a "mental model" of any AI system so that interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
  arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.