Suffering Toasters -- A New Self-Awareness Test for AI
- URL: http://arxiv.org/abs/2306.17258v2
- Date: Fri, 7 Jul 2023 07:00:22 GMT
- Title: Suffering Toasters -- A New Self-Awareness Test for AI
- Authors: Ira Wolfson
- Abstract summary: We argue that all current intelligence tests are insufficient to point to the existence or lack of intelligence.
We propose a new approach to test for artificial self-awareness and outline a possible implementation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A widely accepted definition of intelligence in the context of Artificial
Intelligence (AI) still eludes us. Due to our exceedingly rapid development of
AI paradigms, architectures, and tools, the prospect of naturally arising AI
consciousness seems more likely than ever. In this paper, we claim that all
current intelligence tests are insufficient to point to the existence or lack
of intelligence as humans intuitively perceive it. We draw from ideas
in the philosophy of science, psychology, and other areas of research to
provide a clearer definition of the problems of artificial intelligence,
self-awareness, and agency. We furthermore propose a new heuristic approach to
test for artificial self-awareness and outline a possible implementation.
Finally, we discuss some of the questions that arise from this new heuristic,
be they philosophical or implementation-oriented.
Related papers
- Bio-inspired AI: Integrating Biological Complexity into Artificial Intelligence [0.0]
The pursuit of creating artificial intelligence mirrors our longstanding fascination with understanding our own intelligence.
Recent advances in AI hold promise, but singular approaches often fall short in capturing the essence of intelligence.
This paper explores how fundamental principles from biological computation can guide the design of truly intelligent systems.
arXiv Detail & Related papers (2024-11-22T02:55:39Z)
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that AI's shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z)
- On a Functional Definition of Intelligence [0.0]
Without an agreed-upon definition of intelligence, asking "is this system intelligent?" is an untestable question.
Most work on precisely capturing what we mean by "intelligence" has come from the fields of philosophy, psychology, and cognitive science.
We present an argument for a purely functional, black-box definition of intelligence, distinct from how that intelligence is actually achieved.
arXiv Detail & Related papers (2023-12-15T05:46:49Z)
- AI for Mathematics: A Cognitive Science Perspective [86.02346372284292]
Mathematics is one of the most powerful conceptual systems developed and used by the human species.
Rapid progress in AI, particularly propelled by advances in large language models (LLMs), has sparked renewed, widespread interest in building such systems.
arXiv Detail & Related papers (2023-10-19T02:00:31Z)
- Advancing Perception in Artificial Intelligence through Principles of Cognitive Science [6.637438611344584]
We focus on the cognitive function of perception, which is the process of taking signals from one's surroundings as input and processing them to understand the environment.
We present a collection of methods in AI for researchers to build AI systems inspired by cognitive science.
arXiv Detail & Related papers (2023-10-13T01:21:55Z)
- Reflective Artificial Intelligence [2.7412662946127755]
Many important qualities that a human mind would previously have brought to such tasks are utterly absent in AI.
One core feature that humans bring to tasks is reflection.
Yet this capability is utterly missing from current mainstream AI.
In this paper we ask what reflective AI might look like.
arXiv Detail & Related papers (2023-01-25T20:50:26Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Empowering Things with Intelligence: A Survey of the Progress, Challenges, and Opportunities in Artificial Intelligence of Things [98.10037444792444]
We show how AI can empower the IoT to make it faster, smarter, greener, and safer.
First, we present progress in AI research for IoT from four perspectives: perceiving, learning, reasoning, and behaving.
Finally, we summarize some promising applications of AIoT that are likely to profoundly reshape our world.
arXiv Detail & Related papers (2020-11-17T13:14:28Z)
- Future Trends for Human-AI Collaboration: A Comprehensive Taxonomy of AI/AGI Using Multiple Intelligences and Learning Styles [95.58955174499371]
We describe various aspects of multiple human intelligences and learning styles, which may have an impact on a variety of AI problem domains.
Future AI systems will be able not only to communicate with human users and each other, but also to efficiently exchange knowledge and wisdom.
arXiv Detail & Related papers (2020-08-07T21:00:13Z)
- Dynamic Cognition Applied to Value Learning in Artificial Intelligence [0.0]
Several researchers in the area are trying to develop a robust, beneficial, and safe concept of artificial intelligence.
It is of utmost importance that artificial intelligent agents have their values aligned with human values.
A possible approach to this problem would be to use theoretical models such as SED.
arXiv Detail & Related papers (2020-05-12T03:58:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.