Do Zombies Understand? A Choose-Your-Own-Adventure Exploration of Machine Cognition
- URL: http://arxiv.org/abs/2403.00499v2
- Date: Thu, 11 Jul 2024 15:39:31 GMT
- Authors: Ariel Goldstein, Gabriel Stanovsky
- Abstract summary: We argue that opponents in this debate hold different definitions for understanding, and particularly differ in their view on the role of consciousness.
We ask whether $Z$ is capable of understanding, and show that different schools of thought within seminal AI research seem to answer this question differently.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in LLMs have sparked a debate on whether they understand text. In this position paper, we argue that opponents in this debate hold different definitions for understanding, and particularly differ in their view on the role of consciousness. To substantiate this claim, we propose a thought experiment involving an open-source chatbot $Z$ which excels on every possible benchmark, seemingly without subjective experience. We ask whether $Z$ is capable of understanding, and show that different schools of thought within seminal AI research seem to answer this question differently, uncovering their terminological disagreement. Moving forward, we propose two distinct working definitions for understanding which explicitly acknowledge the question of consciousness, and draw connections with a rich literature in philosophy, psychology and neuroscience.
Related papers
- The Cognitive Revolution in Interpretability: From Explaining Behavior to Interpreting Representations and Algorithms [3.3653074379567096]
Mechanistic interpretability (MI) has emerged as a distinct research area studying the features and implicit algorithms learned by foundation models such as large language models.
We argue that current methods are ripe to facilitate a transition in deep learning interpretation echoing the "cognitive revolution" in 20th-century psychology.
We propose a taxonomy mirroring key parallels in computational neuroscience to describe two broad categories of MI research.
arXiv Detail & Related papers (2024-08-11T20:50:16Z) - SOK-Bench: A Situated Video Reasoning Benchmark with Aligned Open-World Knowledge [60.76719375410635]
We propose a new benchmark (SOK-Bench) consisting of 44K questions and 10K situations with instance-level annotations depicted in the videos.
The reasoning process is required to understand and apply situated knowledge and general knowledge for problem-solving.
We generate associated question-answer pairs and reasoning processes, followed by manual review for quality assurance.
arXiv Detail & Related papers (2024-05-15T21:55:31Z) - An Essay concerning machine understanding [0.0]
This essay describes how we could go about constructing a machine capable of understanding.
To understand a word is to know and be able to work with the underlying concepts for which it is an indicator.
arXiv Detail & Related papers (2024-05-03T04:12:43Z) - A Review on Objective-Driven Artificial Intelligence [0.0]
Humans have an innate ability to understand context, nuances, and subtle cues in communication.
Humans possess a vast repository of common-sense knowledge that helps us make logical inferences and predictions about the world.
Machines lack this innate understanding and often struggle with making sense of situations that humans find trivial.
arXiv Detail & Related papers (2023-08-20T02:07:42Z) - Large Language Models are In-Context Semantic Reasoners rather than Symbolic Reasoners [75.85554779782048]
Large Language Models (LLMs) have excited the natural language and machine learning community over recent years.
Despite numerous successful applications, the underlying mechanism of such in-context capabilities remains unclear.
In this work, we hypothesize that the learned semantics of language tokens do most of the heavy lifting during the reasoning process.
arXiv Detail & Related papers (2023-05-24T07:33:34Z) - The zoo of Fairness metrics in Machine Learning [62.997667081978825]
In recent years, the problem of addressing fairness in Machine Learning (ML) and automatic decision-making has attracted a lot of attention.
A plethora of definitions of fairness in ML have been proposed, each considering a different notion of what constitutes a "fair decision" in situations that impact individuals in the population.
In this work, we try to make some order out of this zoo of definitions.
arXiv Detail & Related papers (2021-06-01T13:19:30Z) - Bi-directional Cognitive Thinking Network for Machine Reading Comprehension [18.690332722963568]
We propose a novel Bi-directional Cognitive Knowledge Framework (BCKF) for reading comprehension.
It aims to simulate two ways of thinking in the brain to answer questions, including reverse thinking and inertial thinking.
arXiv Detail & Related papers (2020-10-20T13:56:30Z) - Bongard-LOGO: A New Benchmark for Human-Level Concept Learning and Reasoning [78.13740873213223]
Bongard problems (BPs) were introduced as an inspirational challenge for visual cognition in intelligent systems.
We propose a new benchmark Bongard-LOGO for human-level concept learning and reasoning.
arXiv Detail & Related papers (2020-10-02T03:19:46Z) - Improving Machine Reading Comprehension with Contextualized Commonsense Knowledge [62.46091695615262]
We aim to extract commonsense knowledge to improve machine reading comprehension.
We propose to represent relations implicitly by situating structured knowledge in a context.
We employ a teacher-student paradigm to inject multiple types of contextualized knowledge into a student machine reader.
arXiv Detail & Related papers (2020-09-12T17:20:01Z) - To Test Machine Comprehension, Start by Defining Comprehension [4.7567975584546]
We argue that existing approaches do not adequately define comprehension.
We present a detailed definition of comprehension for a widely useful class of texts, namely short narratives.
arXiv Detail & Related papers (2020-05-04T14:36:07Z) - Dark, Beyond Deep: A Paradigm Shift to Cognitive AI with Humanlike Common Sense [142.53911271465344]
We argue that the next generation of AI must embrace "dark" humanlike common sense for solving novel tasks.
We identify functionality, physics, intent, causality, and utility (FPICU) as the five core domains of cognitive AI with humanlike common sense.
arXiv Detail & Related papers (2020-04-20T04:07:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.