Self-recognition in conversational agents
- URL: http://arxiv.org/abs/2002.02334v3
- Date: Sun, 5 Sep 2021 11:04:19 GMT
- Title: Self-recognition in conversational agents
- Authors: Yigit Oktar, Erdem Okur, Mehmet Turkan
- Abstract summary: Sustaining the idea of self within the Turing test is still possible if the judge decides to act as a textual mirror.
It is possible that a successful self-recognition might pave the way to stronger notions of self-awareness in artificial beings.
- Score: 0.5156484100374058
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In a standard Turing test, a machine has to prove its humanness to the
judges. By successfully imitating a thinking entity such as a human, this
machine then proves that it can also think. Some objections claim that the
Turing test is not a tool to demonstrate the existence of general intelligence or
thinking activity. A compelling alternative is the Lovelace test, in which the
agent must originate a product that the agent's creator cannot explain.
Therefore, the agent must be the owner of an original product. However, for
this to happen, the agent must exhibit the idea of self and distinguish itself
from others. Sustaining the idea of self within the Turing test is still
possible if the judge decides to act as a textual mirror. Self-recognition
tests applied to animals using mirrors appear to be viable tools to
demonstrate the existence of a type of general intelligence. The methodology here
constructs a textual version of the mirror test by placing the agent as the one
and only judge, which must figure out, in an unsupervised manner, whether the
contacted one is an other, a mimicker, or oneself. This textual version of the mirror test
is objective, self-contained, and devoid of humanness. Any agent passing this
textual mirror test should have, or be able to acquire, a thought mechanism that
can be referred to as the inner-voice, answering Turing's original and
long-standing question, "Can machines think?", in a constructive manner still
within the bounds of the Turing test. Moreover, it is possible that successful
self-recognition might pave the way to stronger notions of self-awareness in
artificial beings.
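To make the protocol concrete, the sketch below shows one way such a three-way judgment (other, mimicker, or oneself) could be run over a text channel. It is a minimal illustration, not the authors' implementation: the `judge` function, the random novelty probes, and the lexical similarity threshold are all assumptions introduced for this example.

```python
# A minimal sketch of a textual mirror test, assuming a simple lexical
# similarity heuristic. Function names, probe scheme, and threshold are
# illustrative assumptions, not the paper's method.
from difflib import SequenceMatcher
import random

def similarity(a: str, b: str) -> float:
    """Crude lexical similarity in [0, 1]."""
    return SequenceMatcher(None, a, b).ratio()

def judge(agent, send, rounds: int = 5, threshold: float = 0.9) -> str:
    """Classify the interlocutor behind `send` as 'self', 'mimicker', or 'other'.

    `agent(prompt)` is the judging agent's own response function;
    `send(message)` delivers a message to the interlocutor and returns its reply.
    """
    novel_hits, echo_hits = 0, 0
    for _ in range(rounds):
        probe = f"probe-{random.randrange(10**9)}"  # novel, hard-to-anticipate prompt
        own = agent(probe)                  # what the judge itself would say
        if similarity(send(probe), own) >= threshold:
            novel_hits += 1                 # reply matches the judge's own voice
        if similarity(send(own), own) >= threshold:
            echo_hits += 1                  # interlocutor parrots what it was shown
    if novel_hits == rounds:
        return "self"       # responds exactly as the judge would: a textual mirror
    if echo_hits == rounds:
        return "mimicker"   # copies observed output but fails on novel probes
    return "other"

# Toy usage with deterministic stand-ins for conversational agents:
me = lambda prompt: f"my-view-on:{prompt}"
mirror = lambda msg: me(msg)      # routes every probe back through the judge itself
parrot = lambda msg: msg          # echoes whatever it receives verbatim
stranger = lambda msg: f"i disagree about {msg}"

print(judge(me, mirror))    # -> self
print(judge(me, parrot))    # -> mimicker
print(judge(me, stranger))  # -> other
```

The key design choice, mirroring the abstract, is that the judge compares replies against its own would-be responses: a mirror matches on novel probes, a mimicker matches only on echoed material, and an other matches on neither.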
Related papers
- The Generative AI Paradox: "What It Can Create, It May Not Understand" [81.89252713236746]
The recent wave of generative AI has sparked excitement and concern over potentially superhuman levels of artificial intelligence.
At the same time, models still show basic errors in understanding that would not be expected even in non-expert humans.
This presents us with an apparent paradox: how do we reconcile seemingly superhuman capabilities with the persistence of errors that few humans would make?
arXiv Detail & Related papers (2023-10-31T18:07:07Z) - The Human-or-Machine Matter: Turing-Inspired Reflections on an Everyday
Issue [4.309879785418976]
We sidestep the question of whether a machine can be labeled intelligent, or can be said to match human capabilities in a given context.
We first draw attention to the seemingly simpler question a person may ask themselves in an everyday interaction: "Am I interacting with a human or with a machine?"
arXiv Detail & Related papers (2023-05-07T15:41:11Z) - Can Machines Imitate Humans? Integrative Turing Tests for Vision and Language Demonstrate a Narrowing Gap [45.6806234490428]
We benchmark current AIs in their abilities to imitate humans in three language tasks and three vision tasks.
Experiments involved 549 human agents plus 26 AI agents for dataset creation, and 1,126 human judges plus 10 AI judges.
Results reveal that current AIs are not far from being able to impersonate humans in complex language and vision challenges.
arXiv Detail & Related papers (2022-11-23T16:16:52Z) - Sequential Causal Imitation Learning with Unobserved Confounders [82.22545916247269]
"Monkey see monkey do" is an age-old adage, referring to na"ive imitation without a deep understanding of a system's underlying mechanics.
This paper investigates the problem of causal imitation learning in sequential settings, where the imitator must make multiple decisions per episode.
arXiv Detail & Related papers (2022-08-12T13:53:23Z) - The Meta-Turing Test [17.68987003293372]
We propose an alternative to the Turing test that removes the inherent asymmetry between humans and machines.
In this new test, both humans and machines judge each other.
arXiv Detail & Related papers (2022-05-11T04:54:14Z) - Towards Socially Intelligent Agents with Mental State Transition and
Human Utility [97.01430011496576]
We propose to incorporate a mental state and utility model into dialogue agents.
The hybrid mental state extracts information from both the dialogue and event observations.
The utility model is a ranking model that learns human preferences from a crowd-sourced social commonsense dataset.
arXiv Detail & Related papers (2021-03-12T00:06:51Z) - AGENT: A Benchmark for Core Psychological Reasoning [60.35621718321559]
Intuitive psychology is the ability to reason about hidden mental variables that drive observable actions.
Despite recent interest in machine agents that reason about other agents, it is not clear if such agents learn or hold the core psychology principles that drive human reasoning.
We present a benchmark consisting of procedurally generated 3D animations, AGENT, structured around four scenarios.
arXiv Detail & Related papers (2021-02-24T14:58:23Z) - Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list of such principles, focusing on those that mostly concern higher-level and sequential conscious processing.
The hope in clarifying these particular principles is that they could help us build AI systems that benefit from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z) - Robot in the mirror: toward an embodied computational model of mirror
self-recognition [1.9686770963118383]
The mirror self-recognition test consists of covertly putting a mark on the tested subject's face, placing her in front of a mirror, and observing her reactions.
In this work, we provide a mechanistic decomposition, or process model, of what components are required to pass this test.
We develop a model to enable the humanoid robot Nao to pass the test.
arXiv Detail & Related papers (2020-11-09T15:11:31Z) - Robot self/other distinction: active inference meets neural networks
learning in a mirror [9.398766540452632]
We present an algorithm that enables a robot to perform non-appearance self-recognition in a mirror.
The algorithm combines active inference, a theoretical model of perception and action in the brain, with neural network learning.
Experimental results on a humanoid robot show the reliability of the algorithm for different initial conditions; a toy sketch of this self/other attribution idea appears after this list.
arXiv Detail & Related papers (2020-04-11T19:51:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.