A design of human-like robust AI machines in object identification
- URL: http://arxiv.org/abs/2101.02327v1
- Date: Thu, 7 Jan 2021 02:11:45 GMT
- Title: A design of human-like robust AI machines in object identification
- Authors: Bao-Gang Hu and Wei-Ming Dong
- Abstract summary: We define human-like robustness (HLR) for AI machines.
The definition aims both to endow AI machines with HLR and to evaluate them in terms of HLR.
Similar to the perspective, or design, position taken by Turing, we provide a solution for how to achieve HLR AI machines.
- Score: 22.725436010277516
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This is a perspective paper inspired from the study of Turing Test proposed
by A.M. Turing (23 June 1912 - 7 June 1954) in 1950. Following one important
implication of Turing Test for enabling a machine with a human-like behavior or
performance, we define human-like robustness (HLR) for AI machines. The new
definition aims both to endow AI machines with HLR and to evaluate them in
terms of HLR. We discuss only one specific task, object identification,
because it is the most common task for every person in daily life. Similar to
the perspective, or design, position taken by Turing, we provide a solution
for how to achieve HLR AI machines without constructing them or conducting
real experiments. The solution consists of three important
features in the machines. The first feature of HLR machines is to utilize
common sense from humans for realizing a causal inference. The second feature
is to make a decision from a semantic space for having interpretations to the
decision. The third feature is to include a "human-in-the-loop" setting for
advancing HLR machines. We show an "identification game" using the proposed
design of HLR machines. The present paper represents an attempt to learn from
and explore further beyond the Turing Test towards the design of human-like AI
machines.
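The three features described in the abstract can be sketched as a toy loop. Everything below (the common-sense attribute table, the attribute-overlap scoring, the correction step) is a hypothetical illustration of the design, not the paper's implementation:

```python
# Toy sketch of the three HLR features on a tiny object-identification task.

# Feature 1: common sense from humans, here a hand-written attribute table
# used for simple causal-style inference (attributes -> likely object).
COMMON_SENSE = {
    "apple": {"round", "edible"},
    "ball": {"round", "bounces"},
}

def identify(observed_attrs):
    """Feature 2: decide in a semantic (attribute) space, so every decision
    comes with an interpretation: the shared attributes that support it."""
    scores = {obj: len(attrs & observed_attrs)
              for obj, attrs in COMMON_SENSE.items()}
    best = max(scores, key=scores.get)
    explanation = sorted(COMMON_SENSE[best] & observed_attrs)
    return best, explanation

def identification_game(observed_attrs, human_label=None):
    """Feature 3: a human-in-the-loop step. A human correction is folded
    back into the common-sense table, advancing the machine."""
    guess, why = identify(observed_attrs)
    if human_label is not None and guess != human_label:
        COMMON_SENSE.setdefault(human_label, set()).update(observed_attrs)
        return human_label, sorted(observed_attrs)
    return guess, why
```

For example, `identify({"round", "bounces"})` returns `"ball"` with the explanation `["bounces", "round"]`, and a human correction for an unknown object adds a new entry to the table for later rounds.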
Related papers
- What Matters to You? Towards Visual Representation Alignment for Robot Learning [81.30964736676103]
When operating in service of people, robots need to optimize rewards aligned with end-user preferences.
We propose Representation-Aligned Preference-based Learning (RAPL), a method for solving the visual representation alignment problem.
arXiv Detail & Related papers (2023-10-11T23:04:07Z)
- PROGrasp: Pragmatic Human-Robot Communication for Object Grasping [22.182690439449278]
Interactive Object Grasping (IOG) is the task of identifying and grasping the desired object via human-robot natural language interaction.
Inspired by pragmatics, we introduce a new IOG task, Pragmatic-IOG, and the corresponding dataset, Intention-oriented Multi-modal Dialogue (IM-Dial)
PROGrasp performs Pragmatic-IOG by incorporating modules for visual grounding, question asking, object grasping, and most importantly, answer interpretation for pragmatic inference.
arXiv Detail & Related papers (2023-09-14T14:45:47Z)
- Can I say, now machines can think? [0.0]
We analyzed and explored the capabilities of artificial intelligence-enabled machines.
The Turing Test is a critical aspect of evaluating a machine's ability.
There are other aspects of intelligence too, and AI machines exhibit most of these aspects.
arXiv Detail & Related papers (2023-07-11T11:44:09Z)
- The Human-or-Machine Matter: Turing-Inspired Reflections on an Everyday Issue [4.309879785418976]
We sidestep the question of whether a machine can be labeled intelligent, or can be said to match human capabilities in a given context.
We first draw attention to the seemingly simpler question a person may ask themselves in an everyday interaction: "Am I interacting with a human or with a machine?"
arXiv Detail & Related papers (2023-05-07T15:41:11Z)
- Aligning Robot and Human Representations [50.070982136315784]
We argue that current representation learning approaches in robotics should be studied from the perspective of how well they accomplish the objective of representation alignment.
We mathematically define the problem, identify its key desiderata, and situate current methods within this formalism.
arXiv Detail & Related papers (2023-02-03T18:59:55Z)
- Can Machines Imitate Humans? Integrative Turing Tests for Vision and Language Demonstrate a Narrowing Gap [45.6806234490428]
We benchmark current AIs in their abilities to imitate humans in three language tasks and three vision tasks.
Experiments involved 549 human agents plus 26 AI agents for dataset creation, and 1,126 human judges plus 10 AI judges.
Results reveal that current AIs are not far from being able to impersonate humans in complex language and vision challenges.
arXiv Detail & Related papers (2022-11-23T16:16:52Z)
- Can Foundation Models Perform Zero-Shot Task Specification For Robot Manipulation? [54.442692221567796]
Task specification is critical for engagement of non-expert end-users and adoption of personalized robots.
A widely studied approach to task specification is through goals, using either compact state vectors or goal images from the same robot scene.
In this work, we explore alternate and more general forms of goal specification that are expected to be easier for humans to specify and use.
arXiv Detail & Related papers (2022-04-23T19:39:49Z)
- HAKE: A Knowledge Engine Foundation for Human Activity Understanding [65.24064718649046]
Human activity understanding is of widespread interest in artificial intelligence and spans diverse applications like health care and behavior analysis.
We propose a novel paradigm to reformulate this task in two stages: first mapping pixels to an intermediate space spanned by atomic activity primitives, then programming detected primitives with interpretable logic rules to infer semantics.
Our framework, the Human Activity Knowledge Engine (HAKE), exhibits superior generalization ability and performance upon challenging benchmarks.
arXiv Detail & Related papers (2022-02-14T16:38:31Z)
- Who is this Explanation for? Human Intelligence and Knowledge Graphs for eXplainable AI [0.0]
We focus on the contributions that Human Intelligence can bring to eXplainable AI.
We call for a better interplay between Knowledge Representation and Reasoning, Social Sciences, Human Computation and Human-Machine Cooperation research.
arXiv Detail & Related papers (2020-05-27T10:47:15Z)
- Joint Inference of States, Robot Knowledge, and Human (False-)Beliefs [90.20235972293801]
Aiming to understand how human (false-)beliefs, a core socio-cognitive ability, would affect human interactions with robots, this paper proposes to adopt a graphical model for the representation of object states, robot knowledge, and human (false-)beliefs.
An inference algorithm is derived to fuse individual parse graphs (pg) from all robots across multiple views into a joint pg, which affords more effective reasoning capability to overcome errors originating from a single view.
arXiv Detail & Related papers (2020-04-25T23:02:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.