Adaptive User-centered Neuro-symbolic Learning for Multimodal
Interaction with Autonomous Systems
- URL: http://arxiv.org/abs/2309.05787v1
- Date: Mon, 11 Sep 2023 19:35:12 GMT
- Authors: Amr Gomaa, Michael Feld
- Abstract summary: Recent advances in machine learning have enabled autonomous systems to perceive and comprehend objects.
It is essential to consider both the explicit teaching provided by humans and the implicit teaching obtained by observing human behavior.
We argue for considering both types of inputs, as well as human-in-the-loop and incremental learning techniques.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in machine learning, particularly deep learning, have enabled
autonomous systems to perceive and comprehend objects and their environments in
a perceptual subsymbolic manner. These systems can now perform object
detection, sensor data fusion, and language understanding tasks. However, there
is a growing need to enhance these systems to understand objects and their
environments more conceptually and symbolically. It is essential to consider
both the explicit teaching provided by humans (e.g., describing a situation or
explaining how to act) and the implicit teaching obtained by observing human
behavior (e.g., through the system's sensors) to achieve this level of powerful
artificial intelligence. Thus, the system must be designed with multimodal
input and output capabilities to support implicit and explicit interaction
models. In this position paper, we argue for considering both types of inputs,
as well as human-in-the-loop and incremental learning techniques, for advancing
the field of artificial intelligence and enabling autonomous systems to learn
like humans. We propose several hypotheses and design guidelines and highlight
a use case from related work to achieve this goal.
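The paper's core proposal can be illustrated with a toy sketch. The following is a hypothetical, minimal example (not the authors' implementation): a perceptron-style learner that updates incrementally from two channels, explicit human corrections and weaker implicit signals inferred from observed behavior, with explicit teaching weighted more heavily. All class names, learning rates, and features here are illustrative assumptions.

```python
# Hypothetical sketch of human-in-the-loop incremental learning with
# explicit and implicit teaching channels. All names and learning
# rates are illustrative, not taken from the paper.

class IncrementalLearner:
    def __init__(self, n_features, explicit_lr=0.5, implicit_lr=0.1):
        # Explicit corrections are trusted more than observed behavior,
        # so they get a larger learning rate.
        self.weights = [0.0] * n_features
        self.bias = 0.0
        self.explicit_lr = explicit_lr
        self.implicit_lr = implicit_lr

    def predict(self, x):
        score = self.bias + sum(w * xi for w, xi in zip(self.weights, x))
        return 1 if score >= 0 else 0

    def _update(self, x, label, lr):
        # Perceptron-style rule: adjust only when the prediction is wrong.
        error = label - self.predict(x)
        if error != 0:
            self.weights = [w + lr * error * xi
                            for w, xi in zip(self.weights, x)]
            self.bias += lr * error

    def teach_explicitly(self, x, label):
        """Human-in-the-loop correction, e.g. a stated instruction."""
        self._update(x, label, self.explicit_lr)

    def observe_implicitly(self, x, inferred_label):
        """Weak signal inferred from observing human behavior via sensors."""
        self._update(x, inferred_label, self.implicit_lr)


learner = IncrementalLearner(n_features=2)
learner.teach_explicitly([1.0, 0.0], 0)    # human explains: do not act here
learner.observe_implicitly([0.0, 1.0], 1)  # observed behavior weakly suggests: act
print(learner.predict([1.0, 0.0]))         # → 0
```

The asymmetric learning rates capture the paper's point that explicit teaching (describing a situation, explaining how to act) and implicit teaching (observed behavior) are complementary inputs of different reliability; a real system would of course replace the linear model with multimodal perception components.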
Related papers
- Neurosymbolic Value-Inspired AI (Why, What, and How) [8.946847190099206]
We propose a neurosymbolic computational framework called Value-Inspired AI (VAI).
VAI aims to represent and integrate various dimensions of human values.
We offer insights into the current progress made in this direction and outline potential future directions for the field.
arXiv Detail & Related papers (2023-12-15T16:33:57Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- Human-oriented Representation Learning for Robotic Manipulation [64.59499047836637]
Humans inherently possess generalizable visual representations that empower them to efficiently explore and interact with the environments in manipulation tasks.
We formalize this idea through the lens of human-oriented multi-task fine-tuning on top of pre-trained visual encoders.
Our Task Fusion Decoder consistently improves the representation of three state-of-the-art visual encoders for downstream manipulation policy-learning.
arXiv Detail & Related papers (2023-10-04T17:59:38Z)
- Neurosymbolic AI - Why, What, and How [9.551858963199987]
Humans interact with the environment using a combination of perception and cognition.
Machine cognition, on the other hand, encompasses more complex computations than perception alone.
This article introduces the rapidly emerging paradigm of Neurosymbolic AI.
arXiv Detail & Related papers (2023-05-01T13:27:22Z)
- Incremental procedural and sensorimotor learning in cognitive humanoid robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z)
- Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We present experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
arXiv Detail & Related papers (2022-12-19T22:50:40Z)
- The future of human-AI collaboration: a taxonomy of design knowledge for hybrid intelligence systems [0.0]
We identify the need for developing socio-technological ensembles of humans and machines.
First, we present a structured overview of interdisciplinary research on the role of humans in the machine learning pipeline.
Second, we envision hybrid intelligence systems and conceptualize the relevant dimensions for system design.
arXiv Detail & Related papers (2021-05-07T16:10:44Z)
- Cognitive architecture aided by working-memory for self-supervised multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have proven to be suitable tools for this task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z)
- Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article addresses aspects of modeling commonsense reasoning, focusing on the domain of interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
- Perspectives and Ethics of the Autonomous Artificial Thinking Systems [0.0]
Our model uses four hierarchies: the hierarchy of information systems, the cognitive hierarchy, the linguistic hierarchy, and the digital informative hierarchy.
The question of whether autonomous systems can produce a form of artificial thought arises, along with its ethical consequences for social life and the prospect of transhumanism.
arXiv Detail & Related papers (2020-01-13T14:23:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.