How to Raise a Robot -- A Case for Neuro-Symbolic AI in Constrained Task
Planning for Humanoid Assistive Robots
- URL: http://arxiv.org/abs/2312.08820v3
- Date: Wed, 27 Dec 2023 19:38:11 GMT
- Title: How to Raise a Robot -- A Case for Neuro-Symbolic AI in Constrained Task
Planning for Humanoid Assistive Robots
- Authors: Niklas Hemken, Florian Jacob, Fabian Peller-Konrad, Rainer Kartmann,
Tamim Asfour, Hannes Hartenstein
- Abstract summary: We explore the novel field of incorporating privacy, security, and access control constraints with robot task planning approaches.
We report preliminary results on the classical symbolic approach, deep-learned neural networks, and modern ideas using large language models as a knowledge base.
- Score: 4.286794014747407
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Humanoid robots will be able to assist humans in their daily life, in
particular due to their versatile action capabilities. However, while these
robots need a certain degree of autonomy to learn and explore, they also should
respect various constraints, for access control and beyond. We explore the
novel field of incorporating privacy, security, and access control constraints
with robot task planning approaches. We report preliminary results on the
classical symbolic approach, deep-learned neural networks, and modern ideas
using large language models as a knowledge base. From analyzing their trade-offs,
we conclude that a hybrid approach is necessary, and thereby present a new use
case for the emerging field of neuro-symbolic artificial intelligence.
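To make the argued hybrid concrete, below is a minimal, hypothetical Python sketch (not the authors' implementation): a learned component, such as an LLM-backed planner, proposes candidate actions, and a symbolic layer filters them against declarative access-control rules. All names here (Rule, is_permitted, filter_plan, stub_proposer) are illustrative assumptions introduced for this example, not taken from the paper.

```python
# A minimal sketch of the hybrid (neuro-symbolic) idea argued for above -- NOT
# the authors' implementation. A learned component proposes a task plan; a
# symbolic layer filters it against declarative access-control rules.
# All names (Rule, is_permitted, filter_plan, stub_proposer) are hypothetical.

from dataclasses import dataclass
from typing import Callable, List, Tuple

# An action is an (actor, verb, object) triple, e.g. ("robot", "open", "medicine_cabinet").
Action = Tuple[str, str, str]


@dataclass(frozen=True)
class Rule:
    """Symbolic access-control rule; '*' is a wildcard, last matching rule wins."""
    actor: str
    verb: str
    obj: str
    allow: bool


def is_permitted(action: Action, rules: List[Rule]) -> bool:
    """Deny-by-default check of a single action against the rule set."""
    actor, verb, obj = action
    decision = False
    for r in rules:
        if r.actor in (actor, "*") and r.verb in (verb, "*") and r.obj in (obj, "*"):
            decision = r.allow
    return decision


def filter_plan(propose_plan: Callable[[str], List[Action]],
                goal: str, rules: List[Rule]) -> List[Action]:
    """Neural proposer (e.g. an LLM-backed planner, stubbed here) + symbolic filter."""
    candidate = propose_plan(goal)
    return [a for a in candidate if is_permitted(a, rules)]


def stub_proposer(goal: str) -> List[Action]:
    # Stand-in for a learned planner that maps a goal to candidate actions.
    return [("robot", "fetch", "glass"), ("robot", "open", "medicine_cabinet")]


if __name__ == "__main__":
    rules = [
        Rule("*", "*", "*", allow=False),                        # default deny
        Rule("robot", "fetch", "*", allow=True),                  # fetching is allowed
        Rule("robot", "open", "medicine_cabinet", allow=False),   # access-controlled object
    ]
    print(filter_plan(stub_proposer, "bring me some water", rules))
    # -> [('robot', 'fetch', 'glass')]; the constrained action is filtered out
```

In such a division of labour, the symbolic rule set stays small and auditable while the learned proposer supplies the flexibility that autonomous exploration requires, which is the trade-off the abstract points to.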
Related papers
- $π_0$: A Vision-Language-Action Flow Model for General Robot Control [77.32743739202543]
We propose a novel flow matching architecture built on top of a pre-trained vision-language model (VLM) to inherit Internet-scale semantic knowledge.
We evaluate our model in terms of its ability to perform tasks zero-shot after pre-training, follow language instructions from people, and acquire new skills via fine-tuning.
arXiv Detail & Related papers (2024-10-31T17:22:30Z)
- Trustworthy Conceptual Explanations for Neural Networks in Robot Decision-Making [9.002659157558645]
We introduce a trustworthy explainable robotics technique based on human-interpretable, high-level concepts.
Our proposed technique provides explanations with associated uncertainty scores by matching the neural network's activations with human-interpretable visualizations.
arXiv Detail & Related papers (2024-09-16T21:11:12Z)
- Commonsense Reasoning for Legged Robot Adaptation with Vision-Language Models [81.55156507635286]
Legged robots are physically capable of navigating diverse environments and overcoming a wide range of obstructions.
Current learning methods often struggle with generalization to the long tail of unexpected situations without heavy human supervision.
We propose a system, VLM-Predictive Control (VLM-PC), combining two key components that we find to be crucial for eliciting on-the-fly, adaptive behavior selection.
arXiv Detail & Related papers (2024-07-02T21:00:30Z)
- Embodied Neuromorphic Artificial Intelligence for Robotics: Perspectives, Challenges, and Research Development Stack [7.253801704452419]
Recent advances in neuromorphic computing with Spiking Neural Networks (SNNs) have demonstrated the potential to enable embodied intelligence for robotics.
This paper discusses, from our perspective, how embodied neuromorphic AI can be enabled for robotic systems.
arXiv Detail & Related papers (2024-04-04T09:52:22Z)
- HumanoidBench: Simulated Humanoid Benchmark for Whole-Body Locomotion and Manipulation [50.616995671367704]
We present a high-dimensional, simulated robot learning benchmark, HumanoidBench, featuring a humanoid robot equipped with dexterous hands.
Our findings reveal that state-of-the-art reinforcement learning algorithms struggle with most tasks, whereas a hierarchical learning approach achieves superior performance when supported by robust low-level policies.
arXiv Detail & Related papers (2024-03-15T17:45:44Z)
- Growing from Exploration: A self-exploring framework for robots based on foundation models [13.250831101705694]
We propose a framework named GExp, which enables robots to explore and learn autonomously without human intervention.
Inspired by the way that infants interact with the world, GExp encourages robots to understand and explore the environment with a series of self-generated tasks.
arXiv Detail & Related papers (2024-01-24T14:04:08Z)
- RoboGen: Towards Unleashing Infinite Data for Automated Robot Learning via Generative Simulation [68.70755196744533]
RoboGen is a generative robotic agent that automatically learns diverse robotic skills at scale via generative simulation.
Our work attempts to extract the extensive and versatile knowledge embedded in large-scale models and transfer it to the field of robotics.
arXiv Detail & Related papers (2023-11-02T17:59:21Z)
- Exploring AI-enhanced Shared Control for an Assistive Robotic Arm [4.999814847776098]
We explore how Artificial Intelligence (AI) can be integrated into a shared control paradigm.
In particular, we focus on the resulting requirements for the interface between human and robot.
arXiv Detail & Related papers (2023-06-23T14:19:56Z)
- World Models and Predictive Coding for Cognitive and Developmental Robotics: Frontiers and Challenges [51.92834011423463]
We focus on the two concepts of world models and predictive coding.
In neuroscience, predictive coding proposes that the brain continuously predicts its inputs and adapts to model its own dynamics and control behavior in its environment.
arXiv Detail & Related papers (2023-01-14T06:38:14Z)
- HERD: Continuous Human-to-Robot Evolution for Learning from Human Demonstration [57.045140028275036]
We show that manipulation skills can be transferred from a human to a robot through the use of micro-evolutionary reinforcement learning.
We propose an algorithm for multi-dimensional evolution path searching that allows joint optimization of both the robot evolution path and the policy.
arXiv Detail & Related papers (2022-12-08T15:56:13Z)
- Sensorimotor representation learning for an "active self" in robots: A model survey [10.649413494649293]
In humans, these capabilities are thought to be related to our ability to perceive our body in space.
This paper reviews the developmental processes of underlying mechanisms of these abilities.
We propose a theoretical computational framework, which aims to allow the emergence of the sense of self in artificial agents.
arXiv Detail & Related papers (2020-11-25T16:31:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.