Enabling the Sense of Self in a Dual-Arm Robot
- URL: http://arxiv.org/abs/2011.07026v1
- Date: Fri, 13 Nov 2020 17:25:07 GMT
- Title: Enabling the Sense of Self in a Dual-Arm Robot
- Authors: Ali AlQallaf, Gerardo Aragon-Camarasa
- Abstract summary: We present a neural network architecture that enables a dual-arm robot to get a sense of itself in an environment.
We demonstrate experimentally that a robot can distinguish itself from the environment with an average accuracy of 88.7% in cluttered settings.
- Score: 2.741266294612776
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While humans are aware of their body and capabilities, robots are not. To address this, we present in this paper a neural network architecture that enables a dual-arm robot to get a sense of itself in an environment. Our approach is inspired by the developmental levels of human self-awareness and serves as the underlying building block for a robot to achieve awareness of itself while carrying out tasks in an environment. We assume that a robot has to know itself before interacting with the environment in order to support different robotic tasks. Hence, we implemented a neural network architecture that enables a robot to differentiate its limbs from the environment using visual and proprioceptive sensory inputs. We demonstrate experimentally that a robot can distinguish itself from the environment with an average accuracy of 88.7% in cluttered settings and under confounding input signals.
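The abstract fixes the inputs (vision and proprioception) and the objective (separating the robot's limbs from the environment) but not the network itself. The sketch below is an illustrative assumption, not the authors' architecture: a small CNN encodes the camera image, an MLP encodes the joint state, and the fused features yield the probability that the observed content belongs to the robot's own body.

```python
# Minimal sketch of a visuo-proprioceptive self/other classifier.
# This is an illustrative assumption, NOT the paper's architecture: the
# abstract only states that visual and proprioceptive inputs are combined
# so the robot can tell its limbs apart from the environment.
import torch
import torch.nn as nn

class SelfPerceptionNet(nn.Module):
    def __init__(self, num_joints: int = 14):  # assumed: 7 joints per arm
        super().__init__()
        # Visual encoder: a small CNN over RGB frames.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (B, 32)
        )
        # Proprioceptive encoder: joint angles -> feature vector.
        self.proprio = nn.Sequential(
            nn.Linear(num_joints, 32), nn.ReLU(),
            nn.Linear(32, 32), nn.ReLU(),
        )
        # Fused features -> probability that the view shows the robot's body.
        self.head = nn.Sequential(
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, image: torch.Tensor, joints: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.vision(image), self.proprio(joints)], dim=-1)
        return torch.sigmoid(self.head(fused))  # P(self | image, joints)

net = SelfPerceptionNet()
p_self = net(torch.randn(8, 3, 64, 64), torch.randn(8, 14))  # -> (8, 1)
```

A model of this shape would be trained on camera frames paired with simultaneous joint readings and binary self/environment labels, which matches the self-observation setting described in the abstract.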
Related papers
- Growing from Exploration: A self-exploring framework for robots based on foundation models [13.250831101705694]
We propose a framework named GExp, which enables robots to explore and learn autonomously without human intervention.
Inspired by the way that infants interact with the world, GExp encourages robots to understand and explore the environment with a series of self-generated tasks.
arXiv Detail & Related papers (2024-01-24T14:04:08Z)
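The loop GExp describes, in which a foundation model invents tasks and successful attempts become reusable skills, can be caricatured in a few lines. Everything below (propose_tasks, attempt, the skill list) is a hypothetical stand-in, not GExp's actual interface.

```python
# Toy sketch of a self-exploration loop in the spirit of GExp. The
# functions propose_tasks() and attempt() are hypothetical placeholders;
# the real framework's prompting and execution machinery are not given
# in the abstract.
import random

def propose_tasks(skills: list[str], n: int = 3) -> list[str]:
    # Stand-in for a foundation-model call that invents new tasks
    # conditioned on what the robot can already do.
    return [f"task_{random.randrange(1000)}_given_{len(skills)}_skills"
            for _ in range(n)]

def attempt(task: str) -> bool:
    # Stand-in for executing the task on the robot and verifying success.
    return random.random() > 0.5

skills: list[str] = []             # library of verified skills
for _ in range(5):                 # exploration rounds, no human in the loop
    for task in propose_tasks(skills):
        if attempt(task):
            skills.append(task)    # successful attempts become reusable skills
print(f"acquired {len(skills)} skills autonomously")
```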
- Correspondence learning between morphologically different robots via task demonstrations [2.1374208474242815]
We propose a method to learn correspondences among two or more robots that may have different morphologies.
A fixed-base manipulator robot with joint control and a differential drive mobile robot can be addressed within the proposed framework.
We provide a proof-of-concept realization of correspondence learning between a real manipulator robot and a simulated mobile robot.
arXiv Detail & Related papers (2023-10-20T12:42:06Z)
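The abstract does not say how a correspondence is represented. One simple reading, sketched here purely as an assumption, is a regression network trained on time-aligned state pairs taken from demonstrations of the same task on both robots.

```python
# Illustrative sketch (an assumption, not the paper's method): learn a
# mapping between the state spaces of two morphologically different
# robots from time-aligned demonstrations of the same task.
import torch
import torch.nn as nn

arm_dim, mobile_dim = 7, 3        # assumed: 7-DoF arm, planar mobile base

corr = nn.Sequential(             # arm state -> corresponding base state
    nn.Linear(arm_dim, 64), nn.ReLU(),
    nn.Linear(64, mobile_dim),
)
opt = torch.optim.Adam(corr.parameters(), lr=1e-3)

# Time-aligned state pairs from demonstrations (random stand-ins here).
arm_states = torch.randn(256, arm_dim)
mobile_states = torch.randn(256, mobile_dim)

for _ in range(100):              # simple supervised correspondence training
    opt.zero_grad()
    loss = nn.functional.mse_loss(corr(arm_states), mobile_states)
    loss.backward()
    opt.step()
```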
- Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z)
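RPT is described only as a Transformer over sequences of sensorimotor tokens trained with self-supervision. The sketch below assumes a masked-prediction objective and a linear tokenizer for a single proprioceptive stream; the dimensions and masking ratio are illustrative, not taken from the paper.

```python
# Minimal sketch in the spirit of RPT: tokenize a sensorimotor stream,
# mask a random subset of tokens, and train a Transformer to reconstruct
# them. The tokenizer, dimensions, and masking ratio are assumptions for
# illustration; the abstract does not specify them.
import torch
import torch.nn as nn

d_model, seq_len, proprio_dim = 128, 32, 14   # assumed sizes

tokenize = nn.Linear(proprio_dim, d_model)    # per-timestep tokenizer
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
    num_layers=4,
)
decode = nn.Linear(d_model, proprio_dim)      # reconstruct masked steps
mask_token = nn.Parameter(torch.zeros(d_model))

states = torch.randn(8, seq_len, proprio_dim)         # batch of trajectories
mask = torch.rand(8, seq_len) < 0.5                   # hide half the steps
tokens = torch.where(mask.unsqueeze(-1),              # swap in mask token
                     mask_token.expand(8, seq_len, d_model),
                     tokenize(states))
recon = decode(encoder(tokens))
loss = nn.functional.mse_loss(recon[mask], states[mask])
loss.backward()                                        # self-supervised loss
```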
- Knowledge-Driven Robot Program Synthesis from Human VR Demonstrations [16.321053835017942]
We present a system for automatically generating executable robot control programs from human task demonstrations in virtual reality (VR).
We leverage common-sense knowledge and game engine-based physics to semantically interpret human VR demonstrations.
We demonstrate our approach in the context of force-sensitive fetch-and-place for a robotic shopping assistant.
arXiv Detail & Related papers (2023-06-05T09:37:53Z)
- HERD: Continuous Human-to-Robot Evolution for Learning from Human Demonstration [57.045140028275036]
We show that manipulation skills can be transferred from a human to a robot through the use of micro-evolutionary reinforcement learning.
We propose an algorithm for multi-dimensional evolution path searching that allows joint optimization of both the robot evolution path and the policy.
arXiv Detail & Related papers (2022-12-08T15:56:13Z)
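HERD's central idea, continuously evolving the training embodiment from human toward robot while the policy keeps learning, can be shown as a one-dimensional toy curriculum. HERD itself searches a multi-dimensional evolution path; train_policy and the thresholds below are invented for illustration.

```python
# Toy sketch of continuous human-to-robot embodiment evolution: a scalar
# alpha interpolates the morphology from human (0.0) to robot (1.0) while
# the policy keeps training. HERD searches a multi-dimensional evolution
# path; this scalar curriculum and train_policy() are invented here.
import random

def train_policy(policy: dict, alpha: float) -> float:
    # Stand-in for an RL update at the interpolated embodiment; returns a
    # (fake) success rate so the curriculum knows when to advance.
    policy["skill"] = min(1.0, policy.get("skill", 0.0) + 0.1)
    return policy["skill"] * random.uniform(0.8, 1.0)

policy: dict = {}
alpha = 0.0                        # start at the human embodiment
while alpha < 1.0:
    success = train_policy(policy, alpha)
    if success > 0.7:              # advance only once the policy keeps up
        alpha = min(1.0, alpha + 0.05)   # take a small evolutionary step
print("policy transferred to the robot embodiment")
```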
- See, Hear, and Feel: Smart Sensory Fusion for Robotic Manipulation [49.925499720323806]
We study how visual, auditory, and tactile perception can jointly help robots to solve complex manipulation tasks.
We build a robot system that can see with a camera, hear with a contact microphone, and feel with a vision-based tactile sensor.
arXiv Detail & Related papers (2022-12-07T18:55:53Z)
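The entry names three sensor streams but not the fusion mechanism. One plausible illustration, assumed rather than taken from the paper, embeds each modality separately and fuses the three embeddings with attention before predicting an action.

```python
# Illustrative sketch (assumed, not the paper's exact model): encode
# vision, audio, and touch separately, then fuse the three modality
# embeddings with attention before predicting an action.
import torch
import torch.nn as nn

d = 64
enc_vision = nn.Linear(3 * 32 * 32, d)  # flattened RGB patch (assumed size)
enc_audio = nn.Linear(256, d)           # contact-mic spectrogram slice
enc_touch = nn.Linear(16 * 16, d)       # flattened tactile-sensor image
fuse = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
act_head = nn.Linear(d, 7)              # assumed 7-DoF action output

def predict_action(rgb, audio, touch):
    tokens = torch.stack([enc_vision(rgb), enc_audio(audio),
                          enc_touch(touch)], dim=1)  # (B, 3, d)
    fused, _ = fuse(tokens, tokens, tokens)          # cross-modal attention
    return act_head(fused.mean(dim=1))               # pool and predict

action = predict_action(torch.randn(4, 3 * 32 * 32),
                        torch.randn(4, 256),
                        torch.randn(4, 16 * 16))     # -> (4, 7)
```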
- Robots with Different Embodiments Can Express and Influence Carefulness in Object Manipulation [104.5440430194206]
This work investigates the perception of object manipulations performed with a communicative intent by two robots.
We designed the robots' movements to communicate carefulness or not during the transportation of objects.
arXiv Detail & Related papers (2022-08-03T13:26:52Z)
- Design and Development of Autonomous Delivery Robot [0.16863755729554888]
We present an autonomous mobile robot platform that delivers packages within the VNIT campus without any human intervention.
The entire pipeline of an autonomous robot working in outdoor environments is explained in this thesis.
arXiv Detail & Related papers (2021-03-16T17:57:44Z)
- Cognitive architecture aided by working-memory for self-supervised multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have proven to be suitable tools for addressing this task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z)
- Sensorimotor representation learning for an "active self" in robots: A model survey [10.649413494649293]
In humans, these capabilities are thought to be related to our ability to perceive our body in space.
This paper reviews the developmental processes of underlying mechanisms of these abilities.
We propose a theoretical computational framework, which aims to allow the emergence of the sense of self in artificial agents.
arXiv Detail & Related papers (2020-11-25T16:31:01Z)
- SAPIEN: A SimulAted Part-based Interactive ENvironment [77.4739790629284]
SAPIEN is a realistic and physics-rich simulated environment that hosts a large-scale set of articulated objects.
We evaluate state-of-the-art vision algorithms for part detection and motion attribute recognition as well as demonstrate robotic interaction tasks.
arXiv Detail & Related papers (2020-03-19T00:11:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.