An Attention Transfer Model for Human-Assisted Failure Avoidance in
Robot Manipulations
- URL: http://arxiv.org/abs/2002.04242v3
- Date: Tue, 29 Jun 2021 14:52:43 GMT
- Title: An Attention Transfer Model for Human-Assisted Failure Avoidance in
Robot Manipulations
- Authors: Boyi Song, Yuntao Peng, Ruijiao Luo, Rui Liu
- Abstract summary: A novel human-to-robot attention transfer (\textit{\textbf{H2R-AT}}) method was developed to identify robot manipulation errors.
\textit{\textbf{H2R-AT}} was developed by fusing an attention mapping mechanism into a novel stacked neural networks model.
The method's effectiveness was validated by the high accuracy of $73.68\%$ in transferring attention, and the high accuracy of $66.86\%$ in avoiding grasping failures.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to real-world dynamics and hardware uncertainty, robots inevitably fail
in task executions, resulting in undesired or even dangerous outcomes. In
order to avoid failures and improve robot performance, it is critical to
identify and correct abnormal robot executions at an early stage. However, due
to limited reasoning capability and knowledge storage, it is challenging for
robots to self-diagnose and -correct their own abnormality in both planning and
executing. To improve robots' self-diagnosis capability, in this research a novel
human-to-robot attention transfer (\textit{\textbf{H2R-AT}}) method was
developed to identify robot manipulation errors by leveraging human
instructions. \textit{\textbf{H2R-AT}} was developed by fusing an attention
mapping mechanism into a novel stacked neural networks model, transferring
human verbal attention into robot visual attention. With the attention
transfer, a robot understands \textit{what} and \textit{where} human concerns
are to identify and correct abnormal manipulations. Two representative task
scenarios: ``serve water for a human in a kitchen'' and ``pick up a defective
gear in a factory'' were designed in a simulation framework CRAIhri with
abnormal robot manipulations; and $252$ volunteers were recruited to provide
about 12000 verbal reminders to learn and test \textit{\textbf{H2R-AT}}. The
method's effectiveness was validated by the high accuracy of $73.68\%$ in
transferring attention, and the high accuracy of $66.86\%$ in avoiding grasping
failures.
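The core transfer step, mapping the human's verbal attention onto the robot's visual attention, can be sketched as follows. This is a minimal illustrative sketch only: the dimensions, the random projection `W`, and the softmax pooling are assumptions for illustration, not the paper's actual stacked-network architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumed, not from the paper).
VERBAL_DIM = 16    # embedding of the human's verbal reminder
REGIONS = 8 * 8    # visual feature map flattened into 64 regions
VISUAL_DIM = 32    # feature channels per region

# Randomly initialized projection standing in for the learned
# attention-mapping layer of the stacked network.
W = rng.normal(0.0, 0.1, (VERBAL_DIM, REGIONS))

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def transfer_attention(verbal_emb, visual_feats):
    """Map a verbal embedding to spatial attention weights and
    re-weight the visual feature map with them."""
    attn = softmax(verbal_emb @ W)            # (REGIONS,)
    attended = visual_feats * attn[:, None]   # (REGIONS, VISUAL_DIM)
    return attn, attended

verbal_emb = rng.normal(size=VERBAL_DIM)
visual_feats = rng.normal(size=(REGIONS, VISUAL_DIM))
attn, attended = transfer_attention(verbal_emb, visual_feats)
# attn is a distribution over the 64 regions (sums to 1), telling the
# robot *where* the human's concern lies in its visual field.
```

In the paper, the attended features would then feed a classifier that decides whether the current manipulation is abnormal; here the sketch stops at the attention re-weighting itself.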
Related papers
- Know your limits! Optimize the robot's behavior through self-awareness [11.021217430606042]
Recent human-robot imitation algorithms focus on following a reference human motion with high precision.
We introduce a deep-learning model that anticipates the robot's performance when imitating a given reference.
Our Self-AWare model (SAW) ranks potential robot behaviors based on various criteria, such as fall likelihood, adherence to the reference motion, and smoothness.
arXiv Detail & Related papers (2024-09-16T14:14:58Z)
- LLM Granularity for On-the-Fly Robot Control [3.5015824313818578]
In circumstances where visuals become unreliable or unavailable, can we rely solely on language to control robots?
This work takes the initial steps to answer this question by: 1) evaluating the responses of assistive robots to language prompts of varying granularities; and 2) exploring the necessity and feasibility of controlling the robot on-the-fly.
arXiv Detail & Related papers (2024-06-20T18:17:48Z)
- HumanoidBench: Simulated Humanoid Benchmark for Whole-Body Locomotion and Manipulation [50.616995671367704]
We present a high-dimensional, simulated robot learning benchmark, HumanoidBench, featuring a humanoid robot equipped with dexterous hands.
Our findings reveal that state-of-the-art reinforcement learning algorithms struggle with most tasks, whereas a hierarchical learning approach achieves superior performance when supported by robust low-level policies.
arXiv Detail & Related papers (2024-03-15T17:45:44Z)
- Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z)
- Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z)
- Robots with Different Embodiments Can Express and Influence Carefulness in Object Manipulation [104.5440430194206]
This work investigates the perception of object manipulations performed with a communicative intent by two robots.
We designed the robots' movements to communicate carefulness or not during the transportation of objects.
arXiv Detail & Related papers (2022-08-03T13:26:52Z)
- Robot Vitals and Robot Health: Towards Systematically Quantifying Runtime Performance Degradation in Robots Under Adverse Conditions [2.0625936401496237]
"Robot vitals" are indicators that estimate the extent of performance degradation faced by a robot.
"Robot health" is a metric that combines robot vitals into a single scalar value estimate of performance degradation.
arXiv Detail & Related papers (2022-07-04T19:26:13Z)
- Where is my hand? Deep hand segmentation for visual self-recognition in humanoid robots [129.46920552019247]
We propose the use of a Convolutional Neural Network (CNN) to segment the robot hand from an image in an egocentric view.
We fine-tuned the Mask-RCNN network for the specific task of segmenting the hand of the humanoid robot Vizzy.
arXiv Detail & Related papers (2021-02-09T10:34:32Z)
- Fault-Aware Robust Control via Adversarial Reinforcement Learning [35.16413579212691]
We propose an adversarial reinforcement learning framework, which significantly increases robot robustness over joint damage cases.
We validate our algorithm on a three-fingered robot hand and a quadruped robot.
Our algorithm can be trained only in simulation and directly deployed on a real robot without any fine-tuning.
arXiv Detail & Related papers (2020-11-17T16:01:06Z)
- Quantifying Hypothesis Space Misspecification in Learning from Human-Robot Demonstrations and Physical Corrections [34.53709602861176]
Recent work focuses on how robots can use human input, such as demonstrations and physical corrections, to learn intended objectives.
We demonstrate our method on a 7 degree-of-freedom robot manipulator in learning from two important types of human input.
arXiv Detail & Related papers (2020-02-03T18:59:23Z)
- Morphology-Agnostic Visual Robotic Control [76.44045983428701]
MAVRIC is an approach that works with minimal prior knowledge of the robot's morphology.
We demonstrate our method on visually-guided 3D point reaching, trajectory following, and robot-to-robot imitation.
arXiv Detail & Related papers (2019-12-31T15:45:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.