Towards Disturbance-Free Visual Mobile Manipulation
- URL: http://arxiv.org/abs/2112.12612v1
- Date: Fri, 17 Dec 2021 22:33:23 GMT
- Title: Towards Disturbance-Free Visual Mobile Manipulation
- Authors: Tianwei Ni, Kiana Ehsani, Luca Weihs, Jordi Salvador
- Abstract summary: We develop a new disturbance-avoidance methodology at the heart of which is the auxiliary task of disturbance prediction.
Our experiments on ManipulaTHOR show that, on testing scenes with novel objects, our method improves the success rate from 61.7% to 85.6%.
- Score: 11.738161077441104
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Embodied AI has shown promising results on an abundance of robotic tasks in
simulation, including visual navigation and manipulation. Prior work
generally pursues high success rates along shortest paths while largely ignoring
the problems caused by collisions during interaction. This lack of
prioritization is understandable: in simulated environments there is no
inherent cost to breaking virtual objects. As a result, well-trained agents
frequently have catastrophic collisions with objects despite final success. In
the robotics community, where the cost of collision is large, collision
avoidance is a long-standing and crucial topic to ensure that robots can be
safely deployed in the real world. In this work, we take the first step towards
collision/disturbance-free embodied AI agents for visual mobile manipulation,
facilitating safe deployment in real robots. We develop a new
disturbance-avoidance methodology at the heart of which is the auxiliary task
of disturbance prediction. When combined with a disturbance penalty, our
auxiliary task greatly enhances sample efficiency and final performance by
knowledge distillation of disturbance into the agent. Our experiments on
ManipulaTHOR show that, on testing scenes with novel objects, our method
improves the success rate from 61.7% to 85.6% and the success rate without
disturbance from 29.8% to 50.2% over the original baseline. Extensive ablation
studies show the value of our pipelined approach. Project site is at
https://sites.google.com/view/disturb-free
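The method described in the abstract combines a disturbance penalty on the reward with an auxiliary disturbance-prediction task. A minimal sketch of those two ingredients, with all function names, the penalty value, and the auxiliary weight being illustrative assumptions rather than the paper's actual implementation:

```python
import numpy as np

def disturbance_penalty_reward(task_reward, disturbed, penalty=0.25):
    """Shaped reward: subtract a fixed penalty whenever the agent
    disturbs (collides with or moves) a non-target object this step."""
    return task_reward - (penalty if disturbed else 0.0)

def auxiliary_disturbance_loss(pred_logits, disturbed_labels):
    """Binary cross-entropy for the auxiliary disturbance-prediction
    head: from the agent's features, predict whether the current
    action causes a disturbance."""
    p = 1.0 / (1.0 + np.exp(-np.asarray(pred_logits, dtype=float)))
    y = np.asarray(disturbed_labels, dtype=float)
    eps = 1e-9
    return float(-np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps)))

def total_loss(rl_loss, aux_loss, aux_weight=0.05):
    """Combined objective: standard RL loss plus the weighted auxiliary
    term, which distills disturbance knowledge into the agent."""
    return rl_loss + aux_weight * aux_loss
```

The auxiliary head gives the agent a dense learning signal about disturbances even on steps where the sparse task reward is uninformative, which is consistent with the sample-efficiency gains the abstract reports.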
Related papers
- ARMOR: Egocentric Perception for Humanoid Robot Collision Avoidance and Motion Planning [10.207814069339735]
ARMOR is a novel egocentric perception system for humanoid robots.
Our distributed perception approach enhances the robot's spatial awareness.
We show that our ARMOR perception is superior to a setup with multiple dense head-mounted and externally mounted depth cameras.
arXiv Detail & Related papers (2024-11-30T08:39:23Z)
- HEIGHT: Heterogeneous Interaction Graph Transformer for Robot Navigation in Crowded and Constrained Environments [8.974071308749007]
We study the problem of robot navigation in dense and interactive crowds with environmental constraints such as corridors and furniture.
Previous methods fail to consider all types of interactions among agents and obstacles, leading to unsafe and inefficient robot paths.
We propose a structured framework to learn robot navigation policies with reinforcement learning.
arXiv Detail & Related papers (2024-11-19T00:56:35Z)
- Robot Navigation with Entity-Based Collision Avoidance using Deep Reinforcement Learning [0.0]
We present a novel methodology that enhances the robot's interaction with different types of agents and obstacles.
This approach uses information about the entity types, improving collision avoidance and ensuring safer navigation.
We introduce a new reward function that penalizes the robot for collisions with different entities such as adults, bicyclists, children, and static obstacles.
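An entity-typed collision penalty of this kind can be sketched as a simple lookup table; the entity names, penalty values, and fallback behavior below are purely illustrative assumptions, not taken from the paper:

```python
# Hypothetical per-entity collision penalties: harsher penalties for
# more vulnerable entity types, milder for static obstacles.
ENTITY_PENALTIES = {
    "child": -1.0,
    "adult": -0.8,
    "bicyclist": -0.8,
    "static_obstacle": -0.4,
}

def collision_penalty(entity_type):
    """Return the reward penalty for colliding with a given entity type;
    unknown types fall back to the static-obstacle penalty."""
    return ENTITY_PENALTIES.get(entity_type, ENTITY_PENALTIES["static_obstacle"])
```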
arXiv Detail & Related papers (2024-08-26T11:16:03Z)
- Offline Imitation Learning Through Graph Search and Retrieval [57.57306578140857]
Imitation learning is a powerful approach for robots to acquire manipulation skills.
We propose GSR, a simple yet effective algorithm that learns from suboptimal demonstrations through Graph Search and Retrieval.
GSR can achieve a 10% to 30% higher success rate and over 30% higher proficiency compared to baselines.
arXiv Detail & Related papers (2024-07-22T06:12:21Z)
- Motion Prediction with Gaussian Processes for Safe Human-Robot Interaction in Virtual Environments [1.677718351174347]
Collaborative robots must be safe to operate alongside humans to minimize the risk of accidental collisions.
This research aims to improve the efficiency of a collaborative robot while improving the safety of the human user.
arXiv Detail & Related papers (2024-05-15T05:51:41Z)
- COBRA-PPM: A Causal Bayesian Reasoning Architecture Using Probabilistic Programming for Robot Manipulation Under Uncertainty [4.087774077861305]
We introduce COBRA-PPM, a novel causal Bayesian reasoning architecture that combines causal Bayesian networks and probabilistic programming to perform interventional inference for robot manipulation under uncertainty.
We demonstrate its capabilities through high-fidelity experiments on an exemplar block stacking task, where it predicts manipulation outcomes with high accuracy (Pred Acc: 88.6%) and performs greedy next-best action selection with a 94.2% task success rate.
arXiv Detail & Related papers (2024-03-21T15:36:26Z)
- Learning Vision-based Pursuit-Evasion Robot Policies [54.52536214251999]
We develop a fully-observable robot policy that generates supervision for a partially-observable one.
We deploy our policy on a physical quadruped robot with an RGB-D camera on pursuit-evasion interactions in the wild.
arXiv Detail & Related papers (2023-08-30T17:59:05Z)
- Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z)
- Model Predictive Control for Fluid Human-to-Robot Handovers [50.72520769938633]
Planning motions that take human comfort into account is typically not part of the human-robot handover process.
We propose to generate smooth motions via an efficient model-predictive control framework.
We conduct human-to-robot handover experiments on a diverse set of objects with several users.
arXiv Detail & Related papers (2022-03-31T23:08:20Z)
- Intention Aware Robot Crowd Navigation with Attention-Based Interaction Graph [3.8461692052415137]
We study the problem of safe and intention-aware robot navigation in dense and interactive crowds.
We propose a novel recurrent graph neural network with attention mechanisms to capture heterogeneous interactions among agents.
We demonstrate that our method enables the robot to achieve good navigation performance and non-invasiveness in challenging crowd navigation scenarios.
arXiv Detail & Related papers (2022-03-03T16:26:36Z)
- Vision-Based Mobile Robotics Obstacle Avoidance With Deep Reinforcement Learning [49.04274612323564]
Obstacle avoidance is a fundamental and challenging problem for autonomous navigation of mobile robots.
In this paper, we consider the problem of obstacle avoidance in simple 3D environments where the robot has to solely rely on a single monocular camera.
We tackle the obstacle avoidance problem as a data-driven end-to-end deep learning approach.
arXiv Detail & Related papers (2021-03-08T13:05:46Z)
- Passing Through Narrow Gaps with Deep Reinforcement Learning [2.299414848492227]
In this paper we present a deep reinforcement learning method for autonomously navigating through small gaps.
We first learn a gap behaviour policy to get through small gaps, where contact between the robot and the gap may be required.
In simulation experiments, our approach achieves 93% success rate when the gap behaviour is activated manually by an operator.
In real robot experiments, our approach achieves a success rate of 73% with manual activation, and 40% with autonomous behaviour selection.
arXiv Detail & Related papers (2021-03-06T00:10:41Z)
- Where is my hand? Deep hand segmentation for visual self-recognition in humanoid robots [129.46920552019247]
We propose the use of a Convolutional Neural Network (CNN) to segment the robot hand from an image in an egocentric view.
We fine-tuned the Mask R-CNN network for the specific task of segmenting the hand of the humanoid robot Vizzy.
arXiv Detail & Related papers (2021-02-09T10:34:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.