Collision-Aware Target-Driven Object Grasping in Constrained
Environments
- URL: http://arxiv.org/abs/2104.00776v1
- Date: Thu, 1 Apr 2021 21:44:07 GMT
- Title: Collision-Aware Target-Driven Object Grasping in Constrained
Environments
- Authors: Xibai Lou, Yang Yang and Changhyun Choi
- Abstract summary: We propose a novel Collision-Aware Reachability Predictor (CARP) for 6-DoF grasping systems.
The CARP learns to estimate the collision-free probabilities for grasp poses and significantly improves grasping in challenging environments.
The experiments in both simulation and the real world show that our approach achieves more than 75% grasping rate on novel objects.
- Score: 10.934615956723672
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Grasping a novel target object in constrained environments (e.g., walls,
bins, and shelves) requires intensive reasoning about grasp pose reachability
to avoid collisions with the surrounding structures. Typical 6-DoF robotic
grasping systems rely on prior knowledge about the environment and
intensive planning computation, which generalizes poorly and is inefficient. In
contrast, we propose a novel Collision-Aware Reachability Predictor (CARP) for
6-DoF grasping systems. The CARP learns to estimate the collision-free
probabilities for grasp poses and significantly improves grasping in
challenging environments. The deep neural networks in our approach are trained
fully by self-supervision in simulation. The experiments in both simulation and
the real world show that our approach achieves more than 75% grasping rate on
novel objects in various surrounding structures. The ablation study
demonstrates the effectiveness of the CARP, which improves the 6-DoF grasping
rate by 95.7%.
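The core mechanism described above, scoring candidate 6-DoF grasp poses by their probability of being collision-free and using that score to filter or re-rank them, can be pictured with a minimal sketch. The network layout, pose encoding, and ranking rule below are illustrative assumptions only; the paper's actual networks operate on point-cloud observations and are trained fully by self-supervision in simulation.

```python
# Minimal sketch of a collision-aware reachability predictor in the spirit of CARP.
# All dimensions, the MLP architecture, and the ranking rule are assumptions for
# illustration; they are not the paper's actual design.
import torch
import torch.nn as nn


class CollisionAwareReachabilityPredictor(nn.Module):
    """Maps a scene feature and a 6-DoF grasp pose to a collision-free probability."""

    def __init__(self, scene_feat_dim: int = 128, pose_dim: int = 7):
        super().__init__()
        # pose_dim = 7 assumes position (x, y, z) plus an orientation quaternion.
        self.mlp = nn.Sequential(
            nn.Linear(scene_feat_dim + pose_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, scene_feat: torch.Tensor, grasp_pose: torch.Tensor) -> torch.Tensor:
        # Returns P(collision-free) in [0, 1] for each candidate grasp.
        logits = self.mlp(torch.cat([scene_feat, grasp_pose], dim=-1))
        return torch.sigmoid(logits).squeeze(-1)


def rank_grasps(carp: nn.Module,
                scene_feat: torch.Tensor,      # shape (1, scene_feat_dim)
                grasp_poses: torch.Tensor,     # shape (N, pose_dim)
                grasp_quality: torch.Tensor) -> torch.Tensor:  # shape (N,)
    """Re-rank grasp candidates by grasp quality weighted by collision-free probability."""
    with torch.no_grad():
        p_free = carp(scene_feat.expand(grasp_poses.shape[0], -1), grasp_poses)
    return torch.argsort(grasp_quality * p_free, descending=True)
```

In this framing, candidates from any 6-DoF grasp generator can be filtered before planning, which is the role the abstract attributes to the CARP.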
Related papers
- Potential Field as Scene Affordance for Behavior Change-Based Visual Risk Object Identification [4.896236083290351]
We study behavior change-based visual risk object identification (Visual-ROI).
Existing methods often show significant limitations in spatial accuracy and temporal consistency.
We propose a new framework with a Bird's Eye View representation to overcome these challenges.
arXiv Detail & Related papers (2024-09-24T08:17:50Z)
- Safe and Efficient Path Planning under Uncertainty via Deep Collision Probability Fields [21.741354016294476]
Estimating collision probabilities is crucial to ensure safety during path planning.
Deep Collision Probability Fields is a neural-based approach for computing collision probabilities of arbitrary objects.
Our approach relegates the computationally intensive, sampling-based estimation of collision probabilities to the training step (a minimal sketch of this idea follows the list below).
arXiv Detail & Related papers (2024-09-06T14:28:41Z)
- Latent Exploration for Reinforcement Learning [87.42776741119653]
In Reinforcement Learning, agents learn policies by exploring and interacting with the environment.
We propose LATent TIme-Correlated Exploration (Lattice), a method to inject temporally-correlated noise into the latent state of the policy network.
arXiv Detail & Related papers (2023-05-31T17:40:43Z)
- A Contextual Bandit Approach for Learning to Plan in Environments with Probabilistic Goal Configurations [20.15854546504947]
We propose a modular framework for object-nav that is able to efficiently search indoor environments for not just static objects but also movable objects.
Our contextual-bandit agent efficiently explores the environment by showing optimism in the face of uncertainty.
We evaluate our algorithms in two simulated environments and a real-world setting to demonstrate high sample efficiency and reliability.
arXiv Detail & Related papers (2022-11-29T15:48:54Z)
- COPILOT: Human-Environment Collision Prediction and Localization from Egocentric Videos [62.34712951567793]
The ability to forecast human-environment collisions from egocentric observations is vital to enable collision avoidance in applications such as VR, AR, and wearable assistive robotics.
We introduce the challenging problem of predicting collisions in diverse environments from multi-view egocentric videos captured from body-mounted cameras.
We propose a transformer-based model called COPILOT to perform collision prediction and localization simultaneously.
arXiv Detail & Related papers (2022-10-04T17:49:23Z)
- Physical Attack on Monocular Depth Estimation with Optimal Adversarial Patches [18.58673451901394]
We develop an attack against learning-based Monocular Depth Estimation (MDE).
We balance the stealth and effectiveness of our attack with object-oriented adversarial design, sensitive region localization, and natural style camouflage.
Experimental results show that our method can generate stealthy, effective, and robust adversarial patches for different target objects and models.
arXiv Detail & Related papers (2022-07-11T08:59:09Z)
- Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon [79.33449311057088]
We study a new type of optical adversarial examples, in which the perturbations are generated by a very common natural phenomenon: shadows.
We extensively evaluate the effectiveness of this new attack on both simulated and real-world environments.
arXiv Detail & Related papers (2022-03-08T02:40:18Z)
- Congestion-aware Multi-agent Trajectory Prediction for Collision Avoidance [110.63037190641414]
We propose to learn congestion patterns explicitly and devise a novel "Sense-Learn-Reason-Predict" framework.
By decomposing the learning phases into two stages, a "student" can learn contextual cues from a "teacher" while generating collision-free trajectories.
In experiments, we demonstrate that the proposed model is able to generate collision-free trajectory predictions in a synthetic dataset.
arXiv Detail & Related papers (2021-03-26T02:42:33Z)
- Contact-GraspNet: Efficient 6-DoF Grasp Generation in Cluttered Scenes [50.303361537562715]
We propose an end-to-end network that efficiently generates a distribution of 6-DoF parallel-jaw grasps.
By rooting the full 6-DoF grasp pose and width in the observed point cloud, we can reduce the dimensionality of our grasp representation to 4-DoF (a rough geometric sketch of this parameterization follows the list below).
In a robotic grasping study of unseen objects in structured clutter we achieve over 90% success rate, cutting the failure rate in half compared to a recent state-of-the-art method.
arXiv Detail & Related papers (2021-03-25T20:33:29Z)
- Object Rearrangement Using Learned Implicit Collision Functions [61.90305371998561]
We propose a learned collision model that accepts scene and query object point clouds and predicts collisions for 6-DoF object poses within the scene.
We leverage the learned collision model as part of a model predictive path integral (MPPI) policy in a tabletop rearrangement task.
The learned model outperforms both traditional pipelines and learned ablations by 9.8% in accuracy on a dataset of simulated collision queries.
arXiv Detail & Related papers (2020-11-21T05:36:06Z)
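For the Deep Collision Probability Fields entry above, the key idea of paying the sampling cost only while building training labels can be sketched as below. The 2-D point geometry, Gaussian position noise, and network size are assumptions chosen for a toy illustration, not the paper's setup.

```python
# Illustrative sketch: expensive Monte Carlo collision estimates are used only to
# label a training set; the learned field is then queried cheaply during planning.
# Geometry, noise model, and architecture are assumed for this toy example.
import numpy as np
import torch
import torch.nn as nn


def mc_collision_probability(pos, obstacle, radius, sigma, n=1024, rng=None):
    """Training-time label: Monte Carlo estimate of P(collision) under position noise."""
    rng = rng if rng is not None else np.random.default_rng(0)
    samples = pos + rng.normal(scale=sigma, size=(n, 2))
    return float(np.mean(np.linalg.norm(samples - obstacle, axis=1) < radius))


class CollisionProbabilityField(nn.Module):
    """Planning-time query: maps a 2-D position to a predicted collision probability."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, xy: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(xy)).squeeze(-1)


# Build labels offline (slow), then train the field with a BCE or MSE loss and
# call it inside the planner with a single forward pass per query (fast).
obstacle, radius, sigma = np.array([0.5, 0.0]), 0.3, 0.1
queries = np.random.default_rng(1).uniform(-1.0, 1.0, size=(256, 2))
labels = np.array([mc_collision_probability(q, obstacle, radius, sigma) for q in queries])
```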
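For the Contact-GraspNet entry above, the 4-DoF parameterization can be pictured as rebuilding a full 6-DoF pose from an observed contact point plus predicted directions and a width. The frame convention and gripper offset in this sketch are assumptions, not the paper's exact formulation.

```python
# Rough geometric sketch of a contact-anchored grasp representation: the contact
# point comes from the observed point cloud, so the network only needs to predict
# an approach direction, a baseline (finger-to-finger) direction, and a width.
# The axis ordering and gripper_depth offset are illustrative assumptions.
import numpy as np


def grasp_pose_from_contact(contact, approach, baseline, width, gripper_depth=0.10):
    """Rebuild a 4x4 homogeneous grasp pose from a contact point, two directions, and a width."""
    a = approach / np.linalg.norm(approach)        # approach axis of the gripper
    b = baseline - a * np.dot(a, baseline)         # make the baseline orthogonal to the approach
    b /= np.linalg.norm(b)
    n = np.cross(a, b)                             # third axis completes a right-handed frame
    pose = np.eye(4)
    pose[:3, :3] = np.stack([b, n, a], axis=1)     # columns: baseline, normal, approach
    # Midpoint between the two fingertips, set back along the approach axis.
    pose[:3, 3] = contact + 0.5 * width * b - gripper_depth * a
    return pose


# Example: a contact on a tabletop object, grasped top-down.
pose = grasp_pose_from_contact(
    contact=np.array([0.40, 0.05, 0.12]),
    approach=np.array([0.0, 0.0, -1.0]),
    baseline=np.array([1.0, 0.0, 0.0]),
    width=0.06,
)
```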
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.