Deep 6-DoF Tracking of Unknown Objects for Reactive Grasping
- URL: http://arxiv.org/abs/2103.05401v2
- Date: Wed, 10 Mar 2021 05:23:11 GMT
- Title: Deep 6-DoF Tracking of Unknown Objects for Reactive Grasping
- Authors: Marc Tuscher, Julian Hörz, Danny Driess, Marc Toussaint
- Abstract summary: Practical applications occur in many real-world settings where robots need to interact with an unknown environment.
We tackle the problem of reactive grasping by proposing a method for unknown object tracking, grasp point sampling and dynamic trajectory planning.
We propose a robotic manipulation system, which is able to grasp a wide variety of formerly unseen objects and is robust against object perturbations and inferior grasping points.
- Score: 19.43152908750153
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Robotic manipulation of unknown objects is an important field of research.
Practical applications occur in many real-world settings where robots need to
interact with an unknown environment. We tackle the problem of reactive
grasping by proposing a method for unknown object tracking, grasp point
sampling and dynamic trajectory planning. Our object tracking method combines
Siamese Networks with an Iterative Closest Point approach for pointcloud
registration into a method for 6-DoF unknown object tracking. The method does
not require further training and is robust to noise and occlusion. We propose a
robotic manipulation system, which is able to grasp a wide variety of formerly
unseen objects and is robust against object perturbations and inferior grasping
points.
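The abstract combines Siamese Networks with Iterative Closest Point (ICP) for point-cloud registration. As a minimal sketch of the ICP component only (a generic point-to-point ICP in NumPy/SciPy, not the authors' implementation; function names are illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

def icp(src, dst, iters=30, tol=1e-6):
    """Point-to-point ICP: align src to dst, returning the moved cloud
    and the accumulated rigid transform (R, t)."""
    tree = cKDTree(dst)
    cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(iters):
        dists, idx = tree.query(cur)           # closest-point correspondences
        R, t = best_fit_transform(cur, dst[idx])
        cur = cur @ R.T + t                    # apply the incremental update
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dists.mean()
        if abs(prev_err - err) < tol:          # stop once the error plateaus
            break
        prev_err = err
    return cur, R_total, t_total
```

In a tracking loop such a routine would refine the pose estimate between frames; the paper's robustness to noise and occlusion comes from pairing registration with the learned Siamese tracker, which this sketch does not cover.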
Related papers
- Language-Driven 6-DoF Grasp Detection Using Negative Prompt Guidance [13.246380364455494]
We present a new approach for language-driven 6-DoF grasp detection in cluttered point clouds.
The proposed negative prompt strategy directs the detection process toward the desired object while steering away from unwanted ones.
Our method enables an end-to-end framework where humans can command the robot to grasp desired objects in a cluttered scene using natural language.
arXiv Detail & Related papers (2024-07-18T18:24:51Z)
- SalienDet: A Saliency-based Feature Enhancement Algorithm for Object Detection for Autonomous Driving [160.57870373052577]
We propose a saliency-based OD algorithm (SalienDet) to detect unknown objects.
Our SalienDet utilizes a saliency-based algorithm to enhance image features for object proposal generation.
We design a dataset relabeling approach to differentiate the unknown objects from all objects in training sample set to achieve Open-World Detection.
arXiv Detail & Related papers (2023-05-11T16:19:44Z)
- Open-Set Object Detection Using Classification-free Object Proposal and Instance-level Contrastive Learning [25.935629339091697]
Open-set object detection (OSOD) is a promising direction that comprises two subtasks: separating objects from the background, and open-set object classification.
We present Openset RCNN to address the challenging OSOD.
We show that our Openset RCNN can endow the robot with an open-set perception ability to support robotic rearrangement tasks in cluttered environments.
arXiv Detail & Related papers (2022-11-21T15:00:04Z)
- SafePicking: Learning Safe Object Extraction via Object-Level Mapping [19.502587411252946]
We present a system, SafePicking, that integrates object-level mapping and learning-based motion planning.
Planning is done by learning a deep Q-network that receives observations of predicted poses and a depth-based heightmap to output a motion trajectory.
Our results show that the observation fusion of poses and depth-sensing gives both better performance and robustness to the model.
arXiv Detail & Related papers (2022-02-11T18:55:10Z)
- INVIGORATE: Interactive Visual Grounding and Grasping in Clutter [56.00554240240515]
INVIGORATE is a robot system that interacts with humans through natural language and grasps a specified object in clutter.
We train separate neural networks for object detection, for visual grounding, for question generation, and for OBR detection and grasping.
We build a partially observable Markov decision process (POMDP) that integrates the learned neural network modules.
arXiv Detail & Related papers (2021-08-25T07:35:21Z)
- Learning to Track with Object Permanence [61.36492084090744]
We introduce an end-to-end trainable approach for joint object detection and tracking.
Our model, trained jointly on synthetic and real data, outperforms the state of the art on KITTI, and MOT17 datasets.
arXiv Detail & Related papers (2021-03-26T04:43:04Z)
- Detecting Invisible People [58.49425715635312]
We re-purpose tracking benchmarks and propose new metrics for the task of detecting invisible objects.
We demonstrate that current detection and tracking systems perform dramatically worse on this task.
We also build dynamic models that explicitly reason in 3D, making use of observations produced by state-of-the-art monocular depth estimation networks.
arXiv Detail & Related papers (2020-12-15T16:54:45Z)
- Reactive Human-to-Robot Handovers of Arbitrary Objects [57.845894608577495]
We present a vision-based system that enables human-to-robot handovers of unknown objects.
Our approach combines closed-loop motion planning with real-time, temporally-consistent grasp generation.
We demonstrate the generalizability, usability, and robustness of our approach on a novel benchmark set of 26 diverse household objects.
arXiv Detail & Related papers (2020-11-17T21:52:22Z)
- Multi-Agent Active Search using Realistic Depth-Aware Noise Model [8.520962086877548]
Active search for objects of interest in an unknown environment has many robotics applications including search and rescue, detecting gas leaks or locating animal poachers.
Existing algorithms often prioritize the location accuracy of objects of interest while other practical issues such as the reliability of object detection as a function of distance and lines of sight remain largely ignored.
We present an algorithm called Noise-Aware Thompson Sampling (NATS) that addresses these issues for multiple ground-based robots performing active search considering two sources of sensory information from monocular optical imagery and depth maps.
arXiv Detail & Related papers (2020-11-09T23:20:55Z)
- Occlusion-Aware Search for Object Retrieval in Clutter [4.693170687870612]
We address the manipulation task of retrieving a target object from a cluttered shelf.
When the target object is hidden, the robot must search through the clutter to retrieve it.
We present a data-driven hybrid planner for generating occlusion-aware actions in closed-loop.
arXiv Detail & Related papers (2020-11-06T13:15:27Z)
- Goal-Auxiliary Actor-Critic for 6D Robotic Grasping with Point Clouds [62.013872787987054]
We propose a new method for learning closed-loop control policies for 6D grasping.
Our policy takes a segmented point cloud of an object from an egocentric camera as input, and outputs continuous 6D control actions of the robot gripper for grasping the object.
arXiv Detail & Related papers (2020-10-02T07:42:00Z)
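Closed-loop 6D grasping policies of the kind summarized in the last entry map a segmented point cloud directly to a continuous gripper action. A minimal, order-invariant sketch of that mapping (a PointNet-style per-point layer with max pooling; the weights here are random placeholders for illustration, not the paper's trained architecture):

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder weights for illustration only; a real policy would be trained.
W1 = rng.standard_normal((3, 64)) * 0.1   # per-point feature lift
W2 = rng.standard_normal((64, 6)) * 0.1   # pooled features -> 6-DoF action

def policy(points):
    """Map an (N, 3) segmented point cloud to a continuous 6-DoF action
    (e.g. translation xyz plus an axis-angle rotation): a shared per-point
    layer, a symmetric max pool over points, then an action head."""
    feats = np.maximum(points @ W1, 0.0)   # shared per-point ReLU layer
    pooled = feats.max(axis=0)             # order-invariant pooling
    return np.tanh(pooled @ W2)            # bounded action in (-1, 1)^6

cloud = rng.random((128, 3))
action = policy(cloud)
```

Because the pooling is symmetric, the output is identical under any reordering of the input points, which is why such architectures suit raw point-cloud input.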
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences.