SimNet: Enabling Robust Unknown Object Manipulation from Pure Synthetic Data via Stereo
- URL: http://arxiv.org/abs/2106.16118v1
- Date: Wed, 30 Jun 2021 15:18:14 GMT
- Title: SimNet: Enabling Robust Unknown Object Manipulation from Pure Synthetic Data via Stereo
- Authors: Thomas Kollar, Michael Laskey, Kevin Stone, Brijen Thananjeyan, Mark Tjersland
- Abstract summary: SimNet is trained as a single multi-headed neural network using simulated stereo data.
SimNet is evaluated on 2D car detection, unknown object detection, and deformable object keypoint detection.
By inferring grasp positions using the OBB and keypoint predictions, SimNet can be used to perform end-to-end manipulation of unknown objects.
- Score: 4.317104502755003
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Robot manipulation of unknown objects in unstructured environments is a
challenging problem due to the variety of shapes, materials, arrangements and
lighting conditions. Even with large-scale real-world data collection, robust
perception and manipulation of transparent and reflective objects across
various lighting conditions remain challenging. To address these challenges we
propose an approach to performing sim-to-real transfer of robotic perception.
The underlying model, SimNet, is trained as a single multi-headed neural
network using simulated stereo data as input and simulated object segmentation
masks, 3D oriented bounding boxes (OBBs), object keypoints, and disparity as
output. A key component of SimNet is the incorporation of a learned stereo
sub-network that predicts disparity. SimNet is evaluated on 2D car detection,
unknown object detection, and deformable object keypoint detection and
significantly outperforms a baseline that uses a structured light RGB-D sensor.
By inferring grasp positions using the OBB and keypoint predictions, SimNet can
be used to perform end-to-end manipulation of unknown objects in both easy and
hard scenarios using our fleet of Toyota HSR robots in four home environments.
In unknown object grasping experiments, the predictions from the baseline RGB-D
network and SimNet enable successful grasps of most of the easy objects.
However, the RGB-D baseline only grasps 35% of the hard (e.g., transparent)
objects, while SimNet grasps 95%, suggesting that SimNet can enable robust
manipulation of unknown objects, including transparent objects, in unknown
environments.
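To make the architecture described in the abstract concrete, here is a minimal PyTorch sketch of a SimNet-style multi-headed network: a shared backbone fuses features from the stereo pair, and separate heads predict disparity, segmentation masks, OBB parameters, and keypoint heatmaps. All channel sizes, head output shapes, and the feature fusion scheme are illustrative assumptions, not the paper's exact design; in particular, SimNet's learned stereo sub-network is approximated here by naive feature concatenation.

```python
import torch
import torch.nn as nn

class StereoBackbone(nn.Module):
    """Encodes the left/right images and fuses them into one feature map."""
    def __init__(self, feat=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Naive stand-in for SimNet's learned stereo (disparity) sub-network.
        self.fuse = nn.Conv2d(2 * feat, feat, 1)

    def forward(self, left, right):
        return self.fuse(torch.cat([self.encoder(left), self.encoder(right)], dim=1))

class SimNetSketch(nn.Module):
    """One shared backbone; four heads: disparity, masks, OBBs, keypoints."""
    def __init__(self, feat=64, n_classes=2, n_keypoints=8):
        super().__init__()
        self.backbone = StereoBackbone(feat)
        self.disparity = nn.Conv2d(feat, 1, 3, padding=1)             # per-pixel disparity
        self.segmentation = nn.Conv2d(feat, n_classes, 3, padding=1)  # mask logits
        self.keypoints = nn.Conv2d(feat, n_keypoints, 3, padding=1)   # keypoint heatmaps
        self.obb = nn.Conv2d(feat, 8, 3, padding=1)                   # OBB regression targets

    def forward(self, left, right):
        f = self.backbone(left, right)
        return {"disparity": self.disparity(f), "masks": self.segmentation(f),
                "keypoints": self.keypoints(f), "obb": self.obb(f)}

left, right = torch.randn(1, 3, 256, 512), torch.randn(1, 3, 256, 512)
outputs = SimNetSketch()(left, right)
print({k: tuple(v.shape) for k, v in outputs.items()})
```

The single shared backbone is the point of the design: because the disparity head is supervised by simulated stereo geometry, which transfers well to real images, the features feeding the other heads are pushed toward geometry rather than simulation-specific texture.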
Related papers
- ClearDepth: Enhanced Stereo Perception of Transparent Objects for Robotic Manipulation [18.140839442955485]
We develop a vision transformer-based algorithm for stereo depth recovery of transparent objects.
Our method incorporates a parameter-aligned, domain-adaptive, and physically realistic Sim2Real simulation for efficient data generation.
Our experimental results demonstrate the model's exceptional Sim2Real generalizability in real-world scenarios.
arXiv Detail & Related papers (2024-09-13T15:44:38Z)
- Close the Sim2real Gap via Physically-based Structured Light Synthetic Data Simulation [16.69742672616517]
We introduce an innovative structured light simulation system that generates both RGB and physically realistic depth images.
We create an RGBD dataset tailored for robotic industrial grasping scenarios.
By reducing the sim2real gap and enhancing deep learning training, we facilitate the application of deep learning models in industrial settings.
arXiv Detail & Related papers (2024-07-17T09:57:14Z)
- ASGrasp: Generalizable Transparent Object Reconstruction and Grasping from RGB-D Active Stereo Camera [9.212504138203222]
We propose ASGrasp, a 6-DoF grasp detection network that uses an RGB-D active stereo camera.
Our system distinguishes itself by its ability to directly utilize raw IR and RGB images for transparent object geometry reconstruction.
Our experiments demonstrate that ASGrasp can achieve over 90% success rate for generalizable transparent object grasping.
arXiv Detail & Related papers (2024-05-09T09:44:51Z)
- Closing the Visual Sim-to-Real Gap with Object-Composable NeRFs [59.12526668734703]
We introduce Composable Object Volume NeRF (COV-NeRF), an object-composable NeRF model that is the centerpiece of a real-to-sim pipeline.
COV-NeRF extracts objects from real images and composes them into new scenes, generating photorealistic renderings and many types of 2D and 3D supervision.
arXiv Detail & Related papers (2024-03-07T00:00:02Z)
- Learning Sim-to-Real Dense Object Descriptors for Robotic Manipulation [4.7246285569677315]
We present Sim-to-Real Dense Object Nets (SRDONs), a dense object descriptor that not only understands the object via appropriate representation but also maps simulated and real data to a unified feature space with pixel consistency.
We demonstrate in experiments that pre-trained SRDONs significantly improve performances on unseen objects and unseen visual environments for various robotic tasks with zero real-world training.
arXiv Detail & Related papers (2023-04-18T02:28:55Z)
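The pixel-consistent unified feature space described in the SRDONs entry above suggests a contrastive descriptor objective. The sketch below is one plausible instantiation in PyTorch, assuming matched and non-matched sim/real pixel pairs are available; the loss form, margin, and sampling are assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def pixel_consistency_loss(feat_sim, feat_real, matches, non_matches, margin=0.5):
    """feat_*: (C, H, W) descriptor maps from the sim and real images.
    matches / non_matches: (N, 4) integer pixel pairs (u_sim, v_sim, u_real, v_real)."""
    def gather(feat, u, v):
        return feat[:, v, u].t()  # (N, C) descriptors at the given pixels

    d_match = F.pairwise_distance(
        gather(feat_sim, matches[:, 0], matches[:, 1]),
        gather(feat_real, matches[:, 2], matches[:, 3]))
    d_non = F.pairwise_distance(
        gather(feat_sim, non_matches[:, 0], non_matches[:, 1]),
        gather(feat_real, non_matches[:, 2], non_matches[:, 3]))

    # Pull matched sim/real descriptors together; push non-matches at least
    # `margin` apart (a standard contrastive form, assumed here).
    return d_match.pow(2).mean() + F.relu(margin - d_non).pow(2).mean()

feat_sim, feat_real = torch.randn(16, 64, 64), torch.randn(16, 64, 64)
pairs = torch.randint(0, 64, (128, 4))
print(pixel_consistency_loss(feat_sim, feat_real, pairs, pairs.flip(0)).item())
```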
- Towards Multimodal Multitask Scene Understanding Models for Indoor Mobile Agents [49.904531485843464]
In this paper, we discuss the main challenge: insufficient, or even no, labeled data for real-world indoor environments.
We describe MMISM (Multi-modality input Multi-task output Indoor Scene understanding Model) to tackle the above challenges.
MMISM considers RGB images as well as sparse Lidar points as inputs and 3D object detection, depth completion, human pose estimation, and semantic segmentation as output tasks.
We show that MMISM performs on par or even better than single-task models.
arXiv Detail & Related papers (2022-09-27T04:49:19Z)
- Optical flow-based branch segmentation for complex orchard environments [73.11023209243326]
We train a neural network system entirely in simulation, using only simulated RGB data and optical flow.
The resulting network performs foreground segmentation of branches in a busy orchard environment without additional real-world training and without any special setup or equipment beyond a standard camera.
Our results show that our system is highly accurate and, when compared to a network using manually labeled RGBD data, achieves significantly more consistent and robust performance across environments that differ from the training set.
arXiv Detail & Related papers (2022-02-26T03:38:20Z)
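The branch-segmentation entry above trains on simulated RGB plus optical flow. As a rough illustration of that input pipeline, the sketch below stacks a dense flow field, computed here with OpenCV's generic Farneback estimator as a stand-in for whatever flow method the paper uses, onto the RGB frame to form a 5-channel network input.

```python
import cv2
import numpy as np

def rgb_plus_flow(frame_prev, frame_curr):
    """Stack the current RGB frame with a 2-channel dense flow field -> (H, W, 5)."""
    gray_prev = cv2.cvtColor(frame_prev, cv2.COLOR_BGR2GRAY)
    gray_curr = cv2.cvtColor(frame_curr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(
        gray_prev, gray_curr, None, pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    rgb = frame_curr.astype(np.float32) / 255.0
    return np.concatenate([rgb, flow], axis=-1)  # 5-channel segmentation input

prev = np.random.randint(0, 255, (240, 320, 3), dtype=np.uint8)
curr = np.random.randint(0, 255, (240, 320, 3), dtype=np.uint8)
print(rgb_plus_flow(prev, curr).shape)  # (240, 320, 5)
```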
- Learnable Online Graph Representations for 3D Multi-Object Tracking [156.58876381318402]
We propose a unified, learning-based approach to the 3D MOT problem.
We employ a Neural Message Passing network for data association that is fully trainable.
We show the merit of the proposed approach on the publicly available nuScenes dataset by achieving state-of-the-art performance of 65.6% AMOTA and 58% fewer ID-switches.
arXiv Detail & Related papers (2021-04-23T17:59:28Z)
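The trainable data-association network in the 3D MOT entry above is built on neural message passing. The sketch below shows one generic message-passing round over a detection/track graph plus a per-edge association logit; layer sizes and the aggregation scheme are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MessagePassingRound(nn.Module):
    """One round of edge/node updates plus a per-edge association logit."""
    def __init__(self, dim=32):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU())
        self.node_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
        self.score = nn.Linear(dim, 1)  # association logit per edge

    def forward(self, nodes, edges, edge_index):
        # nodes: (N, dim); edges: (E, dim); edge_index: (2, E) src/dst node ids.
        src, dst = edge_index
        edges = self.edge_mlp(torch.cat([nodes[src], nodes[dst], edges], dim=-1))
        # Sum incoming edge messages into each destination node.
        agg = torch.zeros_like(nodes).index_add_(0, dst, edges)
        nodes = self.node_mlp(torch.cat([nodes, agg], dim=-1))
        return nodes, edges, self.score(edges).squeeze(-1)

nodes, edges = torch.randn(5, 32), torch.randn(7, 32)
edge_index = torch.randint(0, 5, (2, 7))
nodes, edges, logits = MessagePassingRound()(nodes, edges, edge_index)
print(logits.shape)  # (7,) -- one association score per candidate edge
```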
- Point Cloud Based Reinforcement Learning for Sim-to-Real and Partial Observability in Visual Navigation [62.22058066456076]
Reinforcement Learning (RL) provides powerful tools for solving complex robotic tasks.
However, policies trained in simulation often fail when deployed directly in the real world, a difficulty known as the sim-to-real transfer problem.
We propose a method that learns on an observation space constructed by point clouds and environment randomization.
arXiv Detail & Related papers (2020-07-27T17:46:59Z)
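The point-cloud RL entry above builds its observation space from point clouds with environment randomization. The sketch below back-projects a depth image through a pinhole model and applies simple noise and dropout randomization; the camera intrinsics and noise parameters are placeholder assumptions, not values from the paper.

```python
import numpy as np

def depth_to_pointcloud(depth, fx=525.0, fy=525.0, cx=160.0, cy=120.0):
    """Back-project a (H, W) metric depth image through a pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

def randomized_observation(depth, noise_std=0.005, drop_prob=0.1, rng=None):
    """Point-cloud observation with simple sensor-noise and dropout randomization."""
    rng = rng if rng is not None else np.random.default_rng()
    pts = depth_to_pointcloud(depth)
    pts = pts + rng.normal(0.0, noise_std, pts.shape)  # per-point Gaussian noise
    keep = rng.random(len(pts)) > drop_prob            # randomly drop points
    return pts[keep]

depth = np.full((240, 320), 1.5)  # toy input: flat surface 1.5 m away
print(randomized_observation(depth).shape)
```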
- Transferable Active Grasping and Real Embodied Dataset [48.887567134129306]
We show how to search for feasible grasping viewpoints using hand-mounted RGB-D cameras.
A practical three-stage transferable active grasping pipeline is developed that adapts to unseen cluttered scenes.
In our pipeline, we propose a novel mask-guided reward to overcome the sparse reward issue in grasping and ensure category-irrelevant behavior.
arXiv Detail & Related papers (2020-04-28T08:15:35Z)
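The mask-guided reward in the active-grasping entry above addresses sparse rewards by densifying the grasp signal with a predicted object mask. The sketch below is one simple way such a reward could be shaped; the exact formulation in the paper may differ.

```python
import numpy as np

def mask_guided_reward(object_mask, gripper_mask, success, success_bonus=10.0):
    """object_mask / gripper_mask: boolean (H, W) arrays in the same image frame.
    Returns a dense, category-irrelevant shaping term plus a success bonus."""
    overlap = np.logical_and(object_mask, gripper_mask).sum()
    shaping = overlap / max(object_mask.sum(), 1)  # fraction of the object covered
    return float(shaping) + (success_bonus if success else 0.0)

obj = np.zeros((64, 64), dtype=bool); obj[20:40, 20:40] = True
grip = np.zeros((64, 64), dtype=bool); grip[25:35, 25:35] = True
print(mask_guided_reward(obj, grip, success=False))  # 0.25
```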
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.