Learning from synthetic data generated with GRADE
- URL: http://arxiv.org/abs/2305.04282v2
- Date: Fri, 26 May 2023 09:26:06 GMT
- Title: Learning from synthetic data generated with GRADE
- Authors: Elia Bonetto and Chenghao Xu and Aamir Ahmad
- Abstract summary: We present a framework for generating realistic animated dynamic environments (GRADE) for robotics research.
GRADE supports full simulation control, ROS integration, and realistic physics, within an engine that produces high-visual-fidelity images and ground truth data.
We show that models trained using only synthetic data can generalize well to real-world images in the same application domain.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, synthetic data generation and realistic rendering have advanced
tasks like target tracking and human pose estimation. Simulations for most
robotics applications are obtained in (semi)static environments, with specific
sensors and low visual fidelity. To solve this, we present a fully customizable
framework for generating realistic animated dynamic environments (GRADE) for
robotics research, first introduced in [1]. GRADE supports full simulation
control, ROS integration, and realistic physics, within an engine that
produces high-visual-fidelity images and ground truth data. We use GRADE to
generate a dataset focused on indoor dynamic scenes with people and flying
objects. Using this, we evaluate the performance of YOLO and Mask R-CNN on the
tasks of segmenting and detecting people. Our results provide evidence that
using data generated with GRADE can improve the model performance when used for
a pre-training step. We also show that models trained using only synthetic
data can generalize well to real-world images in the same application domain,
such as those from the TUM-RGBD dataset. The code, results, trained models,
and the generated data are provided as open-source at
https://eliabntt.github.io/grade-rr.
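To make the pre-training recipe above concrete, here is a minimal sketch using torchvision's Mask R-CNN in a two-class (background + person) setup; the checkpoint name is a hypothetical placeholder rather than an artifact shipped with GRADE, and random tensors stand in for real annotated frames.

```python
# Minimal sketch, assuming torchvision >= 0.13. It mimics the recipe of
# synthetic pre-training followed by use on real images; the checkpoint
# path, class count, and random inputs are illustrative assumptions.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights=None, num_classes=2)  # background + person
# Hypothetical synthetic-data checkpoint; GRADE's released models may differ.
# model.load_state_dict(torch.load("grade_pretrained.pth", map_location="cpu"))

# One dummy fine-tuning step on a stand-in for a real annotated frame.
model.train()
images = [torch.rand(3, 480, 640)]
targets = [{
    "boxes": torch.tensor([[50.0, 60.0, 200.0, 300.0]]),    # x1, y1, x2, y2
    "labels": torch.tensor([1]),                             # 1 = person
    "masks": torch.zeros(1, 480, 640, dtype=torch.uint8),    # instance mask
}]
loss = sum(model(images, targets).values())
loss.backward()

# Inference on another random tensor standing in for a TUM-RGBD frame.
model.eval()
with torch.no_grad():
    pred = model([torch.rand(3, 480, 640)])[0]
print(pred["boxes"].shape, pred["masks"].shape)
```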
Related papers
- Close the Sim2real Gap via Physically-based Structured Light Synthetic Data Simulation [16.69742672616517]
We introduce an innovative structured light simulation system, generating both RGB and physically realistic depth images.
We create an RGBD dataset tailored for robotic industrial grasping scenarios.
By reducing the sim2real gap and enhancing deep learning training, we facilitate the application of deep learning models in industrial settings.
arXiv Detail & Related papers (2024-07-17T09:57:14Z)
- VR-based generation of photorealistic synthetic data for training hand-object tracking models [0.0]
"blender-hoisynth" is an interactive synthetic data generator based on the Blender software.
Users can interact with objects via virtual hands using standard Virtual Reality hardware.
We replace large parts of the training data in the well-known DexYCB dataset with hoisynth data and train a state-of-the-art HOI reconstruction model with it.
arXiv Detail & Related papers (2024-01-31T14:32:56Z)
- Learning Interactive Real-World Simulators [96.5991333400566]
We explore the possibility of learning a universal simulator of real-world interaction through generative modeling.
We use the simulator to train both high-level vision-language policies and low-level reinforcement learning policies.
Video captioning models can benefit from training with simulated experience, opening up even wider applications.
arXiv Detail & Related papers (2023-10-09T19:42:22Z)
- Development of a Realistic Crowd Simulation Environment for Fine-grained Validation of People Tracking Methods [0.7223361655030193]
This work develops an extension of a crowd simulator (named CrowdSim2) and proves its usability for validating people-tracking algorithms.
The simulator is built on the popular Unity 3D engine, with particular emphasis on the realism of the environment.
Three tracking methods were used to validate the generated dataset: IOU-Tracker, Deep-Sort, and Deep-TAMA.
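Of those three, IOU-Tracker is simple enough to sketch directly; the version below is a hedged, minimal rendition of IoU-based tracking (greedy per-frame box association), not the authors' exact implementation.

```python
# Minimal IoU tracker sketch: greedily link detections across frames whenever
# box overlap exceeds sigma_iou. Boxes are (x1, y1, x2, y2) tuples.

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / (union + 1e-9)

def iou_track(frames, sigma_iou=0.5):
    finished, active, next_id = [], [], 0
    for dets in frames:                      # dets: list of boxes in one frame
        dets, still_active = list(dets), []
        for tr in active:
            best = max(dets, key=lambda d: iou(tr["boxes"][-1], d), default=None)
            if best is not None and iou(tr["boxes"][-1], best) >= sigma_iou:
                tr["boxes"].append(best)     # extend the matched track
                dets.remove(best)
                still_active.append(tr)
            else:
                finished.append(tr)          # no match: the track ends
        for d in dets:                       # leftover detections open tracks
            still_active.append({"id": next_id, "boxes": [d]})
            next_id += 1
        active = still_active
    return finished + active

# Toy run: one object drifting right for three frames yields a single track.
print(iou_track([[(0, 0, 10, 10)], [(2, 0, 12, 10)], [(4, 0, 14, 10)]]))
```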
arXiv Detail & Related papers (2023-04-26T09:29:58Z)
- Is synthetic data from generative models ready for image recognition? [69.42645602062024]
We study whether and how synthetic images generated from state-of-the-art text-to-image generation models can be used for image recognition tasks.
We showcase the strengths and shortcomings of synthetic data from existing generative models, and propose strategies for better applying synthetic data to recognition tasks.
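As a hedged sketch of the basic setup only (the paper's proposed strategies are not reproduced here), class-prompted images can be drawn from an off-the-shelf text-to-image pipeline and saved as recognition training data; the model id, prompts, and label set below are assumptions.

```python
# Minimal sketch using the Hugging Face diffusers API; requires a GPU and
# network access. The model id and label set are illustrative placeholders.
import os
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

os.makedirs("synthetic", exist_ok=True)
for label in ["golden retriever", "tabby cat"]:      # toy label set
    for i in range(4):                               # a few images per class
        image = pipe(f"a photo of a {label}", num_inference_steps=25).images[0]
        image.save(f"synthetic/{label.replace(' ', '_')}_{i}.png")
```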
arXiv Detail & Related papers (2022-10-14T06:54:24Z)
- Learning to Simulate Realistic LiDARs [66.7519667383175]
We introduce a pipeline for data-driven simulation of a realistic LiDAR sensor.
We show that our model can learn to encode realistic effects such as dropped points on transparent surfaces.
We use our technique to learn models of two distinct LiDAR sensors and use them to improve simulated LiDAR data accordingly.
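One way to picture such a pipeline, purely as an assumption-laden toy rather than the paper's model, is a small network that predicts a per-point drop probability from simulated features and then masks the simulated cloud:

```python
# Toy raydrop sketch: an MLP learns which simulated LiDAR points would be
# dropped. The features (range, intensity, transparency flag) and labels are
# synthetic stand-ins, not the paper's data or architecture.
import torch
import torch.nn as nn

torch.manual_seed(0)
n = 4096
feats = torch.rand(n, 3)                  # [range, intensity, is_transparent]
# Toy ground truth: points on transparent surfaces drop far more often.
dropped = (torch.rand(n) < 0.1 + 0.7 * feats[:, 2]).float()

mlp = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(mlp.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()
for _ in range(200):
    loss = loss_fn(mlp(feats).squeeze(1), dropped)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Apply the learned mask: keep points predicted unlikely to be dropped.
keep = torch.sigmoid(mlp(feats).squeeze(1)) < 0.5
print(f"kept {keep.float().mean():.1%} of simulated points")
```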
arXiv Detail & Related papers (2022-09-22T13:12:54Z)
- Hands-Up: Leveraging Synthetic Data for Hands-On-Wheel Detection [0.38233569758620045]
This work demonstrates the use of synthetic photo-realistic in-cabin data to train a Driver Monitoring System.
We show how performing error analysis and generating the missing edge-cases in our platform boosts performance.
This showcases the ability of human-centric synthetic data to generalize well to the real world.
arXiv Detail & Related papers (2022-05-31T23:34:12Z)
- Towards Optimal Strategies for Training Self-Driving Perception Models in Simulation [98.51313127382937]
We focus on the use of labels in the synthetic domain alone.
Our approach introduces both a way to learn neural-invariant representations and a theoretically inspired view on how to sample the data from the simulator.
We showcase our approach on the bird's-eye-view vehicle segmentation task with multi-sensor data.
arXiv Detail & Related papers (2021-11-15T18:37:43Z)
- DriveGAN: Towards a Controllable High-Quality Neural Simulation [147.6822288981004]
We introduce a novel high-quality neural simulator referred to as DriveGAN.
DriveGAN achieves controllability by disentangling different components without supervision.
We train DriveGAN on multiple datasets, including 160 hours of real-world driving data.
arXiv Detail & Related papers (2021-04-30T15:30:05Z)
- UnrealROX+: An Improved Tool for Acquiring Synthetic Data from Virtual 3D Environments [14.453602631430508]
We present an improved version of UnrealROX, a tool to generate synthetic data from virtual 3D environments for robotic vision tasks.
UnrealROX+ includes new features such as albedo generation and a Python API for interacting with the virtual environment from deep learning frameworks.
arXiv Detail & Related papers (2021-04-23T18:45:42Z)
- Meta-Sim2: Unsupervised Learning of Scene Structure for Synthetic Data Generation [88.04759848307687]
In Meta-Sim2, we aim to learn the scene structure in addition to parameters, which is a challenging problem due to its discrete nature.
We use Reinforcement Learning to train our model, and design a feature space divergence between our synthesized and target images that is key to successful training.
We also show that this leads to downstream improvement in the performance of an object detector trained on our generated dataset as opposed to other baseline simulation methods.
arXiv Detail & Related papers (2020-08-20T17:28:45Z)
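As one hedged reading of that "feature space divergence" ingredient (the exact objective in Meta-Sim2 is not reproduced here), a kernel MMD between feature batches of synthesized and target images can serve as such a training signal:

```python
# Biased RBF-kernel MMD^2 estimate between two feature batches; the feature
# dimension, bandwidth, and random inputs are illustrative assumptions.
import torch

def mmd_rbf(x, y, sigma=1.0):
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma**2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

# Stand-ins for features of synthesized vs. target images (e.g. from a
# frozen CNN backbone); a generator would be trained to shrink this value.
synth = torch.randn(64, 128)
target = torch.randn(64, 128) + 0.5
print(f"MMD^2 = {mmd_rbf(synth, target).item():.4f}")
```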
This list is automatically generated from the titles and abstracts of the papers on this site.