UnrealROX+: An Improved Tool for Acquiring Synthetic Data from Virtual
3D Environments
- URL: http://arxiv.org/abs/2104.11776v1
- Date: Fri, 23 Apr 2021 18:45:42 GMT
- Title: UnrealROX+: An Improved Tool for Acquiring Synthetic Data from Virtual
3D Environments
- Authors: Pablo Martinez-Gonzalez, Sergiu Oprea, John Alejandro Castro-Vargas,
Alberto Garcia-Garcia, Sergio Orts-Escolano, Jose Garcia-Rodriguez and Markus
Vincze
- Abstract summary: We present an improved version of UnrealROX, a tool to generate synthetic data for robotic vision from virtual 3D environments.
UnrealROX+ includes new features such as albedo generation and a Python API for interacting with the virtual environment from Deep Learning frameworks.
- Score: 14.453602631430508
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Synthetic data generation has become essential in recent years for feeding
data-driven algorithms, which have surpassed the performance of traditional
techniques in almost every computer vision problem. Gathering and labelling the
amount of data needed for these data-hungry models in the real world may become
unfeasible and error-prone, while synthetic data give us the possibility of
generating huge amounts of data with pixel-perfect annotations. However, most
synthetic datasets lack sufficient realism in their rendered images. In that
context, the UnrealROX generation tool was presented in 2019, allowing the
generation of highly realistic data, at high resolutions and framerates, with an
efficient pipeline based on Unreal Engine, a cutting-edge videogame engine.
UnrealROX enabled robotic vision researchers to generate realistic and visually
plausible data with full ground truth for a wide variety of problems such as
class and instance semantic segmentation, object detection, depth estimation,
visual grasping, and navigation. Nevertheless, its workflow was tightly coupled
to generating image sequences from a robotic on-board camera, making it hard to
generate data for other purposes. In this work, we present UnrealROX+, an
improved version of UnrealROX whose decoupled and easy-to-use data acquisition
system allows data to be designed and generated quickly, in a much more flexible
and customizable way. Moreover, it is packaged as an Unreal plug-in, which makes
it more comfortable to use with existing Unreal projects, and it also includes
new features such as albedo generation and a Python API for interacting with the
virtual environment from Deep Learning frameworks.
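The abstract mentions a Python API for driving the virtual environment from Deep Learning frameworks but does not detail it here. The following is purely a hypothetical mock (every class and method name is invented, not the real UnrealROX+ API) illustrating the request/response pattern such a client typically follows:

```python
# Hypothetical sketch of a Python client for a simulator like UnrealROX+.
# All names are illustrative; a real client would talk to the Unreal plug-in
# over a socket, while this mock fakes the replies end to end.
import json

class SimClient:
    """Toy stand-in for a client that requests frames from a virtual scene."""

    def __init__(self, width=4, height=3):
        self.width = width
        self.height = height

    def request(self, modality):
        # A real implementation would serialize this request and await a reply.
        msg = json.dumps({"type": "capture", "modality": modality})
        return self._fake_reply(json.loads(msg))

    def _fake_reply(self, msg):
        # Return a flat pixel buffer for the requested modality.
        n = self.width * self.height
        if msg["modality"] == "rgb":
            return [(128, 128, 128)] * n   # constant grey image
        if msg["modality"] == "depth":
            return [1.0] * n               # constant depth, in meters
        raise ValueError(f"unknown modality: {msg['modality']}")

client = SimClient()
rgb = client.request("rgb")
depth = client.request("depth")
print(len(rgb), len(depth))  # 12 12
```

The point of such a client is that a training loop can pull synchronized modalities (RGB, depth, masks) on demand instead of pre-rendering fixed sequences.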
Related papers
- Synthetica: Large Scale Synthetic Data for Robot Perception [21.415878105900187]
We present Synthetica, a method for large-scale synthetic data generation for training robust state estimators.
This paper focuses on the task of object detection, an important problem which can serve as the front-end for most state estimation problems.
We leverage data from a ray-tracing renderer to generate 2.7 million images for training highly accurate real-time detection transformers.
We demonstrate state-of-the-art performance on object detection with detectors that run at 50-100 Hz, 9 times faster than the prior SOTA.
arXiv Detail & Related papers (2024-10-28T15:50:56Z)
- VR-based generation of photorealistic synthetic data for training hand-object tracking models [0.0]
"blender-hoisynth" is an interactive synthetic data generator based on the Blender software.
It is possible for users to interact with objects via virtual hands using standard Virtual Reality hardware.
We replace large parts of the training data in the well-known DexYCB dataset with hoisynth data and train a state-of-the-art HOI reconstruction model with it.
arXiv Detail & Related papers (2024-01-31T14:32:56Z)
- View-Dependent Octree-based Mesh Extraction in Unbounded Scenes for Procedural Synthetic Data [71.22495169640239]
Procedural signed distance functions (SDFs) are a powerful tool for modeling large-scale detailed scenes.
We propose OcMesher, a mesh extraction algorithm that efficiently handles high-detail unbounded scenes with perfect view-consistency.
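The entry above builds on procedural signed distance functions. As a minimal illustration of the concept (a toy example, not OcMesher's algorithm), an SDF returns negative values inside a surface, positive values outside, and zero on it:

```python
# Minimal illustration of a signed distance function (SDF): negative inside
# the surface, positive outside, zero exactly on it. A sphere centered at
# the origin is the classic example; this toy is unrelated to OcMesher.
import math

def sphere_sdf(x, y, z, radius=1.0):
    # Euclidean distance to the origin, minus the sphere radius.
    return math.sqrt(x * x + y * y + z * z) - radius

inside = sphere_sdf(0.0, 0.0, 0.0)    # -1.0 (center of the sphere)
surface = sphere_sdf(1.0, 0.0, 0.0)   #  0.0 (on the surface)
outside = sphere_sdf(2.0, 0.0, 0.0)   #  1.0 (one unit outside)
print(inside, surface, outside)
```

Mesh extractors evaluate such functions on a grid and triangulate the zero level set; the difficulty OcMesher addresses is doing this consistently for unbounded, highly detailed scenes.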
arXiv Detail & Related papers (2023-12-13T18:56:13Z)
- MuSHRoom: Multi-Sensor Hybrid Room Dataset for Joint 3D Reconstruction and Novel View Synthesis [26.710960922302124]
We propose a real-world Multi-Sensor Hybrid Room dataset (MuSHRoom)
Our dataset presents exciting challenges and requires state-of-the-art methods to be cost-effective, robust to noisy data and devices.
We benchmark several famous pipelines on our dataset for joint 3D mesh reconstruction and novel view synthesis.
arXiv Detail & Related papers (2023-11-05T21:46:12Z)
- Learning from synthetic data generated with GRADE [0.6982738885923204]
We present a framework for generating realistic animated dynamic environments (GRADE) for robotics research.
GRADE supports full simulation control, ROS integration, realistic physics, while being in an engine that produces high visual fidelity images and ground truth data.
We show that models trained using only synthetic data can generalize well to real-world images in the same application domain.
arXiv Detail & Related papers (2023-05-07T14:13:04Z)
- Towards Real-World Video Deblurring by Exploring Blur Formation Process [53.91239555063343]
In recent years, deep learning-based approaches have achieved promising success on video deblurring task.
The models trained on existing synthetic datasets still suffer from generalization problems over real-world blurry scenarios.
We propose a novel realistic blur synthesis pipeline termed RAW-Blur by leveraging blur formation cues.
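The blur formation process the entry leverages can be sketched simply: a blurry frame is commonly modeled as the average of consecutive sharp frames captured during the exposure. (RAW-Blur itself operates in RAW space; this simplified toy averages plain intensity values.)

```python
# Toy illustration of blur formation: average N consecutive sharp frames
# to synthesize the motion blur accumulated over an exposure. Simplified
# to 1-D intensity rows; not the RAW-Blur pipeline itself.
def synthesize_blur(sharp_frames):
    n = len(sharp_frames)
    width = len(sharp_frames[0])
    return [sum(f[i] for f in sharp_frames) / n for i in range(width)]

# Three 1-D "frames" of a bright spot moving one pixel per frame.
frames = [
    [0, 255, 0, 0],
    [0, 0, 255, 0],
    [0, 0, 0, 255],
]
print(synthesize_blur(frames))  # [0.0, 85.0, 85.0, 85.0]
```

The moving spot smears into a streak of averaged intensity, which is exactly the generalization gap such pipelines try to model more faithfully than naive synthetic blur.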
arXiv Detail & Related papers (2022-08-28T09:24:52Z)
- TRoVE: Transforming Road Scene Datasets into Photorealistic Virtual Environments [84.6017003787244]
This work proposes a synthetic data generation pipeline to address the difficulties and domain-gaps present in simulated datasets.
We show that using annotations and visual cues from existing datasets, we can facilitate automated multi-modal data generation.
arXiv Detail & Related papers (2022-08-16T20:46:08Z)
- Hands-Up: Leveraging Synthetic Data for Hands-On-Wheel Detection [0.38233569758620045]
This work demonstrates the use of synthetic photo-realistic in-cabin data to train a Driver Monitoring System.
We show how performing error analysis and generating the missing edge-cases in our platform boosts performance.
This showcases the ability of human-centric synthetic data to generalize well to the real world.
arXiv Detail & Related papers (2022-05-31T23:34:12Z)
- Kubric: A scalable dataset generator [73.78485189435729]
Kubric is a Python framework that interfaces with PyBullet and Blender to generate photo-realistic scenes, with rich annotations, and seamlessly scales to large jobs distributed over thousands of machines.
We demonstrate the effectiveness of Kubric by presenting a series of 13 different generated datasets for tasks ranging from studying 3D NeRF models to optical flow estimation.
arXiv Detail & Related papers (2022-03-07T18:13:59Z)
- REGRAD: A Large-Scale Relational Grasp Dataset for Safe and Object-Specific Robotic Grasping in Clutter [52.117388513480435]
We present a new dataset named REGRAD to support the modeling of relationships among objects and grasps.
Our dataset is collected in both forms of 2D images and 3D point clouds.
Users are free to import their own object models to generate as much data as they want.
arXiv Detail & Related papers (2021-04-29T05:31:21Z)
- OpenRooms: An End-to-End Open Framework for Photorealistic Indoor Scene Datasets [103.54691385842314]
We propose a novel framework for creating large-scale photorealistic datasets of indoor scenes.
Our goal is to make the dataset creation process widely accessible.
This enables important applications in inverse rendering, scene understanding and robotics.
arXiv Detail & Related papers (2020-07-25T06:48:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.