VRKitchen2.0-IndoorKit: A Tutorial for Augmented Indoor Scene Building in Omniverse
- URL: http://arxiv.org/abs/2206.11887v1
- Date: Thu, 23 Jun 2022 17:53:33 GMT
- Title: VRKitchen2.0-IndoorKit: A Tutorial for Augmented Indoor Scene Building in Omniverse
- Authors: Yizhou Zhao, Steven Gong, Xiaofeng Gao, Wensi Ai, Song-Chun Zhu
- Abstract summary: INDOORKIT is a built-in toolkit for NVIDIA OMNIVERSE.
It provides flexible pipelines for indoor scene building, scene randomizing, and animation controls.
- Score: 77.52012928882928
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With recent progress in simulation through 3D modeling software and game engines, many researchers have focused on Embodied AI tasks in virtual environments. However, the research community lacks a platform that can easily serve both indoor scene synthesis and model benchmarking with various algorithms. Meanwhile, computer graphics tasks need a toolkit for implementing advanced synthesis techniques. To facilitate the study of indoor scene building methods and their potential robotics applications, we introduce INDOORKIT: a built-in toolkit for NVIDIA OMNIVERSE that provides flexible pipelines for indoor scene building, scene randomizing, and animation controls. In addition, by combining Python coding with the animation software, INDOORKIT assists researchers in real-time training and control of avatars and robots. The source code for this toolkit is available at https://github.com/realvcla/VRKitchen2.0-Tutorial, and the tutorial along with the toolkit is available at https://vrkitchen20-tutorial.readthedocs.io/en/
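As a rough illustration of the kind of Python-driven scene building and randomization the tutorial covers, the sketch below assembles a toy scene with the standard USD Python API (pxr), which Omniverse is built on. It is a minimal sketch only: it does not call INDOORKIT's own API (not shown in the abstract), and the stage path, asset names, and randomization scheme are hypothetical placeholders.

```python
# Minimal sketch of programmatic indoor scene building and randomization using
# the open USD Python API (pxr), which NVIDIA Omniverse builds on.
# Note: this does NOT use INDOORKIT's own API; the stage path, asset names,
# and randomization scheme are hypothetical placeholders.
import random
from pxr import Usd, UsdGeom, Gf

stage = Usd.Stage.CreateNew("indoor_scene.usda")
UsdGeom.Xform.Define(stage, "/World")

# Placeholder furniture items; a real pipeline would reference USD assets instead.
ASSETS = ["Table", "Chair", "Cabinet"]

for i, name in enumerate(ASSETS):
    cube = UsdGeom.Cube.Define(stage, f"/World/{name}_{i}")  # stand-in geometry
    # Simple scene randomization: jitter each object's position on the floor plane.
    x, y = random.uniform(-2.0, 2.0), random.uniform(-2.0, 2.0)
    UsdGeom.XformCommonAPI(cube.GetPrim()).SetTranslate(Gf.Vec3d(x, y, 0.0))

stage.GetRootLayer().Save()
```

In Omniverse, a stage like this would then be loaded into the live simulation; per the abstract, INDOORKIT's pipelines expose scene building, randomization, and animation control at a higher level than this raw USD sketch.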
Related papers
- ManiSkill3: GPU Parallelized Robotics Simulation and Rendering for Generalizable Embodied AI [27.00155119759743]
ManiSkill3 is the fastest state-visual GPU parallelized robotics simulator with contact-rich physics targeting generalizable manipulation.
ManiSkill3 supports GPU parallelization of many aspects, including simulation and rendering, heterogeneous simulation, point-cloud/voxel visual input, and more.
arXiv Detail & Related papers (2024-10-01T06:10:39Z)
- Helpful DoggyBot: Open-World Object Fetching using Legged Robots and Vision-Language Models [63.89598561397856]
We present a system for quadrupedal mobile manipulation in indoor environments.
It uses a front-mounted gripper for object manipulation and a low-level controller, trained in simulation with egocentric depth, for agile skills.
We evaluate our system in two unseen environments without any real-world data collection or training.
arXiv Detail & Related papers (2024-09-30T20:58:38Z)
- RoboCasa: Large-Scale Simulation of Everyday Tasks for Generalist Robots [25.650235551519952]
We present RoboCasa, a large-scale simulation framework for training generalist robots in everyday environments.
We provide thousands of 3D assets across over 150 object categories and dozens of interactable furniture and appliances.
Our experiments show a clear scaling trend in using synthetically generated robot data for large-scale imitation learning.
arXiv Detail & Related papers (2024-06-04T17:41:31Z)
- Spot-Compose: A Framework for Open-Vocabulary Object Retrieval and Drawer Manipulation in Point Clouds [45.87961177297602]
This work aims to integrate recent methods into a comprehensive framework for robotic interaction and manipulation in human-centric environments.
Specifically, we leverage 3D reconstructions from a commodity 3D scanner for open-vocabulary instance segmentation.
We show the performance and robustness of our model in two sets of real-world experiments including dynamic object retrieval and drawer opening.
arXiv Detail & Related papers (2024-04-18T18:01:15Z)
- Unsupervised Volumetric Animation [54.52012366520807]
We propose a novel approach for unsupervised 3D animation of non-rigid deformable objects.
Our method learns the 3D structure and dynamics of objects solely from single-view RGB videos.
We show our model can obtain animatable 3D objects from a single volume or a few images.
arXiv Detail & Related papers (2023-01-26T18:58:54Z)
- Evaluating Continual Learning Algorithms by Generating 3D Virtual Environments [66.83839051693695]
Continual learning refers to the ability of humans and animals to incrementally learn over time in a given environment.
We propose to leverage recent advances in 3D virtual environments in order to approach the automatic generation of potentially life-long dynamic scenes with photo-realistic appearance.
A novel element of this paper is that scenes are described in a parametric way, thus allowing the user to fully control the visual complexity of the input stream the agent perceives.
arXiv Detail & Related papers (2021-09-16T10:37:21Z)
- iGibson 2.0: Object-Centric Simulation for Robot Learning of Everyday Household Tasks [60.930678878024366]
We present iGibson 2.0, a simulation environment that supports the simulation of a more diverse set of household tasks.
First, iGibson 2.0 supports object states, including temperature, wetness level, cleanliness level, and toggled and sliced states.
Second, iGibson 2.0 implements a set of predicate logic functions that map the simulator states to logic states like Cooked or Soaked.
Third, iGibson 2.0 includes a virtual reality (VR) interface to immerse humans in its scenes to collect demonstrations.
arXiv Detail & Related papers (2021-08-06T18:41:39Z)
- Out of the Box: Embodied Navigation in the Real World [45.97756658635314]
We show how to transfer knowledge acquired in simulation into the real world.
We deploy our models on a LoCoBot equipped with a single Intel RealSense camera.
Our experiments indicate that it is possible to achieve satisfying results when deploying the obtained model in the real world.
arXiv Detail & Related papers (2021-05-12T18:00:14Z)
- myGym: Modular Toolkit for Visuomotor Robotic Tasks [0.0]
myGym is a novel virtual robotic toolkit developed for reinforcement learning (RL), intrinsic motivation and imitation learning tasks trained in a 3D simulator.
The modular structure of the simulator enables users to train and validate their algorithms on a large number of scenarios with various robots, environments and tasks.
The toolkit provides pretrained visual modules for visuomotor tasks allowing rapid prototyping, and, moreover, users can customize the visual submodules and retrain with their own set of objects.
arXiv Detail & Related papers (2020-12-21T19:15:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.