IGibson 2.0: Object-Centric Simulation for Robot Learning of Everyday
Household Tasks
- URL: http://arxiv.org/abs/2108.03272v2
- Date: Tue, 10 Aug 2021 04:06:57 GMT
- Title: IGibson 2.0: Object-Centric Simulation for Robot Learning of Everyday
Household Tasks
- Authors: Chengshu Li, Fei Xia, Roberto Martín-Martín, Michael Lingelbach,
Sanjana Srivastava, Bokui Shen, Kent Vainio, Cem Gokmen, Gokul Dharan, Tanish
Jain, Andrey Kurenkov, C. Karen Liu, Hyowon Gweon, Jiajun Wu, Li Fei-Fei,
Silvio Savarese
- Abstract summary: We present iGibson 2.0, a simulation environment that supports the simulation of a more diverse set of household tasks.
First, iGibson 2.0 supports object states, including temperature, wetness level, cleanliness level, and toggled and sliced states.
Second, iGibson 2.0 implements a set of predicate logic functions that map the simulator states to logic states like Cooked or Soaked.
Third, iGibson 2.0 includes a virtual reality (VR) interface to immerse humans in its scenes to collect demonstrations.
- Score: 60.930678878024366
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent research in embodied AI has been boosted by the use of simulation
environments to develop and train robot learning approaches. However, the use
of simulation has skewed the attention to tasks that only require what robotics
simulators can simulate: motion and physical contact. We present iGibson 2.0,
an open-source simulation environment that supports the simulation of a more
diverse set of household tasks through three key innovations. First, iGibson
2.0 supports object states, including temperature, wetness level, cleanliness
level, and toggled and sliced states, necessary to cover a wider range of
tasks. Second, iGibson 2.0 implements a set of predicate logic functions that
map the simulator states to logic states like Cooked or Soaked. Additionally,
given a logic state, iGibson 2.0 can sample valid physical states that satisfy
it. This functionality can generate potentially infinite instances of tasks
with minimal effort from the users. The sampling mechanism allows our scenes to
be more densely populated with small objects in semantically meaningful
locations. Third, iGibson 2.0 includes a virtual reality (VR) interface to
immerse humans in its scenes to collect demonstrations. As a result, we can
collect demonstrations from humans on these new types of tasks, and use them
for imitation learning. We evaluate the new capabilities of iGibson 2.0 to
enable robot learning of novel tasks, in the hope of demonstrating the
potential of this new simulator to support new research in embodied AI. iGibson
2.0 and its new dataset will be publicly available at
http://svl.stanford.edu/igibson/.
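The predicate-logic layer is the part of the abstract that most benefits from a concrete picture. The actual iGibson 2.0 API is not reproduced here, so what follows is only a minimal Python sketch, with hypothetical names and thresholds, of the two directions the abstract describes: checking a logic state such as Cooked or Soaked from an object's extended physical state, and sampling a physical state that satisfies a requested logic state so new task instances can be generated on demand.

# Minimal sketch (not the actual iGibson 2.0 API) of mapping extended object
# state to logic predicates and of sampling physical states from a requested
# logic state. All names and thresholds below are hypothetical.
import random
from dataclasses import dataclass

@dataclass
class ObjectState:
    temperature: float   # degrees Celsius
    wetness: float       # 0.0 (dry) .. 1.0 (saturated)
    cleanliness: float   # 0.0 (dirty) .. 1.0 (clean)
    toggled_on: bool
    sliced: bool

# Hypothetical per-category parameters; the simulator annotates such values per object type.
COOK_TEMPERATURE = 70.0
SOAK_WETNESS = 0.5

def is_cooked(state: ObjectState) -> bool:
    """Logic predicate: physical temperature -> Cooked."""
    return state.temperature >= COOK_TEMPERATURE

def is_soaked(state: ObjectState) -> bool:
    """Logic predicate: wetness level -> Soaked."""
    return state.wetness >= SOAK_WETNESS

def sample_state(cooked: bool, soaked: bool, rng: random.Random) -> ObjectState:
    """Reverse direction: draw a physical state consistent with the requested
    logic state, which is what lets task instances be generated with minimal
    user effort."""
    temperature = (rng.uniform(COOK_TEMPERATURE, 120.0) if cooked
                   else rng.uniform(15.0, COOK_TEMPERATURE - 1.0))
    wetness = (rng.uniform(SOAK_WETNESS, 1.0) if soaked
               else rng.uniform(0.0, SOAK_WETNESS - 0.01))
    return ObjectState(temperature=temperature, wetness=wetness,
                       cleanliness=rng.random(), toggled_on=False, sliced=False)

if __name__ == "__main__":
    rng = random.Random(0)
    s = sample_state(cooked=True, soaked=False, rng=rng)
    assert is_cooked(s) and not is_soaked(s)
    print(s)

In the simulator itself, such checks and samplers additionally respect scene geometry and object placement (e.g., where a soaked towel can physically rest); the sketch only illustrates the forward mapping and the reverse sampling in isolation.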
Related papers
- GRUtopia: Dream General Robots in a City at Scale [65.08318324604116]
This paper introduces project GRUtopia, the first simulated interactive 3D society designed for various robots.
GRScenes includes 100k interactive, finely annotated scenes, which can be freely combined into city-scale environments.
GRResidents is a Large Language Model (LLM)-driven Non-Player Character (NPC) system that is responsible for social interaction.
arXiv Detail & Related papers (2024-07-15T17:40:46Z)
- BEHAVIOR-1K: A Human-Centered, Embodied AI Benchmark with 1,000 Everyday Activities and Realistic Simulation [63.42591251500825]
We present BEHAVIOR-1K, a comprehensive simulation benchmark for human-centered robotics, built from two components.
The first is the definition of 1,000 everyday activities grounded in 50 scenes with more than 9,000 objects annotated with rich physical and semantic properties.
The second is OMNIGIBSON, a novel simulation environment that supports these activities via realistic physics simulation and rendering of rigid bodies, deformable bodies, and liquids.
arXiv Detail & Related papers (2024-03-14T09:48:36Z)
- GenH2R: Learning Generalizable Human-to-Robot Handover via Scalable Simulation, Demonstration, and Imitation [31.702907860448477]
GenH2R is a framework for learning generalizable vision-based human-to-robot (H2R) handover skills.
We acquire such generalizability by learning H2R handover at scale with a comprehensive solution.
We leverage large-scale 3D model repositories, dexterous grasp generation methods, and curve-based 3D animation.
arXiv Detail & Related papers (2024-01-01T18:20:43Z)
- Gen2Sim: Scaling up Robot Learning in Simulation with Generative Models [17.757495961816783]
Gen2Sim is a method for scaling up robot skill learning in simulation by automating generation of 3D assets, task descriptions, task decompositions and reward functions.
Our work contributes hundreds of simulated assets, tasks and demonstrations, taking a step towards fully autonomous robotic manipulation skill acquisition in simulation.
arXiv Detail & Related papers (2023-10-27T17:55:32Z)
- HomeRobot: Open-Vocabulary Mobile Manipulation [107.05702777141178]
Open-Vocabulary Mobile Manipulation (OVMM) is the problem of picking any object in any unseen environment, and placing it in a commanded location.
HomeRobot has two components: a simulation component, which uses a large and diverse curated object set in new, high-quality multi-room home environments; and a real-world component, providing a software stack for the low-cost Hello Robot Stretch.
arXiv Detail & Related papers (2023-06-20T14:30:32Z)
- URoboSim -- An Episodic Simulation Framework for Prospective Reasoning in Robotic Agents [18.869243389210492]
URoboSim is a robot simulator that allows robots to perform tasks as mental simulations before executing them in reality.
We show the capabilities of URoboSim in the form of mental simulations, generating data for machine learning, and serving as a belief state for a real robot.
arXiv Detail & Related papers (2020-12-08T14:23:24Z)
- iGibson, a Simulation Environment for Interactive Tasks in Large Realistic Scenes [54.04456391489063]
iGibson is a novel simulation environment to develop robotic solutions for interactive tasks in large-scale realistic scenes.
Our environment contains fifteen fully interactive home-sized scenes populated with rigid and articulated objects.
iGibson's features enable the generalization of navigation agents, and the human-iGibson interface and integrated motion planners facilitate efficient imitation learning of simple human-demonstrated behaviors.
arXiv Detail & Related papers (2020-12-05T02:14:17Z)
- SAPIEN: A SimulAted Part-based Interactive ENvironment [77.4739790629284]
SAPIEN is a realistic and physics-rich simulated environment that hosts a large-scale set of articulated objects.
We evaluate state-of-the-art vision algorithms for part detection and motion attribute recognition as well as demonstrate robotic interaction tasks.
arXiv Detail & Related papers (2020-03-19T00:11:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.