SAPIEN: A SimulAted Part-based Interactive ENvironment
- URL: http://arxiv.org/abs/2003.08515v1
- Date: Thu, 19 Mar 2020 00:11:34 GMT
- Title: SAPIEN: A SimulAted Part-based Interactive ENvironment
- Authors: Fanbo Xiang, Yuzhe Qin, Kaichun Mo, Yikuan Xia, Hao Zhu, Fangchen Liu,
Minghua Liu, Hanxiao Jiang, Yifu Yuan, He Wang, Li Yi, Angel X. Chang,
Leonidas J. Guibas, Hao Su
- Abstract summary: SAPIEN is a realistic and physics-rich simulated environment that hosts a large-scale set of articulated objects.
We evaluate state-of-the-art vision algorithms for part detection and motion attribute recognition as well as demonstrate robotic interaction tasks.
- Score: 77.4739790629284
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Building home assistant robots has long been a pursuit for vision and
robotics researchers. To achieve this task, a simulated environment with
physically realistic simulation, sufficient articulated objects, and
transferability to the real robot is indispensable. Existing environments
achieve these requirements for robotics simulation with different levels of
simplification and focus. We take one step further in constructing an
environment that supports household tasks for training robot learning
algorithms. Our work, SAPIEN, is a realistic and physics-rich simulated
environment that hosts a large-scale set of articulated objects. SAPIEN
enables various robotic vision and interaction tasks that require detailed
part-level understanding. We evaluate state-of-the-art vision algorithms for
part detection and motion attribute recognition, and demonstrate robotic
interaction tasks using heuristic approaches and reinforcement learning
algorithms. We hope that SAPIEN will open up many research directions yet to
be explored, including learning cognition through interaction, part motion
discovery, and the construction of robotics-ready simulated game environments.
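To make the part-level interaction concrete, here is a minimal sketch of loading an articulated object and stepping the physics simulation through SAPIEN's Python bindings. This is an illustration, not code from the paper: the API names follow the SAPIEN 2.x documentation as we recall it and may differ across releases, and the URDF path is a placeholder.

```python
# Minimal sketch: load an articulated object into a SAPIEN scene and step it.
# API names assume SAPIEN 2.x and may differ in other releases.
import sapien.core as sapien

engine = sapien.Engine()                # PhysX-backed physics engine
scene = engine.create_scene()
scene.set_timestep(1 / 240.0)           # physics step size in seconds
scene.add_ground(altitude=0)

loader = scene.create_urdf_loader()
loader.fix_root_link = True             # keep the object's base static
cabinet = loader.load("mobility.urdf")  # placeholder path to a PartNet-Mobility asset

# Articulated objects expose their part-level joints directly.
for joint in cabinet.get_active_joints():
    print(joint.get_name(), joint.get_limits())

cabinet.set_qpos([0.0] * cabinet.dof)   # all joints at zero, e.g. doors closed
for _ in range(240):
    scene.step()                        # advance the simulation one timestep
```

PartNet-Mobility assets are distributed as URDF files, which is why a URDF loader is the natural entry point for part-level scenes like the one above.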
Related papers
- Habitat 3.0: A Co-Habitat for Humans, Avatars and Robots [119.55240471433302]
Habitat 3.0 is a simulation platform for studying collaborative human-robot tasks in home environments.
It addresses challenges in modeling complex deformable bodies and diversity in appearance and motion.
Human-in-the-loop infrastructure enables real human interaction with simulated robots via mouse/keyboard or a VR interface.
arXiv Detail & Related papers (2023-10-19T17:29:17Z)
- Learning Hierarchical Interactive Multi-Object Search for Mobile Manipulation [10.21450780640562]
We introduce a novel interactive multi-object search task in which a robot has to open doors to navigate rooms and search inside cabinets and drawers to find target objects.
These new challenges require combining manipulation and navigation skills in unexplored environments.
We present HIMOS, a hierarchical reinforcement learning approach that learns to compose exploration, navigation, and manipulation skills; a schematic of this skill composition follows this entry.
arXiv Detail & Related papers (2023-07-12T12:25:33Z)
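The sketch below shows a high-level controller dispatching frozen low-level skills until the target is found. This is our own illustration, not the HIMOS implementation; the environment interface, skill set, and termination checks are hypothetical placeholders.

```python
# Schematic of hierarchical skill composition (not the HIMOS code).
# The env API, skill set, and termination checks are hypothetical.
import random

class Skill:
    """A frozen low-level skill, e.g. navigation or opening a cabinet."""
    def __init__(self, name):
        self.name = name

    def act(self, obs):
        return {"skill": self.name}   # placeholder low-level command

    def done(self, obs):
        return random.random() < 0.2  # placeholder termination predicate

SKILLS = [Skill("explore"), Skill("navigate"), Skill("open_and_search")]

def select_skill(obs):
    # In HIMOS this decision is made by a learned high-level RL policy;
    # a random choice here just exposes the control flow.
    return random.choice(SKILLS)

def run_episode(env, max_steps=500):
    obs = env.reset()
    for _ in range(max_steps):
        skill = select_skill(obs)     # high level: pick a skill
        while not skill.done(obs):    # low level: run it to termination
            obs, reward, found, info = env.step(skill.act(obs))
            if found:                 # target object located
                return info
    return None
```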
- HomeRobot: Open-Vocabulary Mobile Manipulation [107.05702777141178]
Open-Vocabulary Mobile Manipulation (OVMM) is the problem of picking any object in any unseen environment, and placing it in a commanded location.
HomeRobot has two components: a simulation component, which uses a large and diverse curated object set in new, high-quality multi-room home environments; and a real-world component, providing a software stack for the low-cost Hello Robot Stretch.
arXiv Detail & Related papers (2023-06-20T14:30:32Z)
- Learning Human-to-Robot Handovers from Point Clouds [63.18127198174958]
We propose the first framework to learn control policies for vision-based human-to-robot handovers.
We show significant performance gains over baselines on a simulation benchmark, sim-to-sim transfer and sim-to-real transfer.
arXiv Detail & Related papers (2023-03-30T17:58:36Z)
- SAGCI-System: Towards Sample-Efficient, Generalizable, Compositional, and Incremental Robot Learning [41.19148076789516]
We introduce a systematic learning framework called SAGCI-system towards achieving these four requirements.
Our system first takes as input the raw point clouds gathered by the camera mounted on the robot's wrist and produces an initial model of the surrounding environment, represented as a URDF.
The robot then uses interactive perception to verify and modify the URDF online (a sketch of this loop follows this entry).
arXiv Detail & Related papers (2021-11-29T16:53:49Z)
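The verify-and-modify loop sketched below is our paraphrase of the abstract above, not the SAGCI-system code: every helper function and the mismatch threshold are hypothetical stubs standing in for real perception modules.

```python
# Schematic of SAGCI-style interactive model verification (not the authors' code).
# All helpers are hypothetical stubs standing in for real perception modules.
TOLERANCE = 0.05  # hypothetical mismatch threshold

def estimate_urdf(cloud):
    return {"joints": {"door": {"axis": (0.0, 0.0, 1.0)}}}  # stub initial model

def predicted_cloud(urdf, joint):
    return []    # stub: geometry the current model predicts after moving `joint`

def mismatch(observed, predicted):
    return 0.0   # stub: e.g. a Chamfer distance between point clouds

def refit_joint(urdf, joint, observed):
    return urdf  # stub: re-estimate the joint's parameters from the observation

def model_environment(robot, n_rounds=3):
    urdf = estimate_urdf(robot.capture_pointcloud())  # initial model from raw points
    for _ in range(n_rounds):
        for joint in urdf["joints"]:
            robot.nudge(joint)                        # interactive perception
            observed = robot.capture_pointcloud()
            if mismatch(observed, predicted_cloud(urdf, joint)) > TOLERANCE:
                urdf = refit_joint(urdf, joint, observed)  # online modification
    return urdf
```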
- Dual-Arm Adversarial Robot Learning [0.6091702876917281]
We propose dual-arm settings as platforms for robot learning.
We will discuss the potential benefits of this setup as well as the challenges and research directions that can be pursued.
arXiv Detail & Related papers (2021-10-15T12:51:57Z)
- Low Dimensional State Representation Learning with Robotics Priors in Continuous Action Spaces [8.692025477306212]
Reinforcement learning algorithms have proven to be capable of solving complicated robotics tasks in an end-to-end fashion.
We propose a framework that combines learning a low-dimensional state representation from high-dimensional observations (the robot's raw sensory readings) with learning the optimal policy; a minimal sketch of this joint setup follows this entry.
arXiv Detail & Related papers (2021-07-04T15:42:01Z)
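The joint setup referenced above can be sketched in a few lines of PyTorch: a small encoder maps raw observations to a low-dimensional state, and a policy head is trained on top of it in the same update. This is our illustration with made-up dimensions and a placeholder loss; the paper's actual objective additionally shapes the latent space with robotics priors.

```python
# Schematic of joint representation + policy learning (illustrative, not the paper's code).
import torch
import torch.nn as nn

OBS_DIM, LATENT_DIM, ACT_DIM = 512, 16, 4  # illustrative sizes

encoder = nn.Sequential(                   # high-dim observation -> low-dim state
    nn.Linear(OBS_DIM, 128), nn.ReLU(), nn.Linear(128, LATENT_DIM))
policy = nn.Sequential(                    # low-dim state -> continuous action
    nn.Linear(LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, ACT_DIM), nn.Tanh())
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(policy.parameters()), lr=3e-4)

obs = torch.randn(32, OBS_DIM)             # stand-in for raw sensory readings
action = policy(encoder(obs))              # actions in [-1, 1]

loss = action.pow(2).mean()                # placeholder for the true RL objective
optimizer.zero_grad()
loss.backward()
optimizer.step()                           # one joint update of encoder and policy
```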
- Cognitive architecture aided by working-memory for self-supervised multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have proven to be suitable tools to address such a task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z)
- URoboSim -- An Episodic Simulation Framework for Prospective Reasoning in Robotic Agents [18.869243389210492]
URoboSim is a robot simulator that allows robots to perform tasks as mental simulations before performing them in reality; a sketch of this simulate-then-act pattern follows this entry.
We show the capabilities of URoboSim in the form of mental simulations, data generation for machine learning, and use as a belief state for a real robot.
arXiv Detail & Related papers (2020-12-08T14:23:24Z)
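The simulate-then-act pattern referenced above can be sketched as follows. This is our illustration of the idea, not URoboSim's API; the robot, simulator, and plan objects and the confidence threshold are all hypothetical.

```python
# Schematic of "mental simulation before acting" (not the URoboSim API).
# robot, simulator, and candidate_plans are hypothetical objects.
def act_with_mental_simulation(robot, simulator, candidate_plans):
    best_plan, best_score = None, float("-inf")
    for plan in candidate_plans:
        simulator.reset_to(robot.belief_state())    # episodic copy of current belief
        outcome = simulator.rollout(plan)           # execute the plan mentally
        if outcome.success_probability > best_score:
            best_plan, best_score = plan, outcome.success_probability
    if best_plan is not None and best_score > 0.5:  # hypothetical confidence gate
        robot.execute(best_plan)                    # only now act in reality
    return best_plan
```

The mental rollouts double as training data for machine learning, and the simulator state itself can serve as the robot's belief state, matching the uses listed in the abstract.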
- The Chef's Hat Simulation Environment for Reinforcement-Learning-Based Agents [54.63186041942257]
We propose a virtual simulation environment that implements the Chef's Hat card game, designed to be used in Human-Robot Interaction scenarios.
This paper provides a controllable and reproducible scenario for reinforcement-learning algorithms; a minimal interaction-loop sketch follows this entry.
arXiv Detail & Related papers (2020-03-12T15:52:49Z)
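As referenced above, such an environment supports a standard, reproducible agent-environment loop; a minimal sketch follows. The Gym-style interface and the legal_actions helper are assumptions for illustration, not the paper's exact API.

```python
# Schematic RL evaluation loop for a card-game environment.
# The env API shown (reset/step/legal_actions) is assumed, not the paper's.
import random

def evaluate_random_baseline(env, episodes=10, seed=0):
    random.seed(seed)                  # fixed seed for reproducibility
    returns = []
    for _ in range(episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            action = random.choice(env.legal_actions(obs))  # hypothetical helper
            obs, reward, done, info = env.step(action)
            total += reward
        returns.append(total)
    return sum(returns) / len(returns)
```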
This list is automatically generated from the titles and abstracts of the papers in this site.