SPACE: A Simulator for Physical Interactions and Causal Learning in 3D
Environments
- URL: http://arxiv.org/abs/2108.06180v1
- Date: Fri, 13 Aug 2021 11:49:46 GMT
- Title: SPACE: A Simulator for Physical Interactions and Causal Learning in 3D
Environments
- Authors: Jiafei Duan, Samson Yu Bai Jian, Cheston Tan
- Abstract summary: We introduce SPACE: A Simulator for Physical Interactions and Causal Learning in 3D Environments.
Inspired by daily object interactions, the SPACE dataset comprises videos depicting three types of physical events: containment, stability and contact.
We show that the SPACE dataset improves the learning of intuitive physics with an approach inspired by curriculum learning.
- Score: 2.105564340986074
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advancements in deep learning, computer vision, and embodied AI have
given rise to synthetic causal reasoning video datasets. These datasets
facilitate the development of AI algorithms that can reason about physical
interactions between objects. However, datasets thus far have primarily focused
on elementary physical events such as rolling or falling. There is currently a
scarcity of datasets that focus on the physical interactions that humans
perform daily with objects in the real world. To address this scarcity, we
introduce SPACE: A Simulator for Physical Interactions and Causal Learning in
3D Environments. The SPACE simulator allows us to generate the SPACE dataset, a
synthetic video dataset in a 3D environment, to systematically evaluate
physics-based models on a range of physical causal reasoning tasks. Inspired by
daily object interactions, the SPACE dataset comprises videos depicting three
types of physical events: containment, stability and contact. These events make
up the vast majority of the basic physical interactions between objects. We
then evaluate the dataset with a state-of-the-art physics-based deep model and
show that the SPACE dataset improves the learning of intuitive physics with an
approach inspired by curriculum learning. Repository:
https://github.com/jiafei1224/SPACE
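The curriculum-inspired training mentioned in the abstract can be illustrated with a short sketch: train in stages, one per event type, reusing the same model weights across stages. The stage ordering, the toy classifier, and the synthetic clips below are illustrative assumptions, not the authors' released code (see the repository above for that):

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for SPACE video clips: random tensors keep the sketch runnable.
# Real stages would load rendered videos of each event type from the dataset.
def make_stage(n=32, frames=8, label=0):
    clips = torch.randn(n, frames, 3, 64, 64)           # (clip, frame, C, H, W)
    labels = torch.full((n,), label, dtype=torch.long)  # event-type label
    return TensorDataset(clips, labels)

# Curriculum: present event types in order of assumed difficulty. This
# particular ordering is a guess; the abstract only says the approach is
# inspired by curriculum learning.
stages = [("contact", make_stage(label=0)),
          ("stability", make_stage(label=1)),
          ("containment", make_stage(label=2))]

model = nn.Sequential(nn.Flatten(), nn.Linear(8 * 3 * 64 * 64, 3))  # toy model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for name, dataset in stages:
    for clips, labels in DataLoader(dataset, batch_size=8, shuffle=True):
        optimizer.zero_grad()
        loss = loss_fn(model(clips), labels)
        loss.backward()
        optimizer.step()
    print(f"finished curriculum stage: {name}")
```

The key design point is that the data-loading order, not the model, encodes the curriculum: each stage fine-tunes the same weights on the next event type.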
Related papers
- Physics3D: Learning Physical Properties of 3D Gaussians via Video Diffusion [35.71595369663293]
We propose Physics3D, a novel method for learning various physical properties of 3D objects through a video diffusion model.
Our approach involves designing a highly generalizable physical simulation system based on a viscoelastic material model.
Experiments demonstrate the effectiveness of our method with both elastic and plastic materials.
arXiv Detail & Related papers (2024-06-06T17:59:47Z)
- PhysDreamer: Physics-Based Interaction with 3D Objects via Video Generation [62.53760963292465]
PhysDreamer is a physics-based approach that endows static 3D objects with interactive dynamics.
We present our approach on diverse examples of elastic objects and evaluate the realism of the synthesized interactions through a user study.
arXiv Detail & Related papers (2024-04-19T17:41:05Z)
- Full-Body Articulated Human-Object Interaction [61.01135739641217]
CHAIRS is a large-scale motion-captured f-AHOI dataset consisting of 16.2 hours of versatile interactions.
CHAIRS provides 3D meshes of both humans and articulated objects during the entire interactive process.
By learning the geometrical relationships in HOI, we devise the first model that leverages human pose estimation.
arXiv Detail & Related papers (2022-12-20T19:50:54Z)
- BEHAVE: Dataset and Method for Tracking Human Object Interactions [105.77368488612704]
We present the first full-body human-object interaction dataset with multi-view RGBD frames and corresponding 3D SMPL and object fits, along with the annotated contacts between them.
We use this data to learn a model that can jointly track humans and objects in natural environments with an easy-to-use portable multi-camera setup.
arXiv Detail & Related papers (2022-04-14T13:21:19Z)
- HSPACE: Synthetic Parametric Humans Animated in Complex Environments [67.8628917474705]
We build a large-scale photo-realistic dataset, Human-SPACE, of animated humans placed in complex indoor and outdoor environments.
We combine a hundred diverse individuals of varying ages, genders, proportions, and ethnicities with hundreds of motions and scenes to generate an initial dataset of over 1 million frames.
Assets are generated automatically, at scale, and are compatible with existing real time rendering and game engines.
arXiv Detail & Related papers (2021-12-23T22:27:55Z)
- Hindsight for Foresight: Unsupervised Structured Dynamics Models from Physical Interaction [24.72947291987545]
A key challenge for an agent learning to interact with the world is to reason about the physical properties of objects.
We propose a novel approach for modeling the dynamics of a robot's interactions directly from unlabeled 3D point clouds and images.
arXiv Detail & Related papers (2020-08-02T11:04:49Z)
- ThreeDWorld: A Platform for Interactive Multi-Modal Physical Simulation [75.0278287071591]
ThreeDWorld (TDW) is a platform for interactive multi-modal physical simulation.
TDW enables simulation of high-fidelity sensory data and physical interactions between mobile agents and objects in rich 3D environments.
We present initial experiments enabled by TDW in emerging research directions in computer vision, machine learning, and cognitive science.
arXiv Detail & Related papers (2020-07-09T17:33:27Z)
- Occlusion resistant learning of intuitive physics from videos [52.25308231683798]
A key ability for artificial systems is to understand physical interactions between objects and predict the future outcomes of a situation.
This ability, often referred to as intuitive physics, has recently received attention, and several methods have been proposed to learn these physical rules from video sequences.
arXiv Detail & Related papers (2020-04-30T19:35:54Z)