Predicting the Physical Dynamics of Unseen 3D Objects
- URL: http://arxiv.org/abs/2001.06291v1
- Date: Thu, 16 Jan 2020 06:27:59 GMT
- Title: Predicting the Physical Dynamics of Unseen 3D Objects
- Authors: Davis Rempe, Srinath Sridhar, He Wang, Leonidas J. Guibas
- Abstract summary: We focus on predicting the dynamics of 3D objects on a plane that have just been subjected to an impulsive force.
Our approach can generalize to object shapes and initial conditions that were unseen during training.
Our model supports training with data from either a physics engine or the real world.
- Score: 65.49291702488436
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machines that can predict the effect of physical interactions on the dynamics
of previously unseen object instances are important for creating better robots
and interactive virtual worlds. In this work, we focus on predicting the
dynamics of 3D objects on a plane that have just been subjected to an impulsive
force. In particular, we predict the changes in state - 3D position, rotation,
velocities, and stability. Different from previous work, our approach can
generalize dynamics predictions to object shapes and initial conditions that
were unseen during training. Our method takes the 3D object's shape as a point
cloud and its initial linear and angular velocities as input. We extract shape
features and use a recurrent neural network to predict the full change in state
at each time step. Our model can support training with data from either a physics
engine or the real world. Experiments show that we can accurately predict the
changes in state for unseen object geometries and initial conditions.
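To make the described pipeline concrete, the following is a minimal sketch of a point-cloud-to-state-change predictor in PyTorch. The abstract only commits to extracting shape features from a point cloud and using a recurrent network to predict the full change in state at each time step; the PointNet-style encoder, GRU core, layer sizes, and 14-dimensional state-delta layout below are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch, not the authors' implementation: encoder type (PointNet-style),
# GRU core, layer sizes, and the 14-D state-delta layout are assumptions.
import torch
import torch.nn as nn

class DynamicsPredictor(nn.Module):
    def __init__(self, shape_dim=128, hidden_dim=256):
        super().__init__()
        # Per-point MLP + max pooling over points (assumed PointNet-style shape encoder).
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, shape_dim)
        )
        # Recurrent core over time steps; input = shape feature + 6-D initial velocities.
        self.rnn = nn.GRU(input_size=shape_dim + 6, hidden_size=hidden_dim, batch_first=True)
        # Per-step change in state: 3 position + 4 rotation (quaternion) +
        # 3 linear velocity + 3 angular velocity + 1 stability logit = 14 values.
        self.head = nn.Linear(hidden_dim, 14)

    def forward(self, points, init_vel, num_steps):
        # points: (B, N, 3) object point cloud; init_vel: (B, 6) linear + angular velocity.
        shape_feat = self.point_mlp(points).max(dim=1).values   # (B, shape_dim)
        step_input = torch.cat([shape_feat, init_vel], dim=-1)  # (B, shape_dim + 6)
        seq = step_input.unsqueeze(1).repeat(1, num_steps, 1)   # repeat over time
        hidden, _ = self.rnn(seq)                                # (B, T, hidden_dim)
        return self.head(hidden)                                 # (B, T, 14) state deltas

model = DynamicsPredictor()
deltas = model(torch.randn(2, 1024, 3), torch.randn(2, 6), num_steps=30)
print(deltas.shape)  # torch.Size([2, 30, 14])
```

Rolling out a trajectory from such a model would then amount to accumulating the predicted position and velocity deltas and composing each predicted rotation delta with the current orientation at every step.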
Related papers
- Latent Intuitive Physics: Learning to Transfer Hidden Physics from A 3D Video [58.043569985784806]
We introduce latent intuitive physics, a transfer learning framework for physics simulation.
It can infer hidden properties of fluids from a single 3D video and simulate the observed fluid in novel scenes.
We validate our model in three ways: (i) novel scene simulation with the learned visual-world physics, (ii) future prediction of the observed fluid dynamics, and (iii) supervised particle simulation.
arXiv Detail & Related papers (2024-06-18T16:37:44Z)
- Physics3D: Learning Physical Properties of 3D Gaussians via Video Diffusion [35.71595369663293]
We propose Physics3D, a novel method for learning various physical properties of 3D objects through a video diffusion model.
Our approach involves designing a highly generalizable physical simulation system based on a viscoelastic material model.
Experiments demonstrate the effectiveness of our method with both elastic and plastic materials.
arXiv Detail & Related papers (2024-06-06T17:59:47Z) - PhysDreamer: Physics-Based Interaction with 3D Objects via Video Generation [62.53760963292465]
PhysDreamer is a physics-based approach that endows static 3D objects with interactive dynamics.
We present our approach on diverse examples of elastic objects and evaluate the realism of the synthesized interactions through a user study.
arXiv Detail & Related papers (2024-04-19T17:41:05Z) - Physics-Encoded Graph Neural Networks for Deformation Prediction under
Contact [87.69278096528156]
In robotics, it's crucial to understand object deformation during tactile interactions.
We introduce a method using Physics-Encoded Graph Neural Networks (GNNs) for such predictions.
We've made our code and dataset public to advance research in robotic simulation and grasping.
arXiv Detail & Related papers (2024-02-05T19:21:52Z) - 3D-IntPhys: Towards More Generalized 3D-grounded Visual Intuitive
Physics under Challenging Scenes [68.66237114509264]
We present a framework capable of learning 3D-grounded visual intuitive physics models from videos of complex scenes with fluids.
We show our model can make long-horizon future predictions by learning from raw images and significantly outperforms models that do not employ an explicit 3D representation space.
arXiv Detail & Related papers (2023-04-22T19:28:49Z) - Towards an Interpretable Latent Space in Structured Models for Video
Prediction [30.080907495461876]
We focus on the task of future frame prediction in video governed by underlying physical dynamics.
We work with object-centric models, i.e., models that explicitly operate on object representations, and propagate a loss in the latent space.
arXiv Detail & Related papers (2021-07-16T05:37:16Z) - 3D-OES: Viewpoint-Invariant Object-Factorized Environment Simulators [24.181604511269096]
We propose an action-conditioned dynamics model that predicts scene changes caused by object and agent interactions in a viewpoint-invariant 3D neural scene representation space.
In this space, objects do not interfere with one another and their appearance persists over time and across viewpoints.
We show that our model generalizes its predictions well across varying numbers and appearances of interacting objects, as well as across camera viewpoints.
arXiv Detail & Related papers (2020-11-12T16:15:52Z) - Hindsight for Foresight: Unsupervised Structured Dynamics Models from
Physical Interaction [24.72947291987545]
A key challenge for an agent learning to interact with the world is reasoning about the physical properties of objects.
We propose a novel approach for modeling the dynamics of a robot's interactions directly from unlabeled 3D point clouds and images.
arXiv Detail & Related papers (2020-08-02T11:04:49Z) - Visual Grounding of Learned Physical Models [66.04898704928517]
Humans intuitively recognize objects' physical properties and predict their motion, even when the objects are engaged in complicated interactions.
We present a neural model that simultaneously reasons about physics and makes future predictions based on visual and dynamics priors.
Experiments show that our model can infer the physical properties within a few observations, which allows the model to quickly adapt to unseen scenarios and make accurate predictions into the future.
arXiv Detail & Related papers (2020-04-28T17:06:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.