Neural Material Adaptor for Visual Grounding of Intrinsic Dynamics
- URL: http://arxiv.org/abs/2410.08257v1
- Date: Thu, 10 Oct 2024 17:43:36 GMT
- Title: Neural Material Adaptor for Visual Grounding of Intrinsic Dynamics
- Authors: Junyi Cao, Shanyan Guan, Yanhao Ge, Wei Li, Xiaokang Yang, Chao Ma
- Abstract summary: We propose the Neural Material Adaptor (NeuMA), which integrates existing physical laws with learned corrections.
We also propose Particle-GS, a particle-driven 3D Gaussian Splatting variant that bridges simulation and observed images.
- Score: 48.99021224773799
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While humans effortlessly discern intrinsic dynamics and adapt to new scenarios, modern AI systems often struggle. Current methods for visual grounding of dynamics either use pure neural-network-based simulators (black box), which may violate physical laws, or traditional physical simulators (white box), which rely on expert-defined equations that may not fully capture actual dynamics. We propose the Neural Material Adaptor (NeuMA), which integrates existing physical laws with learned corrections, facilitating accurate learning of actual dynamics while maintaining the generalizability and interpretability of physical priors. Additionally, we propose Particle-GS, a particle-driven 3D Gaussian Splatting variant that bridges simulation and observed images, allowing image gradients to be back-propagated to optimize the simulator. Comprehensive experiments on various dynamics, evaluated in terms of grounded particle accuracy, dynamic rendering quality, and generalization ability, demonstrate that NeuMA can accurately capture intrinsic dynamics.
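To make the grey-box idea above concrete, here is a minimal, self-contained sketch, not the authors' implementation: a fixed physical prior (a toy linear-elastic force) supplies the baseline dynamics, a small network adds a learned correction, and gradients from an image-space loss flow back through a differentiable particle-splatting stand-in for Particle-GS. All class and function names, the toy physics, and the optimization setup are illustrative assumptions.

```python
# Hypothetical sketch of a "physical prior + learned correction" simulator
# optimized through a differentiable renderer. Not the NeuMA codebase.
import torch
import torch.nn as nn


class NeuralMaterialAdaptor(nn.Module):
    """Expert prior (linear elasticity, white box) plus a learned residual (black box)."""

    def __init__(self, stiffness: float = 5.0, hidden: int = 32):
        super().__init__()
        self.stiffness = stiffness
        self.correction = nn.Sequential(
            nn.Linear(2, hidden), nn.SiLU(), nn.Linear(hidden, 2)
        )

    def forward(self, disp: torch.Tensor) -> torch.Tensor:
        prior = -self.stiffness * disp              # expert-defined physical law
        return prior + 0.1 * self.correction(disp)  # learned correction term


def splat(pos: torch.Tensor, res: int = 16) -> torch.Tensor:
    """Differentiable stand-in for a particle-driven renderer such as Particle-GS."""
    xs = torch.linspace(0.0, 1.0, res)
    grid = torch.stack(torch.meshgrid(xs, xs, indexing="ij"), dim=-1)  # (res, res, 2)
    d2 = ((grid[None] - pos[:, None, None, :]) ** 2).sum(-1)           # (N, res, res)
    return torch.exp(-d2 / 1e-2).sum(0)                                # soft Gaussian splats


def simulate(force_fn, pos0, rest, steps=5, dt=0.05):
    """Explicit-Euler rollout of a toy particle system; every op stays differentiable."""
    pos, vel = pos0, torch.zeros_like(pos0)
    for _ in range(steps):
        vel = vel + dt * force_fn(pos - rest)
        pos = pos + dt * vel
    return pos


torch.manual_seed(0)
rest = torch.rand(20, 2)   # rest positions of 20 particles in the unit square
pos0 = rest + 0.1          # displaced initial state

# "Observed" image rendered from the actual dynamics (stiffer than the prior assumes).
observed = splat(simulate(lambda d: -8.0 * d, pos0, rest)).detach()

model = NeuralMaterialAdaptor(stiffness=5.0)  # imperfect prior; correction closes the gap
optim = torch.optim.Adam(model.parameters(), lr=1e-2)
for step in range(200):
    # Image-space loss back-propagated through rendering and simulation.
    loss = ((splat(simulate(model, pos0, rest)) - observed) ** 2).mean()
    optim.zero_grad()
    loss.backward()
    optim.step()
print(f"image-space loss after fitting: {loss.item():.4f}")
```

In the actual method, the prior would be a full material model inside a differentiable simulator and the renderer would be Particle-GS rather than this toy splatting; the sketch only illustrates the optimization pattern of grounding a corrected simulator in image observations.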
Related papers
- AdaptiGraph: Material-Adaptive Graph-Based Neural Dynamics for Robotic Manipulation [30.367498271886866]
This paper introduces AdaptiGraph, a learning-based dynamics modeling approach.
It enables robots to predict, adapt to, and control a wide array of challenging deformable materials.
On prediction and manipulation tasks involving a diverse set of real-world deformable objects, our method exhibits superior prediction accuracy and task proficiency.
arXiv Detail & Related papers (2024-07-10T17:57:04Z) - Latent Intuitive Physics: Learning to Transfer Hidden Physics from A 3D Video [58.043569985784806]
We introduce latent intuitive physics, a transfer learning framework for physics simulation.
It can infer hidden properties of fluids from a single 3D video and simulate the observed fluid in novel scenes.
We validate our model in three ways: (i) novel scene simulation with the learned visual-world physics, (ii) future prediction of the observed fluid dynamics, and (iii) supervised particle simulation.
arXiv Detail & Related papers (2024-06-18T16:37:44Z) - A Physics-embedded Deep Learning Framework for Cloth Simulation [6.8806198396336935]
This paper proposes a physics-embedded learning framework that directly encodes physical features of cloth simulation.
The framework can also integrate external forces and collision handling through either traditional simulators or sub-neural networks.
arXiv Detail & Related papers (2024-03-19T15:21:00Z) - EmerNeRF: Emergent Spatial-Temporal Scene Decomposition via Self-Supervision [85.17951804790515]
EmerNeRF is a simple yet powerful approach for learning spatial-temporal representations of dynamic driving scenes.
It simultaneously captures scene geometry, appearance, motion, and semantics via self-bootstrapping.
Our method achieves state-of-the-art performance in sensor simulation.
arXiv Detail & Related papers (2023-11-03T17:59:55Z) - Learning Physical Dynamics with Subequivariant Graph Neural Networks [99.41677381754678]
Graph Neural Networks (GNNs) have become a prevailing tool for learning physical dynamics.
Physical laws abide by symmetries, which provide a vital inductive bias for model generalization.
Our model achieves an average improvement of over 3% in contact prediction accuracy across 8 scenarios on Physion and 2X lower rollout MSE on RigidFall.
arXiv Detail & Related papers (2022-10-13T10:00:30Z) - Gradient-Based Trajectory Optimization With Learned Dynamics [80.41791191022139]
We use machine learning techniques to learn a differentiable dynamics model of the system from data.
We show that a neural network can model highly nonlinear behaviors accurately for large time horizons.
In our hardware experiments, we demonstrate that our learned model can represent complex dynamics for both the Spot robot and a radio-controlled (RC) car.
arXiv Detail & Related papers (2022-04-09T22:07:34Z) - Learning to Simulate Unseen Physical Systems with Graph Neural Networks [13.202870928432045]
"Graph-based Physics Engine" is a machine learning method embedded with physical priors and material parameters.
We demonstrate that GPE can generalize to materials with different properties not seen in the training set.
In addition, introducing the law of momentum conservation in the model significantly improves the efficiency and stability of learning.
arXiv Detail & Related papers (2022-01-28T07:56:46Z) - Physics-Integrated Variational Autoencoders for Robust and Interpretable Generative Modeling [86.9726984929758]
We focus on the integration of incomplete physics models into deep generative models.
We propose a VAE architecture in which a part of the latent space is grounded by physics.
We demonstrate generative performance improvements across a set of synthetic and real-world datasets.
arXiv Detail & Related papers (2021-02-25T20:28:52Z) - Neural Physicist: Learning Physical Dynamics from Image Sequences [0.6445605125467573]
We present a novel architecture named Neural Physicist (NeurPhy) to learn physical dynamics directly from image sequences using deep neural networks.
Our model not only extracts physically meaningful state representations, but also learns the state-transition dynamics, enabling long-term prediction for unseen image sequences.
arXiv Detail & Related papers (2020-06-09T04:36:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.