PhysicsGen: Can Generative Models Learn from Images to Predict Complex Physical Relations?
- URL: http://arxiv.org/abs/2503.05333v1
- Date: Fri, 07 Mar 2025 11:19:13 GMT
- Title: PhysicsGen: Can Generative Models Learn from Images to Predict Complex Physical Relations?
- Authors: Martin Spitznagel, Jan Vaillant, Janis Keuper
- Abstract summary: We propose to investigate the potential of generative models in the context of physical simulations. We provide a dataset of 300k image pairs and baseline evaluations for three different physical simulation tasks.
- Score: 7.1606014219358425
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The image-to-image translation abilities of generative learning models have recently made significant progress in the estimation of complex (steered) mappings between image distributions. While appearance-based tasks like image inpainting or style transfer have been studied at length, we propose to investigate the potential of generative models in the context of physical simulations. Providing a dataset of 300k image pairs and baseline evaluations for three different physical simulation tasks, we propose a benchmark to investigate the following research questions: i) are generative models able to learn complex physical relations from input-output image pairs? ii) what speedups can be achieved by replacing differential-equation-based simulations? While baseline evaluations of different current models show the potential for high speedups (ii), these results also show strong limitations regarding physical correctness (i). This underlines the need for new methods to enforce physical correctness. Data, baseline models, and evaluation code are available at http://www.physics-gen.org.
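The benchmark framing (learn a mapping from input images to simulation outputs, then measure both speedup and physical error against the differential-equation solver) can be illustrated with a toy heat-diffusion task. Everything below is an illustrative assumption, not one of the paper's three tasks: the finite-difference solver, the FFT-based one-shot surrogate, and the 32x32 grid are stand-ins chosen so the sketch is self-contained.

```python
import numpy as np

def simulate_diffusion(u, steps=50, alpha=0.1):
    """Toy 'ground-truth' solver: explicit finite-difference heat equation
    with periodic boundaries. Stands in for the PDE-based simulations."""
    u = u.copy()
    for _ in range(steps):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
               + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        u = u + alpha * lap
    return u

def surrogate(u, kernel):
    """One-shot surrogate: a single spectral multiplication replaces all
    50 solver steps, illustrating where the speedup (ii) comes from."""
    return np.real(np.fft.ifft2(np.fft.fft2(u) * kernel))

# An (input, simulation-output) image pair, as in the benchmark dataset.
rng = np.random.default_rng(0)
x = rng.random((32, 32))
y = simulate_diffusion(x)

# Closed-form 'training': the exact transfer function of 50 diffusion
# steps (eigenvalues of the periodic 5-point Laplacian, raised to 50).
fx, fy = np.meshgrid(np.fft.fftfreq(32), np.fft.fftfreq(32), indexing="ij")
lap_eig = 2 * (np.cos(2 * np.pi * fx) - 1) + 2 * (np.cos(2 * np.pi * fy) - 1)
kernel = (1 + 0.1 * lap_eig) ** 50

# Physical-correctness proxy for research question (i): pixel MSE between
# the surrogate prediction and the solver output.
y_hat = surrogate(x, kernel)
mse = float(np.mean((y - y_hat) ** 2))
```

A learned model (e.g. a pix2pix-style network trained on such pairs) would replace the closed-form kernel; the evaluation loop, comparing surrogate output against solver output, stays the same.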
Related papers
- SimuScene: Training and Benchmarking Code Generation to Simulate Physical Scenarios [71.65387146697319]
Large language models (LLMs) have been extensively studied for tasks like math competitions, complex coding, and scientific reasoning. We propose SimuScene, the first systematic study that trains and evaluates LLMs on simulating physical scenarios. We build an automatic pipeline to collect data, with human verification to ensure quality.
arXiv Detail & Related papers (2026-02-11T13:26:02Z) - Chain of Time: In-Context Physical Simulation with Image Generation Models [11.493192167966846]
Chain of Time is motivated by in-context reasoning in machine learning, as well as mental simulation in humans. We apply the Chain-of-Time method to synthetic and real-world domains, including 2-D graphics simulations and natural 3-D videos. Using Chain-of-Time simulation substantially improves the performance of a state-of-the-art image generation model.
arXiv Detail & Related papers (2025-10-30T21:46:26Z) - PhysiX: A Foundation Model for Physics Simulations [27.359872113159405]
We introduce PhysiX, the first large-scale foundation model for physics simulation. We show that PhysiX effectively addresses the data bottleneck, outperforming task-specific baselines. Our results indicate that knowledge learned from natural videos can be successfully transferred to physics simulation.
arXiv Detail & Related papers (2025-06-21T18:10:12Z) - PhysMotion: Physics-Grounded Dynamics From a Single Image [24.096925413047217]
We introduce PhysMotion, a novel framework that leverages principled physics-based simulations to guide intermediate 3D representations generated from a single image and input conditions. Our approach addresses the limitations of traditional data-driven generative models and results in more consistent, physically plausible motions.
arXiv Detail & Related papers (2024-11-26T07:59:11Z) - PhysGen: Rigid-Body Physics-Grounded Image-to-Video Generation [29.831214435147583]
We present PhysGen, a novel image-to-video generation method.
It produces a realistic, physically plausible, and temporally consistent video.
Our key insight is to integrate model-based physical simulation with a data-driven video generation process.
arXiv Detail & Related papers (2024-09-27T17:59:57Z) - Latent Intuitive Physics: Learning to Transfer Hidden Physics from A 3D Video [58.043569985784806]
We introduce latent intuitive physics, a transfer learning framework for physics simulation.
It can infer hidden properties of fluids from a single 3D video and simulate the observed fluid in novel scenes.
We validate our model in three ways: (i) novel scene simulation with the learned visual-world physics, (ii) future prediction of the observed fluid dynamics, and (iii) supervised particle simulation.
arXiv Detail & Related papers (2024-06-18T16:37:44Z) - PhyRecon: Physically Plausible Neural Scene Reconstruction [81.73129450090684]
We introduce PHYRECON, the first approach to leverage both differentiable rendering and differentiable physics simulation to learn implicit surface representations.
Central to this design is an efficient transformation between SDF-based implicit representations and explicit surface points.
Our results also exhibit superior physical stability in physical simulators, with at least a 40% improvement across all datasets.
arXiv Detail & Related papers (2024-04-25T15:06:58Z) - PhysGraph: Physics-Based Integration Using Graph Neural Networks [9.016253794897874]
We focus on the detail enhancement of coarse clothing geometry which has many applications including computer games, virtual reality and virtual try-on.
Our contribution is based on a simple observation: evaluating forces is computationally relatively cheap for traditional simulation methods.
We demonstrate that this idea leads to a learnable module that can be trained on basic internal forces of small mesh patches.
arXiv Detail & Related papers (2023-01-27T16:47:10Z) - Conditional Generative Models for Simulation of EMG During Naturalistic Movements [45.698312905115955]
We present a conditional generative neural network trained adversarially to generate motor unit activation potential waveforms.
We demonstrate the ability of such a model to predictively interpolate, with high accuracy, between a much smaller number of numerical-model outputs.
arXiv Detail & Related papers (2022-11-03T14:49:02Z) - Learning Physical Dynamics with Subequivariant Graph Neural Networks [99.41677381754678]
Graph Neural Networks (GNNs) have become a prevailing tool for learning physical dynamics.
Physical laws abide by symmetry, which is a vital inductive bias accounting for model generalization.
Our model achieves on average over 3% enhancement in contact prediction accuracy across 8 scenarios on Physion and 2X lower rollout MSE on RigidFall.
arXiv Detail & Related papers (2022-10-13T10:00:30Z) - Neural Implicit Representations for Physical Parameter Inference from a Single Video [49.766574469284485]
We propose to combine neural implicit representations for appearance modeling with neural ordinary differential equations (ODEs) for modelling physical phenomena.
Our proposed model combines several unique advantages: (i) Contrary to existing approaches that require large training datasets, we are able to identify physical parameters from only a single video.
The use of neural implicit representations enables the processing of high-resolution videos and the synthesis of photo-realistic images.
arXiv Detail & Related papers (2022-04-29T11:55:35Z) - BPLF: A Bi-Parallel Linear Flow Model for Facial Expression Generation from Emotion Set Images [0.0]
Flow-based generative models are deep learning generative models that acquire the ability to generate data by explicitly learning the data distribution. In this paper, a bi-parallel linear flow model for facial emotion generation from emotion set images is constructed. This paper curates current public datasets of facial emotion images, creates a new emotion dataset, and validates the model on it.
arXiv Detail & Related papers (2021-05-27T09:37:09Z) - gradSim: Differentiable simulation for system identification and visuomotor control [66.37288629125996]
We present gradSim, a framework that overcomes the dependence on 3D supervision by leveraging differentiable multiphysics simulation and differentiable rendering.
Our unified graph enables learning in challenging visuomotor control tasks, without relying on state-based (3D) supervision.
arXiv Detail & Related papers (2021-04-06T16:32:01Z) - Learning to Simulate Complex Physics with Graph Networks [68.43901833812448]
We present a machine learning framework and model implementation that can learn to simulate a wide variety of challenging physical domains.
Our framework, which we term "Graph Network-based Simulators" (GNS), represents the state of a physical system with particles, expressed as nodes in a graph, and computes dynamics via learned message passing.
Our results show that our model can generalize from single-timestep predictions with thousands of particles during training, to different initial conditions, thousands of timesteps, and at least an order of magnitude more particles at test time.
arXiv Detail & Related papers (2020-02-21T16:44:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.