Learning an Implicit Physics Model for Image-based Fluid Simulation
- URL: http://arxiv.org/abs/2508.08254v1
- Date: Mon, 11 Aug 2025 17:59:58 GMT
- Title: Learning an Implicit Physics Model for Image-based Fluid Simulation
- Authors: Emily Yue-Ting Jia, Jiageng Mao, Zhiyuan Gao, Yajie Zhao, Yue Wang
- Abstract summary: Humans possess an exceptional ability to imagine 4D scenes, encompassing both motion and 3D geometry, from a single still image. In this paper, we aim to replicate this capacity in neural networks, specifically focusing on natural fluid imagery. Our approach introduces a novel method for generating 4D scenes with physics-consistent animation from a single image.
- Score: 11.273649912979055
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Humans possess an exceptional ability to imagine 4D scenes, encompassing both motion and 3D geometry, from a single still image. This ability is rooted in our accumulated observations of similar scenes and an intuitive understanding of physics. In this paper, we aim to replicate this capacity in neural networks, specifically focusing on natural fluid imagery. Existing methods for this task typically employ simplistic 2D motion estimators to animate the image, leading to motion predictions that often defy physical principles, resulting in unrealistic animations. Our approach introduces a novel method for generating 4D scenes with physics-consistent animation from a single image. We propose the use of a physics-informed neural network that predicts motion for each surface point, guided by a loss term derived from fundamental physical principles, including the Navier-Stokes equations. To capture appearance, we predict feature-based 3D Gaussians from the input image and its estimated depth, which are then animated using the predicted motions and rendered from any desired camera perspective. Experimental results highlight the effectiveness of our method in producing physically plausible animations, showcasing significant performance improvements over existing methods. Our project page is https://physfluid.github.io/ .
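The abstract's physics-informed loss penalizes motion predictions that violate fundamental fluid constraints such as those in the Navier-Stokes equations. A minimal sketch of one such term, the incompressibility residual (div u = 0), is shown below; a toy analytic velocity field stands in for the paper's neural motion predictor, and central finite differences stand in for automatic differentiation. All names here are illustrative, not the authors' implementation.

```python
import numpy as np

def velocity(points):
    """Toy stand-in for a learned per-point motion predictor: (x, y, z) -> (u, v, w)."""
    x, y, z = points[..., 0], points[..., 1], points[..., 2]
    # Analytic field with constant divergence 0.1 (d(0.1*z)/dz), for illustration.
    return np.stack([np.sin(y), np.cos(x), 0.1 * z], axis=-1)

def divergence_penalty(vel_fn, points, eps=1e-4):
    """Mean squared divergence of the velocity field at the sample points.

    Incompressible Navier-Stokes flow requires du/dx + dv/dy + dw/dz = 0,
    so the squared residual can serve as a physics-informed loss term.
    """
    div = np.zeros(points.shape[0])
    for i in range(3):  # accumulate d(vel_i)/d(points_i) by central differences
        dp = np.zeros(3)
        dp[i] = eps
        div += (vel_fn(points + dp)[:, i] - vel_fn(points - dp)[:, i]) / (2 * eps)
    return float(np.mean(div ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.random((64, 3))  # random surface points in the unit cube
    print(divergence_penalty(velocity, pts))  # ~0.01, since div u = 0.1 everywhere
```

In the paper's setting this residual would be one term of a larger loss, computed via autograd on the network's predicted velocities rather than by finite differences.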
Related papers
- PhysHMR: Learning Humanoid Control Policies from Vision for Physically Plausible Human Motion Reconstruction [52.44375492811009]
We present PhysHMR, a unified framework that learns a visual-to-action policy for humanoid control in a physics-based simulator. A key component of our approach is the pixel-as-ray strategy, which lifts 2D keypoints into 3D spatial rays and transforms them into global space. PhysHMR produces high-fidelity, physically plausible motion across diverse scenarios, outperforming prior approaches in both visual accuracy and physical realism.
arXiv Detail & Related papers (2025-10-02T21:01:11Z)
- FreeGave: 3D Physics Learning from Dynamic Videos by Gaussian Velocity [15.375932203870594]
We aim to model 3D scene geometry, appearance, and the underlying physics purely from multi-view videos. In this paper, we propose FreeGave to learn the physics of complex dynamic 3D scenes without needing any object priors.
arXiv Detail & Related papers (2025-06-09T15:31:25Z)
- PhysGen3D: Crafting a Miniature Interactive World from a Single Image [31.41059199853702]
PhysGen3D is a novel framework that transforms a single image into an amodal, camera-centric, interactive 3D scene. At its core, PhysGen3D estimates 3D shapes, poses, physical and lighting properties of objects. We evaluate PhysGen3D's performance against closed-source state-of-the-art (SOTA) image-to-video models, including Pika, Kling, and Gen-3.
arXiv Detail & Related papers (2025-03-26T17:31:04Z)
- Latent Intuitive Physics: Learning to Transfer Hidden Physics from A 3D Video [58.043569985784806]
We introduce latent intuitive physics, a transfer learning framework for physics simulation.
It can infer hidden properties of fluids from a single 3D video and simulate the observed fluid in novel scenes.
We validate our model in three ways: (i) novel scene simulation with the learned visual-world physics, (ii) future prediction of the observed fluid dynamics, and (iii) supervised particle simulation.
arXiv Detail & Related papers (2024-06-18T16:37:44Z)
- DreamPhysics: Learning Physics-Based 3D Dynamics with Video Diffusion Priors [75.83647027123119]
We propose to learn the physical properties of a material field with video diffusion priors. We then utilize a physics-based Material-Point-Method simulator to generate 4D content with realistic motions.
arXiv Detail & Related papers (2024-06-03T16:05:25Z)
- PhysAvatar: Learning the Physics of Dressed 3D Avatars from Visual Observations [62.14943588289551]
We introduce PhysAvatar, a novel framework that combines inverse rendering with inverse physics to automatically estimate the shape and appearance of a human.
PhysAvatar reconstructs avatars dressed in loose-fitting clothes under motions and lighting conditions not seen in the training data.
arXiv Detail & Related papers (2024-04-05T21:44:57Z)
- Contact and Human Dynamics from Monocular Video [73.47466545178396]
Existing deep models predict 2D and 3D kinematic poses from video that are approximately accurate, but contain visible errors.
We present a physics-based method for inferring 3D human motion from video sequences that takes initial 2D and 3D pose estimates as input.
arXiv Detail & Related papers (2020-07-22T21:09:11Z)
- Occlusion resistant learning of intuitive physics from videos [52.25308231683798]
A key ability for artificial systems is to understand physical interactions between objects and to predict future outcomes of a situation. This ability, often referred to as intuitive physics, has recently received attention, and several methods have been proposed to learn these physical rules from video sequences.
arXiv Detail & Related papers (2020-04-30T19:35:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.