Phys4D: Fine-Grained Physics-Consistent 4D Modeling from Video Diffusion
- URL: http://arxiv.org/abs/2603.03485v1
- Date: Tue, 03 Mar 2026 20:01:43 GMT
- Title: Phys4D: Fine-Grained Physics-Consistent 4D Modeling from Video Diffusion
- Authors: Haoran Lu, Shang Wu, Jianshu Zhang, Maojiang Su, Guo Ye, Chenwei Xu, Lie Lu, Pranav Maneriker, Fan Du, Manling Li, Zhaoran Wang, Han Liu
- Abstract summary: We present Phys4D, a pipeline for learning physics-consistent 4D world representations from video diffusion models. We first bootstrap robust geometry and motion representations through large-scale pseudo-supervised pretraining. We then perform physics-grounded supervised fine-tuning using simulation-generated data, enforcing temporally consistent 4D dynamics.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recent video diffusion models have achieved impressive capabilities as large-scale generative world models. However, these models often struggle with fine-grained physical consistency, exhibiting physically implausible dynamics over time. In this work, we present \textbf{Phys4D}, a pipeline for learning physics-consistent 4D world representations from video diffusion models. Phys4D adopts \textbf{a three-stage training paradigm} that progressively lifts appearance-driven video diffusion models into physics-consistent 4D world representations. We first bootstrap robust geometry and motion representations through large-scale pseudo-supervised pretraining, establishing a foundation for 4D scene modeling. We then perform physics-grounded supervised fine-tuning using simulation-generated data, enforcing temporally consistent 4D dynamics. Finally, we apply simulation-grounded reinforcement learning to correct residual physical violations that are difficult to capture through explicit supervision. To evaluate fine-grained physical consistency beyond appearance-based metrics, we introduce a suite of \textbf{4D world consistency evaluations} that probe geometric coherence, motion stability, and long-horizon physical plausibility. Experimental results demonstrate that Phys4D substantially improves fine-grained spatiotemporal and physical consistency compared to appearance-driven baselines, while maintaining strong generative performance. Our project page is available at https://sensational-brioche-7657e7.netlify.app/
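The three-stage paradigm in the abstract (pseudo-supervised pretraining, physics-grounded supervised fine-tuning, then simulation-grounded reinforcement learning) can be sketched as a staged training schedule. This is a minimal illustration only; the function names, the dictionary-based model state, and the stage order bookkeeping are assumptions for exposition, not the authors' code.

```python
# Illustrative sketch of the Phys4D three-stage schedule described in
# the abstract. All names here are hypothetical; each stage function is
# a placeholder for the corresponding real training phase.

def pretrain_geometry_motion(model):
    # Stage 1: large-scale pseudo-supervised pretraining that bootstraps
    # geometry and motion representations for 4D scene modeling.
    model["stages"].append("pseudo_supervised_pretraining")
    return model

def physics_grounded_sft(model):
    # Stage 2: supervised fine-tuning on simulation-generated data,
    # enforcing temporally consistent 4D dynamics.
    model["stages"].append("physics_grounded_sft")
    return model

def simulation_grounded_rl(model):
    # Stage 3: reinforcement learning against a simulator to correct
    # residual physical violations that explicit supervision misses.
    model["stages"].append("simulation_grounded_rl")
    return model

def phys4d_pipeline():
    # Run the three stages in order on an (empty) model state.
    model = {"stages": []}
    for stage in (pretrain_geometry_motion,
                  physics_grounded_sft,
                  simulation_grounded_rl):
        model = stage(model)
    return model
```

The point of the ordering is that each stage narrows the gap left by the previous one: pretraining fixes representation, fine-tuning fixes supervised dynamics, and RL targets whatever violations remain.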
Related papers
- Learning Physics-Grounded 4D Dynamics with Neural Gaussian Force Fields [11.212256115568772]
We introduce an end-to-end neural framework that integrates 3D Gaussian perception with physics-based dynamic modeling to generate physically realistic 4D videos. We also present GSCollision, a 4D Gaussian dataset featuring diverse materials, multi-object interactions, and complex scenes, totaling over 640k rendered physical videos.
arXiv Detail & Related papers (2026-01-29T11:37:41Z) - PhysWorld: From Real Videos to World Models of Deformable Objects via Physics-Aware Demonstration Synthesis [52.905353023326306]
We propose PhysWorld, a framework that synthesizes physically plausible and diverse demonstrations to learn efficient world models. Experiments show that PhysWorld has competitive performance while enabling inference speeds 47 times faster than the recent state-of-the-art method, i.e., PhysTwin.
arXiv Detail & Related papers (2025-10-24T13:25:39Z) - LikePhys: Evaluating Intuitive Physics Understanding in Video Diffusion Models via Likelihood Preference [57.086932851733145]
We introduce LikePhys, a training-free method that evaluates intuitive physics in video diffusion models. We benchmark intuitive physics understanding in current video diffusion models. Empirical results show that, despite current models struggling with complex and chaotic dynamics, there is a clear trend of improvement in physics understanding as model capacity and inference settings scale.
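A training-free, likelihood-preference evaluation of this kind can be sketched as follows: pair each physically valid clip with a matched physically violated one, and count how often the model assigns the valid clip the higher likelihood. The sketch below is a hypothetical illustration, not the LikePhys implementation; `toy_loglik` stands in for a real model likelihood (e.g. a diffusion ELBO).

```python
# Hypothetical sketch of likelihood-preference scoring: the model
# "prefers" physics when the valid clip gets the higher log-likelihood.

def preference_accuracy(loglik, pairs):
    # pairs: list of (valid_clip, invalid_clip) tuples;
    # loglik: scores a clip under the video model.
    correct = sum(1 for valid, invalid in pairs
                  if loglik(valid) > loglik(invalid))
    return correct / len(pairs)

def toy_loglik(clip):
    # Toy stand-in likelihood that penalizes large frame-to-frame jumps;
    # a real evaluation would use the diffusion model's own likelihood.
    return -sum(abs(b - a) for a, b in zip(clip, clip[1:]))

smooth = [0.0, 0.1, 0.2, 0.3]   # stand-in for a physically valid clip
jumpy = [0.0, 1.5, 0.1, 2.0]    # stand-in for a physically violated clip
accuracy = preference_accuracy(toy_loglik, [(smooth, jumpy)])
```

Aggregating this preference accuracy over many scene/violation pairs yields a scalar physics-understanding score without any model training.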
arXiv Detail & Related papers (2025-10-13T15:19:07Z) - PhysCtrl: Generative Physics for Controllable and Physics-Grounded Video Generation [53.06495362038348]
Existing generation models excel at producing photo-realistic videos from text or images, but often lack physical plausibility and 3D controllability. We introduce PhysCtrl, a novel framework for physics-grounded image-to-video generation with physical parameters and force control. Experiments show that PhysCtrl generates realistic, physics-grounded motion trajectories which, when used to drive image-to-video models, yield high-fidelity, controllable videos.
arXiv Detail & Related papers (2025-09-24T17:58:04Z) - PhysGM: Large Physical Gaussian Model for Feed-Forward 4D Synthesis [37.21119648359889]
PhysGM is a feed-forward framework that jointly predicts a 3D Gaussian representation and its physical properties from a single image. Our method effectively generates high-fidelity 4D simulations from a single image in one minute.
arXiv Detail & Related papers (2025-08-19T15:10:30Z) - PhysX-3D: Physical-Grounded 3D Asset Generation [48.78065667043986]
Existing 3D generation primarily emphasizes geometries and textures while neglecting physical-grounded modeling. We present PhysXNet - the first physics-grounded 3D dataset systematically annotated across five foundational dimensions. We also propose PhysXGen, a feed-forward framework for physics-grounded image-to-3D asset generation.
arXiv Detail & Related papers (2025-07-16T17:59:35Z) - Phys4DGen: Physics-Compliant 4D Generation with Multi-Material Composition Perception [9.355276457984603]
Phys4DGen is a novel 4D generation framework that integrates multi-material composition perception with physical simulation. The framework achieves automated, physically plausible 4D generation through three innovative modules. Experiments on both synthetic and real-world datasets demonstrate that Phys4DGen can generate high-fidelity 4D content with physical realism.
arXiv Detail & Related papers (2024-11-25T12:12:38Z) - Physics3D: Learning Physical Properties of 3D Gaussians via Video Diffusion [35.71595369663293]
We propose Physics3D, a novel method for learning various physical properties of 3D objects through a video diffusion model.
Our approach involves designing a highly generalizable physical simulation system based on a viscoelastic material model.
Experiments demonstrate the effectiveness of our method with both elastic and plastic materials.
arXiv Detail & Related papers (2024-06-06T17:59:47Z) - DreamPhysics: Learning Physics-Based 3D Dynamics with Video Diffusion Priors [75.83647027123119]
We propose to learn the physical properties of a material field with video diffusion priors. We then utilize a physics-based Material-Point-Method simulator to generate 4D content with realistic motions.
arXiv Detail & Related papers (2024-06-03T16:05:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.