VideoPDE: Unified Generative PDE Solving via Video Inpainting Diffusion Models
- URL: http://arxiv.org/abs/2506.13754v2
- Date: Tue, 17 Jun 2025 02:15:17 GMT
- Title: VideoPDE: Unified Generative PDE Solving via Video Inpainting Diffusion Models
- Authors: Edward Li, Zichen Wang, Jiahe Huang, Jeong Joon Park
- Abstract summary: We present a unified framework for solving partial differential equations (PDEs) using video-inpainting diffusion transformer models. Our method proposes pixel-space video diffusion models for fine-grained, high-fidelity inpainting and conditioning. Our method offers an accurate and versatile solution across a wide range of PDEs and problem setups, outperforming state-of-the-art baselines.
- Score: 8.189440319895823
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a unified framework for solving partial differential equations (PDEs) using video-inpainting diffusion transformer models. Unlike existing methods that devise specialized strategies for either forward or inverse problems under full or partial observation, our approach unifies these tasks under a single, flexible generative framework. Specifically, we recast PDE-solving as a generalized inpainting problem, e.g., treating forward prediction as inferring missing spatiotemporal information of future states from initial conditions. To this end, we design a transformer-based architecture that conditions on arbitrary patterns of known data to infer missing values across time and space. Our method proposes pixel-space video diffusion models for fine-grained, high-fidelity inpainting and conditioning, while enhancing computational efficiency through hierarchical modeling. Extensive experiments show that our video inpainting-based diffusion model offers an accurate and versatile solution across a wide range of PDEs and problem setups, outperforming state-of-the-art baselines.
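The abstract's core idea — recasting PDE solving as inpainting, with a diffusion model conditioned on an arbitrary mask of known spatiotemporal data — can be sketched as follows. This is a minimal illustration only: the denoiser here is a trivial stand-in for the paper's learned transformer, and the mask-reimposition loop is a generic inpainting-by-replacement scheme, not the paper's exact sampler.

```python
import numpy as np

def denoise_step(x, t):
    # Stand-in for a learned video diffusion denoiser; it merely damps the
    # noise so the loop is runnable. The real model would be a transformer
    # predicting the clean field from the noisy input x at step t.
    return x * 0.5

def inpaint_sample(known, mask, steps=10, seed=0):
    """Inpainting-style conditioning over a (T, H, W) spatiotemporal field.

    known : array holding observed values (e.g. the initial-condition frame)
    mask  : 1 where values are observed, 0 where they must be generated
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(known.shape)  # start from pure noise
    for t in reversed(range(steps)):
        x = denoise_step(x, t)
        # Re-impose the observed data after every step so the generated
        # (masked-out) region stays consistent with the conditioning.
        x = mask * known + (1 - mask) * x
    return x

# Forward prediction as inpainting: observe only frame 0, generate the rest.
T, H, W = 8, 16, 16
field = np.zeros((T, H, W))
field[0] = 1.0                # known initial condition
mask = np.zeros_like(field)
mask[0] = 1.0                 # only the first frame is observed
out = inpaint_sample(field, mask)
```

Because the mask is arbitrary, the same loop covers forward prediction (mask the future), inverse problems (mask the past or coefficients), and partial observation (mask scattered sensors) — which is the unification the paper claims.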
Related papers
- Beyond Blur: A Fluid Perspective on Generative Diffusion Models [1.7624347338410744]
We propose a novel PDE-driven corruption process for generative image synthesis based on advection-diffusion processes. This work bridges fluid dynamics, dimensionless PDE theory, and deep generative modeling, offering a fresh perspective on physically informed image corruption processes.
arXiv Detail & Related papers (2025-06-20T08:31:30Z) - Physics-Informed Distillation of Diffusion Models for PDE-Constrained Generation [19.734778762515468]
Diffusion models have gained increasing attention in the modeling of physical systems, particularly those governed by partial differential equations (PDEs). We propose a simple yet effective post-hoc distillation approach, where PDE constraints are not injected directly into the diffusion process, but instead enforced during a post-hoc distillation stage.
arXiv Detail & Related papers (2025-05-28T14:17:58Z) - Autoregressive Video Generation without Vector Quantization [90.87907377618747]
We reformulate the video generation problem as a non-quantized autoregressive modeling of temporal frame-by-frame prediction. With the proposed approach, we train a novel video autoregressive model without vector quantization, termed NOVA. Our results demonstrate that NOVA surpasses prior autoregressive video models in data efficiency, inference speed, visual fidelity, and video fluency, even with a much smaller model capacity.
arXiv Detail & Related papers (2024-12-18T18:59:53Z) - VISION-XL: High Definition Video Inverse Problem Solver using Latent Image Diffusion Models [58.464465016269614]
We propose a novel framework for solving high-definition video inverse problems using latent image diffusion models. Our approach delivers HD-resolution reconstructions in under 6 seconds per frame on a single NVIDIA 4090 GPU.
arXiv Detail & Related papers (2024-11-29T08:10:49Z) - Solving Video Inverse Problems Using Image Diffusion Models [58.464465016269614]
We introduce an innovative video inverse solver that leverages only image diffusion models. Our method treats the time dimension of a video as the batch dimension of image diffusion models. We also introduce a batch-consistent sampling strategy that encourages consistency across batches.
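The time-as-batch idea above can be sketched in a few lines. This is an illustrative guess at one simple batch-consistent strategy — sharing the same initial noise across all frames — and is not taken from that paper's actual implementation.

```python
import numpy as np

def batch_consistent_noise(T, C, H, W, seed=0):
    """Draw one noise sample and reuse it for every frame in the batch.

    Treating the T frames of a video as a batch lets an image diffusion
    model denoise them independently; reusing identical initial noise
    across frames is one simple way to encourage temporal consistency.
    (Hypothetical sketch -- the paper's exact strategy may differ.)
    """
    rng = np.random.default_rng(seed)
    frame_noise = rng.standard_normal((1, C, H, W))
    # Broadcast the single frame of noise across the time/batch axis.
    return np.broadcast_to(frame_noise, (T, C, H, W)).copy()

noise = batch_consistent_noise(T=4, C=3, H=8, W=8)
```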
arXiv Detail & Related papers (2024-09-04T09:48:27Z) - DiffusionPDE: Generative PDE-Solving Under Partial Observation [10.87702379899977]
We introduce a general framework for solving partial differential equations (PDEs) using generative diffusion models.
We show that the learned generative priors lead to a versatile framework for accurately solving a wide range of PDEs under partial observation.
arXiv Detail & Related papers (2024-06-25T17:48:24Z) - A Variational Perspective on Solving Inverse Problems with Diffusion Models [101.831766524264]
Inverse tasks can be formulated as inferring a posterior distribution over data.
This is however challenging in diffusion models since the nonlinear and iterative nature of the diffusion process renders the posterior intractable.
We propose a variational approach that by design seeks to approximate the true posterior distribution.
arXiv Detail & Related papers (2023-05-07T23:00:47Z) - Reflected Diffusion Models [93.26107023470979]
We present Reflected Diffusion Models, which reverse a reflected differential equation evolving on the support of the data.
Our approach learns the score function through a generalized score matching loss and extends key components of standard diffusion models.
arXiv Detail & Related papers (2023-04-10T17:54:38Z) - VIDM: Video Implicit Diffusion Models [75.90225524502759]
Diffusion models have emerged as a powerful generative method for synthesizing high-quality and diverse set of images.
We propose a video generation method based on diffusion models, where the effects of motion are modeled in an implicit condition.
We improve the quality of the generated videos by proposing multiple strategies such as sampling space truncation, robustness penalty, and positional group normalization.
arXiv Detail & Related papers (2022-12-01T02:58:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.