PMR: Physical Model-Driven Multi-Stage Restoration of Turbulent Dynamic Videos
- URL: http://arxiv.org/abs/2508.00406v1
- Date: Fri, 01 Aug 2025 08:06:41 GMT
- Title: PMR: Physical Model-Driven Multi-Stage Restoration of Turbulent Dynamic Videos
- Authors: Tao Wu, Jingyuan Ye, Ying Fu
- Abstract summary: We introduce a Dynamic Efficiency Index ($DEI$), which combines turbulence intensity, optical flow, and the proportion of dynamic regions to accurately quantify video dynamic intensity under varying turbulence conditions. We also propose a Physical Model-Driven Multi-Stage Video Restoration ($PMR$) framework that consists of three stages: \textbf{de-tilting} for geometric stabilization, \textbf{motion segmentation enhancement} for dynamic region refinement, and \textbf{de-blurring} for quality restoration. $PMR$ employs lightweight backbones and stage-wise joint training to ensure both efficiency and high restoration quality.
- Score: 9.48544376032391
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Geometric distortions and blurring caused by atmospheric turbulence degrade the quality of long-range dynamic scene videos. Existing methods struggle to restore edge details and eliminate mixed distortions, especially under strong turbulence and complex dynamics. To address these challenges, we introduce a Dynamic Efficiency Index ($DEI$), which combines turbulence intensity, optical flow, and the proportion of dynamic regions to accurately quantify video dynamic intensity under varying turbulence conditions, and use it to provide a high-dynamic turbulence training dataset. Additionally, we propose a Physical Model-Driven Multi-Stage Video Restoration ($PMR$) framework that consists of three stages: \textbf{de-tilting} for geometric stabilization, \textbf{motion segmentation enhancement} for dynamic region refinement, and \textbf{de-blurring} for quality restoration. $PMR$ employs lightweight backbones and stage-wise joint training to ensure both efficiency and high restoration quality. Experimental results demonstrate that the proposed method effectively suppresses motion-trailing artifacts, restores edge details, and exhibits strong generalization, especially in real-world scenarios characterized by high turbulence and complex dynamics. We will make the code and datasets openly available.
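The abstract names the ingredients of $DEI$ (turbulence intensity, optical flow, and the proportion of dynamic regions) but not how they are combined. A minimal sketch of one plausible combination, assuming per-clip inputs of a scalar turbulence-strength estimate, a dense flow field, and a binary dynamic-region mask (the function name and weighting are hypothetical, not the paper's formula):

```python
import numpy as np

def dynamic_efficiency_index(turb_intensity: float,
                             flow: np.ndarray,
                             dynamic_mask: np.ndarray) -> float:
    """Illustrative DEI-style score; the paper's exact formula is not given.

    turb_intensity: scalar turbulence-strength estimate for the clip
    flow:           (H, W, 2) dense optical flow for a representative frame pair
    dynamic_mask:   (H, W) boolean mask of pixels judged truly dynamic
    """
    flow_mag = np.linalg.norm(flow, axis=-1)      # per-pixel motion magnitude
    dynamic_ratio = dynamic_mask.mean()           # proportion of dynamic regions
    if dynamic_ratio == 0:
        return 0.0                                # purely static clip
    mean_dynamic_flow = flow_mag[dynamic_mask].mean()
    # One plausible combination: weight scene motion by its spatial extent and
    # discount turbulence strength, so jitter from the medium itself does not
    # masquerade as scene dynamics.
    return dynamic_ratio * mean_dynamic_flow / (1.0 + turb_intensity)
```

Thresholding such an index over candidate clips would be one way to assemble the high-dynamic turbulence training dataset the abstract describes.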
Related papers
- Laplacian Analysis Meets Dynamics Modelling: Gaussian Splatting for 4D Reconstruction [9.911802466255653]
We propose a novel dynamic 3DGS framework with hybrid explicit-implicit functions.
Our method demonstrates state-of-the-art performance in reconstructing complex dynamic scenes, achieving better reconstruction fidelity.
arXiv Detail & Related papers (2025-08-07T01:39:29Z)
- VDEGaussian: Video Diffusion Enhanced 4D Gaussian Splatting for Dynamic Urban Scenes Modeling [68.65587507038539]
We present a novel video diffusion-enhanced 4D Gaussian Splatting framework for dynamic urban scene modeling.
Our key insight is to distill robust, temporally consistent priors from a test-time adapted video diffusion model.
Our method significantly enhances dynamic modeling, especially for fast-moving objects, achieving an approximate PSNR gain of 2 dB.
arXiv Detail & Related papers (2025-08-04T07:24:05Z)
- DynaSplat: Dynamic-Static Gaussian Splatting with Hierarchical Motion Decomposition for Scene Reconstruction [9.391616497099422]
We present DynaSplat, an approach that extends Gaussian Splatting to dynamic scenes.
We classify scene elements as static or dynamic through a novel fusion of deformation offset statistics and 2D motion flow consistency.
We then introduce a hierarchical motion modeling strategy that captures both coarse global transformations and fine-grained local movements.
arXiv Detail & Related papers (2025-06-11T15:13:35Z)
- HAIF-GS: Hierarchical and Induced Flow-Guided Gaussian Splatting for Dynamic Scene [11.906835503107189]
We propose HAIF-GS, a unified framework that enables structured and consistent dynamic modeling through sparse anchor-driven deformation.
We show that HAIF-GS significantly outperforms prior dynamic 3DGS methods in rendering quality, temporal coherence, and reconstruction efficiency.
arXiv Detail & Related papers (2025-06-11T08:45:08Z)
- RAGME: Retrieval Augmented Video Generation for Enhanced Motion Realism [73.38167494118746]
We propose a framework to improve the realism of motion in generated videos.
We advocate for the incorporation of a retrieval mechanism during the generation phase.
Our pipeline is designed to apply to any text-to-video diffusion model.
arXiv Detail & Related papers (2025-04-09T08:14:05Z)
- Event-boosted Deformable 3D Gaussians for Dynamic Scene Reconstruction [50.873820265165975]
We introduce the first approach combining event cameras, which capture high-temporal-resolution, continuous motion data, with deformable 3D-GS for dynamic scene reconstruction.
We propose a GS-Threshold Joint Modeling strategy, creating a mutually reinforcing process that greatly improves both 3D reconstruction and threshold modeling.
We contribute the first event-inclusive 4D benchmark with synthetic and real-world dynamic scenes, on which our method achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-11-25T08:23:38Z)
- Adaptive and Temporally Consistent Gaussian Surfels for Multi-view Dynamic Reconstruction [3.9363268745580426]
AT-GS is a novel method for reconstructing high-quality dynamic surfaces from multi-view videos through per-frame incremental optimization.
We reduce temporal jittering in dynamic surfaces by ensuring consistency in curvature maps across consecutive frames.
Our method achieves superior accuracy and temporal coherence in dynamic surface reconstruction, delivering high-fidelity space-time novel view synthesis.
arXiv Detail & Related papers (2024-11-10T21:30:16Z)
- Turb-Seg-Res: A Segment-then-Restore Pipeline for Dynamic Videos with Atmospheric Turbulence [10.8380383565446]
This paper presents the first segment-then-restore pipeline for restoring videos of dynamic scenes in turbulent environments.
We leverage mean optical flow with an unsupervised motion segmentation method to separate dynamic and static scene components prior to restoration.
Benchmarked against existing restoration methods, our approach removes most of the geometric distortion and enhances video sharpness.
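The segment-then-restore idea lends itself to a compact prototype: because turbulence-induced displacements are roughly zero-mean over time while true scene motion is not, averaging optical flow across frames and thresholding its magnitude gives a crude dynamic/static split. A sketch under those assumptions (the function name and threshold factor are hypothetical):

```python
import numpy as np

def segment_dynamic(flows: np.ndarray, k: float = 2.0) -> np.ndarray:
    """Rough stand-in for unsupervised flow-based motion segmentation.

    flows: (T, H, W, 2) optical flow over T frame pairs
    Returns a boolean (H, W) mask of pixels whose time-averaged motion
    exceeds k times the clip-wide mean, i.e. likely scene motion rather
    than turbulence jitter (which averages toward zero).
    """
    mean_flow = flows.mean(axis=0)            # turbulence jitter cancels out
    mag = np.linalg.norm(mean_flow, axis=-1)  # (H, W) residual motion strength
    return mag > k * mag.mean()               # adaptive threshold
```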
arXiv Detail & Related papers (2024-04-21T10:28:34Z)
- Robust Dynamic Radiance Fields [79.43526586134163]
Dynamic radiance field reconstruction methods aim to model the time-varying structure and appearance of a dynamic scene.
Existing methods, however, assume that accurate camera poses can be reliably estimated by Structure from Motion (SfM) algorithms.
We address this robustness issue by jointly estimating the static and dynamic radiance fields along with the camera parameters.
arXiv Detail & Related papers (2023-01-05T18:59:51Z)
- Single Frame Atmospheric Turbulence Mitigation: A Benchmark Study and A New Physics-Inspired Transformer Model [82.23276183684001]
We propose a physics-inspired transformer model for imaging through atmospheric turbulence.
The proposed network utilizes the power of transformer blocks to jointly extract a dynamical turbulence distortion map.
We present two new real-world turbulence datasets that allow for evaluation with both classical objective metrics and a new task-driven metric using text recognition accuracy.
arXiv Detail & Related papers (2022-07-20T17:09:16Z)
- ACID: Action-Conditional Implicit Visual Dynamics for Deformable Object Manipulation [135.10594078615952]
We introduce ACID, an action-conditional visual dynamics model for volumetric deformable objects.
The benchmark contains over 17,000 action trajectories with six types of plush toys and 78 variants.
Our model achieves the best performance in geometry, correspondence, and dynamics predictions.
arXiv Detail & Related papers (2022-03-14T04:56:55Z)
- Image Reconstruction of Static and Dynamic Scenes through Anisoplanatic Turbulence [1.6114012813668934]
We present a unified method for atmospheric turbulence mitigation in both static and dynamic sequences.
We achieve better results than existing methods by utilizing a novel space-time non-local averaging method.
arXiv Detail & Related papers (2020-08-31T19:20:46Z)