Continuous Exposure-Time Modeling for Realistic Atmospheric Turbulence Synthesis
- URL: http://arxiv.org/abs/2603.01398v2
- Date: Tue, 03 Mar 2026 06:23:11 GMT
- Title: Continuous Exposure-Time Modeling for Realistic Atmospheric Turbulence Synthesis
- Authors: Junwei Zeng, Dong Liang, Sheng-Jun Huang, Kun Zhan, Songcan Chen
- Abstract summary: Atmospheric turbulence significantly degrades long-range imaging by introducing geometric warping and exposure-time-dependent blur. Existing methods for synthesizing turbulence effects often oversimplify the relationship between blur and exposure time. We construct ET-Turb, a large-scale synthetic turbulence dataset that explicitly incorporates continuous exposure-time modeling.
- Score: 65.19146708498346
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Atmospheric turbulence significantly degrades long-range imaging by introducing geometric warping and exposure-time-dependent blur, which adversely affects both visual quality and the performance of high-level vision tasks. Existing methods for synthesizing turbulence effects often oversimplify the relationship between blur and exposure-time, typically assuming fixed or binary exposure settings. This leads to unrealistic synthetic data and limited generalization capability of trained models. To address this gap, we revisit the modulation transfer function (MTF) formulation and propose a novel Exposure-Time-dependent MTF (ET-MTF) that models blur as a continuous function of exposure-time. For blur synthesis, we derive a tilt-invariant point spread function (PSF) from the ET-MTF, which, when integrated with a spatially varying blur-width field, provides a comprehensive and physically accurate characterization of turbulence-induced blur. Building on this synthesis pipeline, we construct ET-Turb, a large-scale synthetic turbulence dataset that explicitly incorporates continuous exposure-time modeling across diverse optical and atmospheric conditions. The dataset comprises 5,083 videos (2,005,835 frames), partitioned into 3,988 training and 1,095 test videos. Extensive experiments demonstrate that models trained on ET-Turb produce more realistic restorations and achieve superior generalization on real-world turbulence data compared to those trained on other datasets. The dataset is publicly available at: github.com/Jun-Wei-Zeng/ET-Turb.
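The paper's ET-MTF formulation is not reproduced here, but the general recipe it builds on — a turbulence MTF whose tilt contribution is attenuated for shorter exposures, inverted to a PSF for blur synthesis — can be sketched with the classical Fried short-/long-exposure MTF. Everything below (function names, the linear interpolation of the tilt-removal factor with exposure fraction, and the parameter values) is an illustrative assumption, not the paper's actual model.

```python
import numpy as np

def turbulence_mtf(f, wavelength, D, r0, exposure_frac):
    """Classical turbulence MTF vs. angular spatial frequency f (cycles/rad).

    exposure_frac in [0, 1]: 0 approximates the tilt-removed (short-exposure)
    limit, 1 the long-exposure limit. Fried's form:
        exp(-3.44 (lambda f / r0)^(5/3) * (1 - alpha (lambda f / D)^(1/3)))
    with alpha = 1 - exposure_frac (a stand-in for the paper's continuous
    exposure-time dependence).
    """
    x = wavelength * np.asarray(f, dtype=float)
    alpha = 1.0 - exposure_frac
    ratio = np.clip(x / D, 0.0, 1.0)  # frequencies beyond the cutoff D/lambda
    return np.exp(-3.44 * (x / r0) ** (5.0 / 3.0)
                  * (1.0 - alpha * ratio ** (1.0 / 3.0)))

def psf_from_mtf(n, df, **mtf_kwargs):
    """Sample the radially symmetric MTF on an n x n frequency grid and
    inverse-FFT it to a normalized blur kernel (PSF)."""
    fx = np.fft.fftfreq(n, d=1.0) * n * df          # frequency samples
    fr = np.hypot(*np.meshgrid(fx, fx))             # radial frequency
    otf = turbulence_mtf(fr, **mtf_kwargs)
    psf = np.abs(np.fft.ifft2(otf))
    return psf / psf.sum()                          # unit-sum kernel
```

A spatially varying blur-width field, as in the paper's pipeline, would amount to varying `r0` (or an equivalent blur-width parameter) per region and applying locally different PSFs rather than one global convolution.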
Related papers
- SLCFormer: Spectral-Local Context Transformer with Physics-Grounded Flare Synthesis for Nighttime Flare Removal [12.135723445465551]
Lens flare is a common nighttime artifact caused by strong light sources scattering within camera lenses. We propose SLCFormer, a novel spectral-local context transformer framework for effective nighttime lens flare removal. Our method achieves state-of-the-art performance, outperforming existing approaches in both quantitative metrics and perceptual visual quality.
arXiv Detail & Related papers (2025-12-17T09:16:59Z) - FAIM: Frequency-Aware Interactive Mamba for Time Series Classification [87.84511960413715]
Time series classification (TSC) is crucial in numerous real-world applications, such as environmental monitoring, medical diagnosis, and posture recognition. We propose FAIM, a lightweight Frequency-Aware Interactive Mamba model. We show that FAIM consistently outperforms existing state-of-the-art (SOTA) methods, achieving a superior trade-off between accuracy and efficiency.
arXiv Detail & Related papers (2025-11-26T08:36:33Z) - TIMED: Adversarial and Autoregressive Refinement of Diffusion-Based Time Series Generation [0.31498833540989407]
TIMED is a unified generative framework that captures global structure via a forward-reverse diffusion process. To further align the real and synthetic distributions in feature space, TIMED incorporates a Maximum Mean Discrepancy (MMD) loss. We show that TIMED generates more realistic and temporally coherent sequences than state-of-the-art generative models.
arXiv Detail & Related papers (2025-09-23T23:05:40Z) - EGTM: Event-guided Efficient Turbulence Mitigation [19.09752432962073]
Turbulence mitigation (TM) aims to remove the distortions and blurs introduced by atmospheric turbulence into frame cameras. We present a novel EGTM framework that extracts pixel-level reliable turbulence-free guidance from noisy turbulent events for temporal lucky fusion. We build the first turbulence data acquisition system to contribute the first real-world event-driven TM dataset.
arXiv Detail & Related papers (2025-09-04T01:49:13Z) - FUSE: Label-Free Image-Event Joint Monocular Depth Estimation via Frequency-Decoupled Alignment and Degradation-Robust Fusion [92.4205087439928]
Image-event joint depth estimation methods leverage complementary modalities for robust perception, yet face challenges in generalizability. We propose the Self-supervised Transfer (PST) and the Frequency-Decoupled Fusion module (FreDF). PST establishes cross-modal knowledge transfer through latent space alignment with image foundation models, effectively mitigating data scarcity. FreDF explicitly decouples high-frequency edge features from low-frequency structural components, resolving modality-specific frequency mismatches. This combined approach enables FUSE to construct a universal image-event framework that only requires lightweight decoder adaptation for target datasets.
arXiv Detail & Related papers (2025-03-25T15:04:53Z) - Unpaired Deblurring via Decoupled Diffusion Model [55.21345354747609]
We propose UID-Diff, a generative-diffusion-based model designed to enhance deblurring performance on unknown domains. We employ two Q-Formers as separate extractors of structural features and blur patterns. The extracted features are used for the supervised deblurring task on synthetic data and the unsupervised blur-transfer task. Experiments on real-world datasets demonstrate that UID-Diff outperforms existing state-of-the-art methods in blur removal and structural preservation.
arXiv Detail & Related papers (2025-02-03T17:00:40Z) - Atmospheric Turbulence Correction via Variational Deep Diffusion [23.353013333671335]
Diffusion models have shown impressive accomplishments in photo-realistic image synthesis and beyond.
We propose a novel deep conditional diffusion model under a variational inference framework to solve the Atmospheric Turbulence correction problem.
arXiv Detail & Related papers (2023-05-08T22:35:07Z) - Blur Interpolation Transformer for Real-World Motion from Blur [52.10523711510876]
We propose a blur interpolation transformer (BiT) to unravel the underlying temporal correlation in blur.
Based on multi-scale residual Swin transformer blocks, we introduce dual-end temporal supervision and temporally symmetric ensembling strategies.
In addition, we design a hybrid camera system to collect the first real-world dataset of one-to-many blur-sharp video pairs.
arXiv Detail & Related papers (2022-11-21T13:10:10Z) - AT-DDPM: Restoring Faces degraded by Atmospheric Turbulence using Denoising Diffusion Probabilistic Models [64.24948495708337]
Atmospheric turbulence causes significant degradation to image quality by introducing blur and geometric distortion.
Various deep learning-based single image atmospheric turbulence mitigation methods, including CNN-based and GAN inversion-based, have been proposed.
Denoising Diffusion Probabilistic Models (DDPMs) have recently gained some traction because of their stable training process and their ability to generate high quality images.
arXiv Detail & Related papers (2022-08-24T03:13:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.