BurstDeflicker: A Benchmark Dataset for Flicker Removal in Dynamic Scenes
- URL: http://arxiv.org/abs/2510.09996v1
- Date: Sat, 11 Oct 2025 03:46:34 GMT
- Title: BurstDeflicker: A Benchmark Dataset for Flicker Removal in Dynamic Scenes
- Authors: Lishen Qu, Zhihao Liu, Shihao Zhou, Yaqi Luo, Jie Liang, Hui Zeng, Lei Zhang, Jufeng Yang
- Abstract summary: We present BurstDeflicker, a scalable benchmark constructed using three complementary data acquisition strategies.
First, we develop a Retinex-based synthesis pipeline that redefines the goal of flicker removal.
Second, we capture 4,000 real-world flicker images from different scenes, which help the model better understand the spatial and temporal characteristics of real flicker artifacts.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Flicker artifacts in short-exposure images are caused by the interplay between the row-wise exposure mechanism of rolling shutter cameras and the temporal intensity variations of alternating current (AC)-powered lighting. These artifacts typically appear as uneven brightness distribution across the image, forming noticeable dark bands. Beyond compromising image quality, this structured noise also affects high-level tasks, such as object detection and tracking, where reliable lighting is crucial. Despite the prevalence of flicker, the lack of a large-scale, realistic dataset has been a significant barrier to advancing research in flicker removal. To address this issue, we present BurstDeflicker, a scalable benchmark constructed using three complementary data acquisition strategies. First, we develop a Retinex-based synthesis pipeline that redefines the goal of flicker removal and enables controllable manipulation of key flicker-related attributes (e.g., intensity, area, and frequency), thereby facilitating the generation of diverse flicker patterns. Second, we capture 4,000 real-world flicker images from different scenes, which help the model better understand the spatial and temporal characteristics of real flicker artifacts and generalize more effectively to wild scenarios. Finally, due to the non-repeatable nature of dynamic scenes, we propose a green-screen method to incorporate motion into image pairs while preserving real flicker degradation. Comprehensive experiments demonstrate the effectiveness of our dataset and its potential to advance research in flicker removal.
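The banding mechanism the abstract describes can be illustrated with a small simulation. This is a hedged sketch, not the paper's synthesis pipeline: the function `add_ac_flicker` and all its parameter values are illustrative assumptions. Each rolling-shutter row starts its exposure at a slightly later time, so short exposures sample different phases of the AC-driven light oscillation (which occurs at twice the mains frequency), producing horizontal dark bands.

```python
import numpy as np

def add_ac_flicker(image, ac_freq=50.0, readout_time=1/120,
                   exposure=1/2000, phase=0.0):
    """Simulate AC-flicker banding on a rolling-shutter image.

    Illustrative model (not the paper's pipeline): light intensity
    under AC power oscillates at twice the mains frequency (100 Hz
    for 50 Hz mains). Each row integrates light over its own short
    exposure window, so rows sample different phases and receive
    different average brightness, producing horizontal bands.
    """
    h = image.shape[0]
    rows = np.arange(h)
    # Start time of each row's exposure window (rolling shutter).
    t0 = rows * (readout_time / h) + phase
    omega = 2 * np.pi * (2 * ac_freq)  # intensity oscillates at 2x mains freq
    # Per-row gain: mean of 0.5 * (1 - cos(omega * t)) over [t0, t0 + exposure].
    gain = 0.5 * (1 - (np.sin(omega * (t0 + exposure)) - np.sin(omega * t0))
                  / (omega * exposure))
    if image.ndim == 2:
        flickered = image * gain[:, None]
    else:
        flickered = image * gain[:, None, None]
    return np.clip(flickered, 0.0, 1.0), gain
```

With a long exposure (a multiple of the 10 ms intensity period), the per-row gain averages out and the bands vanish, which is why flicker is characteristic of short-exposure images.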
Related papers
- PractiLight: Practical Light Control Using Foundational Diffusion Models [78.75949075070595]
PractiLight is a practical approach to light control in generated images.
Our key insight is that lighting relationships in an image are similar in nature to token interaction in self-attention layers.
We demonstrate state-of-the-art performance in terms of quality and control with proven parameter and data efficiency.
arXiv Detail & Related papers (2025-09-01T23:38:40Z) - Nonlocal Retinex-Based Variational Model and its Deep Unfolding Twin for Low-Light Image Enhancement [3.174882428337821]
We propose a variational method for low-light image enhancement based on the Retinex decomposition.
A color correction pre-processing step is applied to the low-light image, which is then used as the observed input in the decomposition.
We extend the model by introducing its deep unfolding counterpart, in which the operators are replaced with learnable networks.
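The Retinex decomposition referenced above (and in the BurstDeflicker synthesis pipeline) splits an image into reflectance and illumination, I ≈ R · L. The sketch below is a generic single-scale Retinex illustration under stated assumptions, not the variational model from the paper: the illumination is simply estimated as a Gaussian-smoothed copy of the image, and `gaussian_blur`, `retinex_decompose`, and their parameters are hypothetical names.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur via two 1-D convolutions (illustrative)."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()
    blurred = np.apply_along_axis(np.convolve, 1, img, kernel, mode="same")
    blurred = np.apply_along_axis(np.convolve, 0, blurred, kernel, mode="same")
    return blurred

def retinex_decompose(img, sigma=15.0, eps=1e-6):
    """Toy single-scale Retinex: I = R * L with smooth illumination L.

    Assumption: illumination varies slowly across the image, so a
    heavily blurred copy approximates L; reflectance R is the ratio.
    """
    L = gaussian_blur(img, sigma)
    R = img / (L + eps)  # eps guards against division by zero
    return R, L
```

Because R is defined as the ratio, R · (L + eps) reconstructs the input exactly; the useful property is that flicker or low-light degradations tend to live in the smooth illumination component L, leaving R closer to scene content.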
arXiv Detail & Related papers (2025-04-10T14:48:26Z) - Understanding and Improving Training-Free AI-Generated Image Detections with Vision Foundation Models [68.90917438865078]
Deepfake techniques for facial synthesis and editing pose serious risks for generative models.
In this paper, we investigate how detection performance varies across model backbones, types, and datasets.
We introduce Contrastive Blur, which enhances performance on facial images, and MINDER, which addresses noise type bias, balancing performance across domains.
arXiv Detail & Related papers (2024-11-28T13:04:45Z) - Harmonizing Light and Darkness: A Symphony of Prior-guided Data Synthesis and Adaptive Focus for Nighttime Flare Removal [44.35766203309201]
Intense light sources often produce flares in captured images at night, which deteriorates the visual quality and negatively affects downstream applications.
In order to train an effective flare removal network, a reliable dataset is essential.
We synthesize a prior-guided dataset named Flare7K*, which contains multi-flare images where the brightness of flares adheres to the laws of illumination.
We propose a plug-and-play Adaptive Focus Module (AFM) that can adaptively mask the clean background areas and assist models in focusing on the regions severely affected by flares.
arXiv Detail & Related papers (2024-03-30T10:37:56Z) - Exposure Bracketing Is All You Need For A High-Quality Image [50.822601495422916]
Multi-exposure images are complementary in denoising, deblurring, high dynamic range imaging, and super-resolution.
In this work, we propose to utilize exposure bracketing photography to obtain a high-quality image by combining these tasks.
In particular, a temporally modulated recurrent network (TMRNet) and a self-supervised adaptation method are proposed.
arXiv Detail & Related papers (2024-01-01T14:14:35Z) - Improving Lens Flare Removal with General Purpose Pipeline and Multiple Light Sources Recovery [69.71080926778413]
Flare artifacts can affect image visual quality and downstream computer vision tasks.
Current methods do not consider automatic exposure and tone mapping in the image signal processing (ISP) pipeline.
We propose a solution that improves lens flare removal by revisiting the ISP and designing a more reliable light source recovery strategy.
arXiv Detail & Related papers (2023-08-31T04:58:17Z) - Toward Real Flare Removal: A Comprehensive Pipeline and A New Benchmark [12.1632995709273]
We propose a well-developed methodology for generating data pairs with flare deterioration.
The method reproduces the similarity of scattered flares and the symmetric effect of reflected ghosts.
We also construct a real-shot pipeline that separately processes the effects of scattering and reflective flares.
arXiv Detail & Related papers (2023-06-28T02:57:25Z) - When the Sun Goes Down: Repairing Photometric Losses for All-Day Depth Estimation [47.617222712429026]
We show how to use a combination of three techniques to allow the existing photometric losses to work for both day and nighttime images.
First, we introduce a per-pixel neural intensity transformation to compensate for the light changes that occur between successive frames.
Second, we predict a per-pixel residual flow map that we use to correct the reprojection correspondences induced by the estimated ego-motion and depth.
arXiv Detail & Related papers (2022-06-28T09:29:55Z) - Crowdsampling the Plenoptic Function [56.10020793913216]
We present a new approach to novel view synthesis under time-varying illumination from crowdsampled photo collections.
We introduce a new DeepMPI representation, motivated by observations on the sparsity structure of the plenoptic function.
Our method can synthesize the same compelling parallax and view-dependent effects as previous MPI methods, while simultaneously interpolating along changes in reflectance and illumination with time.
arXiv Detail & Related papers (2020-07-30T02:52:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.