Learning to Remove Lens Flare in Event Camera
- URL: http://arxiv.org/abs/2512.09016v1
- Date: Tue, 09 Dec 2025 18:59:57 GMT
- Title: Learning to Remove Lens Flare in Event Camera
- Authors: Haiqian Han, Lingdong Kong, Jianing Li, Ao Liang, Chengtao Zhu, Jiacheng Lyu, Lai Xing Ng, Xiangyang Ji, Wei Tsang Ooi, Benoit R. Cottereau,
- Abstract summary: We present E-Deflare, the first framework for removing lens flare from event camera data. We first establish the theoretical foundation by deriving a physics-grounded forward model of the non-linear suppression mechanism. Empowered by the resulting E-Deflare Benchmark, we design E-DeflareNet, which achieves state-of-the-art restoration performance.
- Score: 56.9171469873838
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Event cameras have the potential to revolutionize vision systems with their high temporal resolution and dynamic range, yet they remain susceptible to lens flare, a fundamental optical artifact that causes severe degradation. In event streams, this optical artifact forms a complex, spatio-temporal distortion that has been largely overlooked. We present E-Deflare, the first systematic framework for removing lens flare from event camera data. We first establish the theoretical foundation by deriving a physics-grounded forward model of the non-linear suppression mechanism. This insight enables the creation of the E-Deflare Benchmark, a comprehensive resource featuring a large-scale simulated training set, E-Flare-2.7K, and the first-ever paired real-world test set, E-Flare-R, captured by our novel optical system. Empowered by this benchmark, we design E-DeflareNet, which achieves state-of-the-art restoration performance. Extensive experiments validate our approach and demonstrate clear benefits for downstream tasks. Code and datasets are publicly available.
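The abstract does not reproduce the forward model itself, but its central qualitative effect, non-linear suppression of scene events under a bright veiling glare, can be conveyed with a toy simulation. Everything below (the linear event model, the contrast threshold, the static square flare) is an illustrative assumption, not E-Deflare's actual formulation:

```python
# Toy sketch: how a bright flare suppresses events in the log domain.
# All modeling choices here are assumptions for illustration only.
import numpy as np

def events_from_frames(frames, threshold=0.2, eps=1e-6):
    """Emit (t, x, y, polarity) whenever log intensity moves by >= threshold."""
    log_ref = np.log(frames[0] + eps)            # per-pixel reference level
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        diff = np.log(frame + eps) - log_ref
        fired = np.abs(diff) >= threshold
        for y, x in zip(*np.nonzero(fired)):
            events.append((t, x, y, int(np.sign(diff[y, x]))))
        log_ref[fired] += diff[fired]            # reset reference where fired
    return events

rng = np.random.default_rng(0)
T, H, W = 20, 32, 32
texture = 0.2 + 0.1 * rng.random((H, W))             # static scene texture
signal = 0.15 * np.sin(np.linspace(0.0, 4.0, T))     # temporal brightness signal
scene = texture[None] * (1.0 + signal[:, None, None])

flare = np.zeros((T, H, W))
flare[:, 8:24, 8:24] = 2.0                           # bright static veiling glare

print(len(events_from_frames(scene)))          # events from the clean scene
print(len(events_from_frames(scene + flare)))  # fewer: flare suppresses events
```

The suppression is non-linear because the flare adds a constant inside the logarithm: the same scene change produces a log-intensity change scaled by roughly I/(I+F), so events vanish inside the glare rather than simply being overlaid (real flare also injects its own spurious events, which this sketch omits).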
Related papers
- PI-Light: Physics-Inspired Diffusion for Full-Image Relighting [26.42056487076843]
We introduce Physics-Inspired diffusion for full-image reLight (PI-Light), a two-stage framework that leverages physics-inspired diffusion models. Our design incorporates (i) batch-aware attention, (ii) a physics-guided neural rendering module that enforces physically plausible light transport, and (iii) physics-inspired losses that regularize training dynamics toward a physically meaningful landscape. Experiments demonstrate that PI-Light synthesizes specular highlights and diffuse reflections across a wide variety of materials, achieving superior generalization to real-world scenes compared with prior approaches.
arXiv Detail & Related papers (2026-01-29T18:55:36Z) - OmniLens++: Blind Lens Aberration Correction via Large LensLib Pre-Training and Latent PSF Representation [72.72583225885636]
This work proposes the OmniLens++ framework, which resolves two challenges that hinder the generalization ability of existing pipelines. Experiments on diverse aberrations of real-world lenses and synthetic LensLib show that OmniLens++ exhibits state-of-the-art generalization capacity in blind aberration correction.
arXiv Detail & Related papers (2025-11-21T10:41:54Z) - Bidirectional Image-Event Guided Fusion Framework for Low-Light Image Enhancement [24.5584423318892]
Under extreme low-light conditions, frame-based cameras suffer from severe detail loss due to limited dynamic range. Recent studies have introduced event cameras for event-guided low-light image enhancement. We propose BiLIE, a Bidirectional image-event guided fusion framework for Low-Light Image Enhancement.
arXiv Detail & Related papers (2025-06-06T14:28:17Z) - LensNet: An End-to-End Learning Framework for Empirical Point Spread Function Modeling and Lensless Imaging Reconstruction [32.85180149439811]
Lensless imaging stands out as a promising alternative to conventional lens-based systems. Traditional lensless techniques often require explicit calibration and extensive pre-processing. We propose LensNet, an end-to-end deep learning framework that integrates spatial-domain and frequency-domain representations.
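As a rough picture of what mixing spatial- and frequency-domain representations can look like, here is a minimal PyTorch sketch of a dual-branch block; the layer sizes and the fusion rule are assumptions, not LensNet's published design:

```python
# Hedged sketch of a dual spatial/frequency branch block; layer sizes and
# the fusion rule are assumptions, not LensNet's published design.
import torch
import torch.nn as nn

class SpatialFreqBlock(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.spatial = nn.Conv2d(channels, channels, 3, padding=1)
        self.freq = nn.Conv2d(2 * channels, 2 * channels, 1)  # acts on re/im parts
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        s = torch.relu(self.spatial(x))            # local spatial cues
        f = torch.fft.rfft2(x, norm="ortho")       # global frequency cues
        f = self.freq(torch.cat([f.real, f.imag], dim=1))
        re, im = f.chunk(2, dim=1)
        f = torch.fft.irfft2(torch.complex(re, im), s=x.shape[-2:], norm="ortho")
        return self.fuse(torch.cat([s, f], dim=1))  # merge both domains

x = torch.randn(1, 16, 64, 64)
print(SpatialFreqBlock()(x).shape)  # torch.Size([1, 16, 64, 64])
```

The FFT branch gives every output pixel a global receptive field in one step, which is why frequency-domain processing is attractive for lensless measurements whose information is spread across the whole sensor.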
arXiv Detail & Related papers (2025-05-03T09:11:52Z) - RealRAG: Retrieval-augmented Realistic Image Generation via Self-reflective Contrastive Learning [54.07026389388881]
We present RealRAG, the first real-object-based retrieval-augmented generation framework. RealRAG augments fine-grained and unseen novel object generation by learning and retrieving real-world images to overcome the knowledge gaps of generative models. Our framework integrates fine-grained visual knowledge into the generative models, tackling the distortion problem and improving realism for fine-grained object generation.
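The retrieval half of such a pipeline reduces to nearest-neighbor search in an embedding space. A minimal sketch, with random vectors standing in for a learned image encoder (RealRAG's self-reflective contrastive training is not shown):

```python
# Toy retrieval step for retrieval-augmented generation: pick the real
# reference images closest to a query embedding. Random embeddings are a
# stand-in for a learned encoder; this is not RealRAG's actual pipeline.
import numpy as np

def retrieve(query_emb, db_embs, k=3):
    """Return indices of the k most cosine-similar database entries."""
    q = query_emb / np.linalg.norm(query_emb)
    db = db_embs / np.linalg.norm(db_embs, axis=1, keepdims=True)
    return np.argsort(-(db @ q))[:k]

rng = np.random.default_rng(0)
db = rng.standard_normal((1000, 128))  # embeddings of real-world images
query = rng.standard_normal(128)       # embedding of the generation prompt
print(retrieve(query, db))             # references fed to the generator
```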
arXiv Detail & Related papers (2025-02-02T16:41:54Z) - E2HQV: High-Quality Video Generation from Event Camera via Theory-Inspired Model-Aided Deep Learning [53.63364311738552]
Bio-inspired event cameras or dynamic vision sensors are capable of capturing per-pixel brightness changes (called event-streams) in high temporal resolution and high dynamic range.
This calls for events-to-video (E2V) solutions that take event streams as input and generate high-quality video frames for intuitive visualization.
We propose E2HQV, a novel E2V paradigm designed to produce high-quality video frames from events.
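E2V networks typically consume the event stream as a dense tensor; a common choice is a voxel grid with bilinear temporal weighting. The sketch below shows this generic preprocessing, which is not necessarily E2HQV's exact input representation:

```python
# Generic E2V preprocessing: bin an event stream into a temporal voxel
# grid with bilinear weighting. Not necessarily E2HQV's input format.
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """events: array of (t, x, y, polarity) rows, t ascending."""
    grid = np.zeros((num_bins, height, width))
    t = events[:, 0]
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (num_bins - 1)
    for (_t, x, y, p), tn in zip(events, t_norm):
        lo = int(np.floor(tn))
        w_hi = tn - lo                       # bilinear split between two bins
        grid[lo, int(y), int(x)] += p * (1.0 - w_hi)
        if lo + 1 < num_bins:
            grid[lo + 1, int(y), int(x)] += p * w_hi
    return grid

rng = np.random.default_rng(0)
ev = np.stack([np.sort(rng.random(500)),           # timestamps
               rng.integers(0, 64, 500),           # x
               rng.integers(0, 48, 500),           # y
               rng.choice([-1, 1], 500)], axis=1)  # polarity
print(events_to_voxel_grid(ev, num_bins=5, height=48, width=64).shape)
```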
arXiv Detail & Related papers (2024-01-16T05:10:50Z) - ADFactory: An Effective Framework for Generalizing Optical Flow with Nerf [0.4532517021515834]
We introduce a novel optical flow training framework: automatic data factory (ADF).
ADF only requires RGB images as input to effectively train the optical flow network on the target data domain.
We use advanced NeRF technology to reconstruct scenes from photo collections captured by a monocular camera.
We screen the generated labels from multiple aspects, such as optical flow matching accuracy, radiance field confidence, and depth consistency.
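Multi-criterion label screening of this kind amounts to intersecting per-pixel validity masks. A hedged sketch, where the specific checks and thresholds are illustrative assumptions rather than ADFactory's published values:

```python
# Hedged sketch of multi-criterion pseudo-label screening; the checks and
# thresholds below are illustrative assumptions, not ADFactory's values.
import numpy as np

def screen_labels(fwd_flow, bwd_flow, nerf_conf, depth_a, depth_b,
                  cycle_tol=1.0, conf_min=0.8, depth_tol=0.1):
    """Return a boolean mask of pixels whose flow labels pass every check.
    fwd_flow/bwd_flow: (H, W, 2) with [..., 0] = x and [..., 1] = y motion."""
    h, w = fwd_flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Forward-backward consistency: follow the forward flow, read the
    # backward flow there, and check that the round trip cancels out.
    tx = np.clip((xs + fwd_flow[..., 0]).round().astype(int), 0, w - 1)
    ty = np.clip((ys + fwd_flow[..., 1]).round().astype(int), 0, h - 1)
    cycle = np.linalg.norm(fwd_flow + bwd_flow[ty, tx], axis=-1)
    ok_flow = cycle < cycle_tol                               # matching accuracy
    ok_conf = nerf_conf > conf_min                            # radiance-field confidence
    ok_depth = np.abs(depth_a - depth_b[ty, tx]) < depth_tol  # depth consistency
    return ok_flow & ok_conf & ok_depth

rng = np.random.default_rng(0)
h, w = 32, 32
mask = screen_labels(rng.normal(0, 0.5, (h, w, 2)), rng.normal(0, 0.5, (h, w, 2)),
                     rng.random((h, w)), rng.random((h, w)), rng.random((h, w)))
print(mask.mean())  # fraction of pixels whose labels survive screening
```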
arXiv Detail & Related papers (2023-11-07T05:21:45Z) - Improving Lens Flare Removal with General Purpose Pipeline and Multiple Light Sources Recovery [69.71080926778413]
Lens flare artifacts can degrade image visual quality and downstream computer vision tasks.
Current methods do not consider automatic exposure and tone mapping in the image signal processing (ISP) pipeline.
We propose to improve lens flare removal by revisiting the ISP and designing a more reliable light source recovery strategy.
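The core observation, that flare composites in linear sensor space while most training pairs are built in tone-mapped space, can be shown with a stand-in gamma curve for the ISP (the full pipeline with automatic exposure is more involved):

```python
# Sketch: flare adds in linear sensor space, so pairs should be composited
# before tone mapping. The gamma curve is an assumed stand-in for the ISP.
import numpy as np

def tone_map(linear, gamma=2.2):
    return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)

def inverse_tone_map(srgb, gamma=2.2):
    return np.clip(srgb, 0.0, 1.0) ** gamma

rng = np.random.default_rng(0)
scene_srgb = 0.5 * rng.random((8, 8, 3))    # already tone-mapped base image
flare_linear = 0.3 * rng.random((8, 8, 3))  # flare rendered in linear space

# Naive: composite the flare directly in tone-mapped space.
naive = np.clip(scene_srgb + tone_map(flare_linear), 0.0, 1.0)
# ISP-aware: undo the tone curve, add in linear space, re-apply it once.
isp_aware = tone_map(inverse_tone_map(scene_srgb) + flare_linear)
print(np.abs(naive - isp_aware).max())  # the two pipelines clearly disagree
```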
arXiv Detail & Related papers (2023-08-31T04:58:17Z) - Physics-Driven Turbulence Image Restoration with Stochastic Refinement [80.79900297089176]
Image distortion by atmospheric turbulence is a critical problem in long-range optical imaging systems.
Fast and physics-grounded simulation tools have been introduced to help the deep-learning models adapt to real-world turbulence conditions.
This paper proposes the Physics-integrated Restoration Network (PiRN) to help the network disentangle the stochasticity from the degradation and the underlying image.
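A toy version of such physics-style turbulence simulation applies a smooth random tilt field followed by blur; real simulators model Zernike phase distortions, so the sketch below only conveys the two degradations the training data must cover:

```python
# Toy turbulence degradation: smooth random tilt field + blur. Real
# simulators use Zernike phase statistics; this is a simplification.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def turbulence(img, tilt_px=4.0, blur_sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    h, w = img.shape
    # Smooth random displacement fields approximating local tilts.
    dx = gaussian_filter(rng.standard_normal((h, w)), sigma=8)
    dy = gaussian_filter(rng.standard_normal((h, w)), sigma=8)
    dx *= tilt_px / (np.abs(dx).max() + 1e-9)   # normalize peak tilt in pixels
    dy *= tilt_px / (np.abs(dy).max() + 1e-9)
    ys, xs = np.mgrid[0:h, 0:w]
    warped = map_coordinates(img, [ys + dy, xs + dx], order=1, mode="reflect")
    return gaussian_filter(warped, blur_sigma)  # aperture-averaged blur

img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0        # sharp test square
print(turbulence(img).shape)   # (64, 64), now tilted and blurred
```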
arXiv Detail & Related papers (2023-07-20T05:49:21Z) - Optical Aberration Correction in Postprocessing using Imaging Simulation [17.331939025195478]
The popularity of mobile photography continues to grow.
Recent cameras have shifted some aberration-correction tasks from optical design to postprocessing systems.
We propose a practical method for recovering the degradation caused by optical aberrations.
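The imaging-simulation idea is to synthesize training pairs by pushing sharp images through an aberration forward model. The simplest stand-in is per-channel PSF convolution, with Gaussian PSFs of growing width mimicking chromatic aberration (an assumption; real models use measured, spatially varying PSFs):

```python
# Imaging-simulation sketch: degrade sharp images with per-channel PSFs.
# Gaussian PSFs of increasing width are an assumed stand-in for measured,
# spatially varying aberration PSFs.
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size, sigma):
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def aberrate(img_rgb, sigmas=(1.0, 1.5, 2.2)):   # blur differs per channel
    return np.stack([fftconvolve(img_rgb[..., c], gaussian_psf(15, s), mode="same")
                     for c, s in enumerate(sigmas)], axis=-1)

rng = np.random.default_rng(0)
sharp = rng.random((64, 64, 3))
degraded = aberrate(sharp)     # (degraded, sharp) form one training pair
print(degraded.shape)
```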
arXiv Detail & Related papers (2023-05-10T03:20:39Z) - Back to Event Basics: Self-Supervised Learning of Image Reconstruction for Event Cameras via Photometric Constancy [0.0]
Event cameras are novel vision sensors that sample, in an asynchronous fashion, brightness increments with low latency and high temporal resolution.
We propose a novel, lightweight neural network for optical flow estimation that achieves high-speed inference with only a minor drop in performance.
Results across multiple datasets show that the performance of the proposed self-supervised approach is in line with the state-of-the-art.
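Event-based photometric constancy says the brightness change accumulated from events should match the change the estimated flow predicts from image gradients, roughly L_t ≈ -∇L · v. A minimal self-supervised loss under that assumption (not the paper's code):

```python
# Minimal event-based photometric constancy loss: the brightness change
# integrated from events should match -grad(L) . v predicted by the flow.
# Shapes and the per-pixel event image are assumptions, not the paper's code.
import numpy as np

def photometric_constancy_loss(event_brightness, log_img, flow):
    """event_brightness: (H, W) sum of polarities times contrast threshold;
    log_img: (H, W) log intensity; flow: (H, W, 2) flow in pixels/window."""
    gy, gx = np.gradient(log_img)                          # spatial gradients
    predicted = -(gx * flow[..., 0] + gy * flow[..., 1])   # -grad(L) . v
    return np.mean((event_brightness - predicted) ** 2)

rng = np.random.default_rng(0)
L = rng.random((32, 32))
v = rng.normal(0.0, 0.5, (32, 32, 2))
E = rng.normal(0.0, 0.1, (32, 32))
print(photometric_constancy_loss(E, L, v))  # drives self-supervised training
```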
arXiv Detail & Related papers (2020-09-17T13:30:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.