High Dynamic Range Imaging Based on an Asymmetric Event-SVE Camera System
- URL: http://arxiv.org/abs/2603.00467v1
- Date: Sat, 28 Feb 2026 05:02:41 GMT
- Title: High Dynamic Range Imaging Based on an Asymmetric Event-SVE Camera System
- Authors: Pengju Sun, Banglei Guan, Jing Tao, Zhenbao Yu, Xuanyu Bai, Yang Shang, Qifeng Yu,
- Abstract summary: Event cameras provide microsecond temporal resolution and high dynamic range, while spatially varying exposure (SVE) sensors offer single-shot radiometric diversity. We present a hardware--algorithm co-designed HDR imaging system that tightly integrates an SVE micro-attenuation camera with an event sensor in an asymmetric dual-modality configuration.
- Score: 18.832542461183742
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: High dynamic range (HDR) imaging under extreme illumination remains challenging for conventional cameras due to overexposure. Event cameras provide microsecond temporal resolution and high dynamic range, while spatially varying exposure (SVE) sensors offer single-shot radiometric diversity. We present a hardware--algorithm co-designed HDR imaging system that tightly integrates an SVE micro-attenuation camera with an event sensor in an asymmetric dual-modality configuration. To handle non-coaxial geometry and heterogeneous optics, we develop a two-stage cross-modal alignment framework that combines feature-guided coarse homography estimation with a multi-scale refinement module based on spatial pooling and frequency-domain filtering. On top of aligned representations, we develop a cross-modal HDR reconstruction network with convolutional fusion, mutual-information regularization, and a learnable fusion loss that adaptively balances intensity cues and event-derived structural constraints. Comprehensive experiments on both synthetic benchmarks and real captures demonstrate that the proposed system consistently improves highlight recovery, edge fidelity, and robustness compared with frame-only or event-only HDR pipelines. The results indicate that jointly optimizing optical design, cross-modal alignment, and computational fusion provides an effective foundation for reliable HDR perception in highly dynamic and radiometrically challenging environments.
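The learnable fusion loss sketched in the abstract can be pictured as a convex combination whose weight is itself optimized. Below is a minimal numpy illustration, assuming a sigmoid-parameterized weight balancing an L2 intensity term against an event-derived structural term; `fusion_loss`, the gradient-based edge proxy, and all parameter names are illustrative, not the paper's actual formulation.

```python
import numpy as np

def sigmoid(w):
    return 1.0 / (1.0 + np.exp(-w))

def fusion_loss(pred, target, event_edges, w):
    """Adaptively balance intensity fidelity and event-derived structure.

    pred, target : HDR estimate and reference, arrays in [0, 1]
    event_edges  : edge/structure map derived from event data
    w            : learnable scalar; alpha = sigmoid(w) weights the terms
    """
    alpha = sigmoid(w)
    # Intensity term: plain L2 on pixel values.
    l_intensity = np.mean((pred - target) ** 2)
    # Structural term: gradients of the prediction should agree with
    # event-derived edges (a crude stand-in for the paper's constraint).
    gy, gx = np.gradient(pred)
    pred_edges = np.hypot(gx, gy)
    l_struct = np.mean((pred_edges - event_edges) ** 2)
    return alpha * l_intensity + (1.0 - alpha) * l_struct
```

In training, `w` would be updated by gradient descent alongside the network weights, letting the intensity/structure balance adapt to the data rather than being hand-tuned.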
Related papers
- Scale Equivariance Regularization and Feature Lifting in High Dynamic Range Modulo Imaging [19.49437461280304]
This work proposes a learning-based HDR restoration framework. It incorporates two key strategies: (i) a scale-equivariant regularization that enforces consistency under exposure variations, and (ii) a feature-lifting input design that combines the raw modulo image with its wrapped finite differences.
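A scale-equivariance regularization of the kind described can be sketched as a penalty on how far a restoration map strays from commuting with exposure scaling, i.e. from f(s·x) = s·f(x). The numpy sketch below is illustrative only; the penalty form, the scale set, and the assumption of linear equivariance are not taken from the paper.

```python
import numpy as np

def scale_equivariance_penalty(f, x, scales=(0.5, 2.0)):
    """Penalize deviation from f(s * x) == s * f(x) over exposure scales s.

    f : callable mapping an image array to a restored image array
    x : input image (e.g., a modulo/wrapped observation)
    """
    penalty = 0.0
    for s in scales:
        penalty += np.mean((f(s * x) - s * f(x)) ** 2)
    return penalty / len(scales)
```

A perfectly equivariant map (e.g., any linear gain) incurs zero penalty, while a map that distorts tones differently at different exposures is pushed toward consistency.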
arXiv Detail & Related papers (2026-01-30T14:45:29Z) - HAD: Hierarchical Asymmetric Distillation to Bridge Spatio-Temporal Gaps in Event-Based Object Tracking [80.07224739976911]
RGB cameras excel at capturing rich texture with high resolution, whereas event cameras offer exceptional temporal resolution and a wide dynamic range.
arXiv Detail & Related papers (2025-10-22T13:15:13Z) - Rotation Equivariant Arbitrary-scale Image Super-Resolution [62.41329042683779]
The arbitrary-scale image super-resolution (ASISR) aims to achieve arbitrary-scale high-resolution recoveries from a low-resolution input image.<n>We make efforts to construct a rotation equivariant ASISR method in this study.
arXiv Detail & Related papers (2025-08-07T08:51:03Z) - Autoregressive High-Order Finite Difference Modulo Imaging: High-Dynamic Range for Computer Vision Applications [3.4956406636452626]
High dynamic range (HDR) imaging is vital for capturing the full range of light tones in scenes, essential for computer vision tasks such as autonomous driving. Standard commercial imaging systems face limitations in well depth and quantization precision, hindering their HDR capabilities. We develop a modulo analog-to-digital approach that resets signals upon saturation, enabling estimation of pixel resets through neighboring pixel intensities.
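The reset-and-unwrap idea behind modulo imaging can be illustrated in 1-D: differences of the wrapped signal, re-wrapped into (-λ/2, λ/2], equal the true inter-sample steps whenever those steps stay below λ/2, so integrating them recovers the scene. The numpy sketch below is a simplified first-order, 1-D version; the paper's method is autoregressive, high-order, and operates on 2-D neighborhoods.

```python
import numpy as np

def modulo_wrap(x, lam):
    """Simulate a self-reset (modulo) sensor with saturation level lam."""
    return np.mod(x, lam)

def unwrap_1d(y, lam):
    """Recover x from y = x mod lam, assuming |x[i+1] - x[i]| < lam / 2."""
    d = np.diff(y)
    # Re-wrap differences into (-lam/2, lam/2]: these equal the true steps.
    d = (d + lam / 2.0) % lam - lam / 2.0
    return y[0] + np.concatenate(([0.0], np.cumsum(d)))
```

For example, a smooth ramp rising to three times the saturation level is wrapped three times by the sensor, yet is recovered exactly because each per-sample step is far below λ/2.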
arXiv Detail & Related papers (2025-04-05T16:41:15Z) - DehazeMamba: SAR-guided Optical Remote Sensing Image Dehazing with Adaptive State Space Model [27.83437788159158]
We introduce DehazeMamba, a novel SAR-guided dehazing network built on a progressive haze decoupling fusion strategy. Our approach incorporates two key innovations: a Haze Perception and Decoupling Module (HPDM) that dynamically identifies haze-affected regions through optical-SAR difference analysis, and a Progressive Fusion Module (PFM) that mitigates domain shift through a two-stage fusion process based on feature quality assessment. Extensive experiments demonstrate that DehazeMamba significantly outperforms state-of-the-art methods, achieving a 0.73 dB improvement in PSNR and substantial enhancements in downstream tasks.
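The intuition behind optical-SAR difference analysis is that haze brightens and flattens optical imagery while leaving cloud-penetrating SAR largely unaffected, so regions where the two modalities disagree strongly are likely haze-affected. The numpy sketch below is a crude thresholding stand-in for HPDM, which in the paper is a learned module; the normalization, threshold, and function names are all illustrative.

```python
import numpy as np

def haze_mask(optical_gray, sar_intensity, thresh=0.3):
    """Crude haze-region proposal from optical-SAR disagreement.

    optical_gray  : single-channel optical image
    sar_intensity : co-registered SAR backscatter image
    Returns a boolean mask flagging likely haze-affected pixels.
    """
    def norm(a):
        rng = a.max() - a.min()
        return (a - a.min()) / (rng + 1e-8)
    # Large normalized differences suggest haze corrupting the optical view.
    diff = np.abs(norm(optical_gray) - norm(sar_intensity))
    return diff > thresh
```

Such a mask could then gate where SAR features are trusted over optical ones during fusion.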
arXiv Detail & Related papers (2025-03-17T11:25:05Z) - Event-assisted 12-stop HDR Imaging of Dynamic Scene [20.064191181938533]
We propose a novel 12-stop HDR imaging approach for dynamic scenes, leveraging a dual-camera system with an event camera and an RGB camera. The event camera provides temporally dense, high dynamic range signals that improve alignment between LDR frames with large exposure differences, reducing ghosting artifacts caused by motion. Our method achieves state-of-the-art performance, successfully extending HDR imaging to 12 stops in dynamic scenes.
arXiv Detail & Related papers (2024-12-19T10:17:50Z) - Event-based Asynchronous HDR Imaging by Temporal Incident Light Modulation [54.64335350932855]
We propose a Pixel-Asynchronous HDR imaging system, based on key insights into the challenges in HDR imaging.
Our proposed Asyn system integrates the Dynamic Vision Sensors (DVS) with a set of LCD panels.
The LCD panels modulate the irradiance incident upon the DVS by altering their transparency, thereby triggering the pixel-independent event streams.
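The pixel-independent triggering described above follows the standard event-camera contrast rule: a pixel emits an event whenever the log of the irradiance it receives (scene radiance times LCD transmittance) moves by a threshold C from the level at its last event. The numpy sketch below simulates one pixel under a transmittance schedule; the threshold value, schedule, and function names are illustrative, not from the paper.

```python
import numpy as np

def simulate_events(radiance, transparency, C=0.2):
    """Emit (timestep, polarity) events on log-irradiance threshold crossings.

    radiance     : scalar scene radiance at one pixel (constant here)
    transparency : 1-D array of LCD transmittance over time, values in (0, 1]
    C            : contrast threshold in log-irradiance units
    """
    log_i = np.log(radiance * transparency)
    ref = log_i[0]  # log level at the last emitted event
    events = []
    for t in range(1, len(log_i)):
        # Fire one event per full threshold crossing, then move the reference.
        while log_i[t] - ref >= C:
            ref += C
            events.append((t, +1))
        while ref - log_i[t] >= C:
            ref -= C
            events.append((t, -1))
    return events
```

Because each pixel compares only against its own reference level, the resulting event streams are asynchronous and pixel-independent, which is what the LCD modulation exploits.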
arXiv Detail & Related papers (2024-03-14T13:45:09Z) - RRNet: Relational Reasoning Network with Parallel Multi-scale Attention for Salient Object Detection in Optical Remote Sensing Images [82.1679766706423]
Salient object detection (SOD) for optical remote sensing images (RSIs) aims at locating and extracting visually distinctive objects/regions from the optical RSIs.
We propose a relational reasoning network with parallel multi-scale attention for SOD in optical RSIs.
Our proposed RRNet outperforms the existing state-of-the-art SOD competitors both qualitatively and quantitatively.
arXiv Detail & Related papers (2021-10-27T07:18:32Z) - Attention-Guided Progressive Neural Texture Fusion for High Dynamic Range Image Restoration [48.02238732099032]
In this work, we propose an Attention-guided Progressive Neural Texture Fusion (APNT-Fusion) HDR restoration model.
An efficient two-stream structure is proposed which separately focuses on texture feature transfer over saturated regions and multi-exposure tonal and texture feature fusion.
A progressive texture blending module is designed to blend the encoded two-stream features in a multi-scale and progressive manner.
arXiv Detail & Related papers (2021-07-13T16:07:00Z) - Single-shot Hyperspectral-Depth Imaging with Learned Diffractive Optics [72.9038524082252]
We propose a compact single-shot monocular hyperspectral-depth (HS-D) imaging method.
Our method uses a diffractive optical element (DOE), the point spread function of which changes with respect to both depth and spectrum.
To facilitate learning the DOE, we present a first HS-D dataset by building a benchtop HS-D imager.
arXiv Detail & Related papers (2020-09-01T14:19:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.