HDR Reconstruction from Bracketed Exposures and Events
- URL: http://arxiv.org/abs/2203.14825v1
- Date: Mon, 28 Mar 2022 15:04:41 GMT
- Title: HDR Reconstruction from Bracketed Exposures and Events
- Authors: Richard Shaw, Sibi Catley-Chandar, Ales Leonardis, Eduardo
Perez-Pellitero
- Abstract summary: Reconstruction of high-quality HDR images is at the core of modern computational photography.
We present a multi-modal end-to-end learning-based HDR imaging system that fuses bracketed images and events in the feature domain.
Our framework exploits the higher temporal resolution of events by sub-sampling the input event streams using a sliding window.
- Score: 12.565039752529797
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reconstruction of high-quality HDR images is at the core of modern
computational photography. Significant progress has been made with multi-frame
HDR reconstruction methods, producing high-resolution, rich and accurate color
reconstructions with high-frequency details. However, they are still prone to
fail in dynamic or largely over-exposed scenes, where frame misalignment often
results in visible ghosting artifacts. Recent approaches attempt to alleviate
this by utilizing an event-based camera (EBC), which measures only binary
changes of illumination. Despite their desirable high temporal resolution and
dynamic range characteristics, such approaches have not outperformed
traditional multi-frame reconstruction methods, mainly due to the lack of color
information and low-resolution sensors. In this paper, we propose to leverage
both bracketed LDR images and simultaneously captured events to obtain the best
of both worlds: high-quality RGB information from bracketed LDRs and
complementary high frequency and dynamic range information from events. We
present a multi-modal end-to-end learning-based HDR imaging system that fuses
bracketed images and event modalities in the feature domain using attention and
multi-scale spatial alignment modules. We propose a novel event-to-image
feature distillation module that learns to translate event features into the
image-feature space with self-supervision. Our framework exploits the higher
temporal resolution of events by sub-sampling the input event streams using a
sliding window, enriching our combined feature representation. Our proposed
approach surpasses SoTA multi-frame HDR reconstruction methods using synthetic
and real events, with a 2dB and 1dB improvement in PSNR-L and PSNR-mu on the
HdM HDR dataset, respectively.
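The sliding-window sub-sampling of the event stream described above can be sketched as follows. This is a minimal illustration, assuming events are stored as (t, x, y, polarity) rows; the function names, the overlapping-window scheme, and the polarity accumulation are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def event_windows(events, t_start, t_end, num_windows=3, overlap=0.5):
    """Split an event stream into overlapping temporal windows.

    events: (N, 4) array of (t, x, y, polarity) rows.
    Window span and stride are chosen so the windows tile [t_start, t_end)
    with the requested fractional overlap.
    """
    span = (t_end - t_start) / (num_windows - (num_windows - 1) * overlap)
    stride = span * (1.0 - overlap)
    windows = []
    for i in range(num_windows):
        w0 = t_start + i * stride
        w1 = w0 + span
        mask = (events[:, 0] >= w0) & (events[:, 0] < w1)
        windows.append(events[mask])
    return windows

def voxelize(events, height, width):
    """Accumulate signed event polarities into a single-channel frame,
    one simple way to turn a window of events into a network input."""
    frame = np.zeros((height, width), dtype=np.float32)
    if len(events):
        x = events[:, 1].astype(int)
        y = events[:, 2].astype(int)
        p = np.where(events[:, 3] > 0, 1.0, -1.0)
        np.add.at(frame, (y, x), p)  # unbuffered: repeated pixels accumulate
    return frame
```

Each window's voxelized frame would then be fed to the event branch of the network, giving the fusion stage several temporally distinct event representations per exposure.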
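The two reported metrics differ only in the domain where the error is measured: PSNR-L is computed on linear HDR values, while PSNR-mu is computed after mu-law tonemapping, the standard protocol in the HDR deghosting literature (typically with mu = 5000). A minimal sketch, assuming images are normalized to [0, 1]:

```python
import numpy as np

MU = 5000.0  # mu-law compression factor commonly used in HDR deghosting work

def mu_tonemap(hdr):
    """mu-law tonemapping: log-compresses linear HDR values in [0, 1]."""
    return np.log1p(MU * hdr) / np.log1p(MU)

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((pred - target) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def psnr_l_and_mu(pred, target):
    """Return (PSNR-L, PSNR-mu): linear-domain and tonemapped-domain PSNR."""
    return psnr(pred, target), psnr(mu_tonemap(pred), mu_tonemap(target))
```

Because mu-law compression expands dark values and compresses highlights, PSNR-mu weights errors in shadows more heavily than PSNR-L, which is why the two scores can improve by different margins.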
Related papers
- HDRT: Infrared Capture for HDR Imaging [8.208995723545502]
We propose a new approach, High Dynamic Range Thermal (HDRT), for HDR acquisition using a separate, commonly available, thermal infrared (IR) sensor.
We propose a novel deep neural method (HDRTNet) which combines IR and SDR content to generate HDR images.
We show substantial quantitative and qualitative quality improvements on both over- and under-exposed images, showing that our approach is robust to capturing in multiple different lighting conditions.
arXiv Detail & Related papers (2024-06-08T13:43:44Z) - Generating Content for HDR Deghosting from Frequency View [56.103761824603644]
Recent Diffusion Models (DMs) have been introduced in the HDR imaging field.
DMs require extensive iterations with large models to estimate entire images.
We propose the Low-Frequency aware Diffusion (LF-Diff) model for ghost-free HDR imaging.
arXiv Detail & Related papers (2024-04-01T01:32:11Z) - Event-based Asynchronous HDR Imaging by Temporal Incident Light Modulation [54.64335350932855]
We propose a Pixel-Asynchronous HDR imaging system, based on key insights into the challenges in HDR imaging.
Our proposed Asyn system integrates the Dynamic Vision Sensors (DVS) with a set of LCD panels.
The LCD panels modulate the irradiance incident upon the DVS by altering their transparency, thereby triggering the pixel-independent event streams.
arXiv Detail & Related papers (2024-03-14T13:45:09Z) - Towards High-quality HDR Deghosting with Conditional Diffusion Models [88.83729417524823]
High Dynamic Range (HDR) images can be recovered from several Low Dynamic Range (LDR) images by existing Deep Neural Network (DNN) techniques.
DNNs still generate ghosting artifacts when LDR images have saturation and large motion.
We formulate the HDR deghosting problem as an image generation task that leverages LDR features as the diffusion model's condition.
arXiv Detail & Related papers (2023-11-02T01:53:55Z) - Self-Supervised High Dynamic Range Imaging with Multi-Exposure Images in
Dynamic Scenes [58.66427721308464]
Self is a self-supervised reconstruction method that only requires dynamic multi-exposure images during training.
Self achieves superior results against the state-of-the-art self-supervised methods, and comparable performance to supervised ones.
arXiv Detail & Related papers (2023-10-03T07:10:49Z) - LAN-HDR: Luminance-based Alignment Network for High Dynamic Range Video
Reconstruction [20.911738532410766]
We propose an end-to-end HDR video composition framework, which aligns LDR frames in feature space and then merges aligned features into an HDR frame.
In training, we adopt a temporal loss, in addition to frame reconstruction losses, to enhance temporal consistency and thus reduce flickering.
arXiv Detail & Related papers (2023-08-22T01:43:00Z) - Multi-Exposure HDR Composition by Gated Swin Transformer [8.619880437958525]
This paper provides a novel multi-exposure fusion model based on Swin Transformer.
We exploit long-distance contextual dependencies in the exposure-space pyramid via the self-attention mechanism.
Experiments show that our model achieves accuracy on par with current top-performing multi-exposure HDR imaging models.
arXiv Detail & Related papers (2023-03-15T15:38:43Z) - Deep Progressive Feature Aggregation Network for High Dynamic Range
Imaging [24.94466716276423]
We propose a deep progressive feature aggregation network for improving HDR imaging quality in dynamic scenes.
Our method implicitly samples high-correspondence features and aggregates them in a coarse-to-fine manner for alignment.
Experiments show that our proposed method can achieve state-of-the-art performance under different scenes.
arXiv Detail & Related papers (2022-08-04T04:37:35Z) - FlexHDR: Modelling Alignment and Exposure Uncertainties for Flexible HDR
Imaging [0.9185931275245008]
We present a new HDR imaging technique that models alignment and exposure uncertainties to produce high quality HDR results.
We introduce a strategy that learns to jointly align and assess the alignment and exposure reliability using an HDR-aware, uncertainty-driven attention map.
Experimental results show our method can produce better quality HDR images with up to 0.8dB PSNR improvement over the state-of-the-art.
arXiv Detail & Related papers (2022-01-07T14:27:17Z) - Deep Burst Super-Resolution [165.90445859851448]
We propose a novel architecture for the burst super-resolution task.
Our network takes multiple noisy RAW images as input, and generates a denoised, super-resolved RGB image as output.
In order to enable training and evaluation on real-world data, we additionally introduce the BurstSR dataset.
arXiv Detail & Related papers (2021-01-26T18:57:21Z) - HDR-GAN: HDR Image Reconstruction from Multi-Exposed LDR Images with
Large Motions [62.44802076971331]
We propose a novel GAN-based model, HDR-GAN, for synthesizing HDR images from multi-exposed LDR images.
By incorporating adversarial learning, our method is able to produce faithful information in the regions with missing content.
arXiv Detail & Related papers (2020-07-03T11:42:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.