Deep Progressive Feature Aggregation Network for High Dynamic Range
Imaging
- URL: http://arxiv.org/abs/2208.02448v2
- Date: Mon, 29 May 2023 07:28:46 GMT
- Title: Deep Progressive Feature Aggregation Network for High Dynamic Range
Imaging
- Authors: Jun Xiao, Qian Ye, Tianshan Liu, Cong Zhang, Kin-Man Lam
- Abstract summary: We propose a deep progressive feature aggregation network for improving HDR imaging quality in dynamic scenes.
Our method implicitly samples high-correspondence features and aggregates them in a coarse-to-fine manner for alignment.
Experiments show that our proposed method can achieve state-of-the-art performance under different scenes.
- Score: 24.94466716276423
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: High dynamic range (HDR) imaging is an important task in image processing
that aims to generate well-exposed images in scenes with varying illumination.
Although existing multi-exposure fusion methods have achieved impressive
results, generating high-quality HDR images in dynamic scenes is still
difficult. The primary challenges are ghosting artifacts caused by object
motion between low dynamic range images and distorted content in under and
overexposed regions. In this paper, we propose a deep progressive feature
aggregation network for improving HDR imaging quality in dynamic scenes. To
address the issues of object motion, our method implicitly samples
high-correspondence features and aggregates them in a coarse-to-fine manner for
alignment. In addition, our method adopts a densely connected network structure
based on the discrete wavelet transform, which aims to decompose the input
features into multiple frequency subbands and adaptively restore corrupted
contents. Experiments show that our proposed method can achieve
state-of-the-art performance under different scenes, compared to other
promising HDR imaging methods. Specifically, the HDR images generated by our
method contain cleaner and more detailed content, with fewer distortions,
leading to better visual quality.
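The abstract's wavelet-based restoration idea rests on decomposing features into frequency subbands and later recombining them. As a minimal, self-contained illustration (not the paper's actual network, which uses learned, densely connected layers), the sketch below implements a one-level 2-D Haar discrete wavelet transform in plain NumPy: a feature map is split into a coarse low-frequency band and three high-frequency detail bands, and the inverse transform reconstructs it exactly.

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar DWT: split a map into LL, LH, HL, HH subbands."""
    # Orthonormal pairwise sums/differences along rows, then columns.
    a = (x[0::2, :] + x[1::2, :]) / np.sqrt(2)   # row low-pass
    d = (x[0::2, :] - x[1::2, :]) / np.sqrt(2)   # row high-pass
    ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)  # coarse content
    lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)  # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)  # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)  # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse transform: recombine the four subbands into the original map."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2] = (ll + lh) / np.sqrt(2)
    a[:, 1::2] = (ll - lh) / np.sqrt(2)
    d = np.empty_like(a)
    d[:, 0::2] = (hl + hh) / np.sqrt(2)
    d[:, 1::2] = (hl - hh) / np.sqrt(2)
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2, :] = (a + d) / np.sqrt(2)
    x[1::2, :] = (a - d) / np.sqrt(2)
    return x

feat = np.random.rand(8, 8)  # stand-in for one channel of a feature map
subbands = haar_dwt2(feat)
recon = haar_idwt2(*subbands)
assert np.allclose(recon, feat)  # the transform is perfectly invertible
```

In the paper's setting, the network would process or restore the subbands (e.g. suppressing corruption in the high-frequency bands) before the inverse transform; here the round trip simply demonstrates that the decomposition loses no information.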
Related papers
- Intrinsic Single-Image HDR Reconstruction [0.6554326244334868]
We introduce a physically-inspired remodeling of the HDR reconstruction problem in the intrinsic domain.
We show that dividing the problem into two simpler sub-tasks improves performance in a wide variety of photographs.
arXiv Detail & Related papers (2024-09-20T17:56:51Z)
- Generating Content for HDR Deghosting from Frequency View [56.103761824603644]
Recent Diffusion Models (DMs) have been introduced in HDR imaging field.
DMs require extensive iterations with large models to estimate entire images.
We propose the Low-Frequency aware Diffusion (LF-Diff) model for ghost-free HDR imaging.
arXiv Detail & Related papers (2024-04-01T01:32:11Z)
- Towards High-quality HDR Deghosting with Conditional Diffusion Models [88.83729417524823]
High Dynamic Range (HDR) images can be recovered from several Low Dynamic Range (LDR) images by existing Deep Neural Network (DNN) techniques.
DNNs still generate ghosting artifacts when LDR images have saturation and large motion.
We formulate the HDR deghosting problem as an image generation task that leverages LDR features as the diffusion model's condition.
arXiv Detail & Related papers (2023-11-02T01:53:55Z)
- Self-Supervised High Dynamic Range Imaging with Multi-Exposure Images in Dynamic Scenes [58.66427721308464]
Self is a self-supervised reconstruction method that only requires dynamic multi-exposure images during training.
Self achieves superior results against the state-of-the-art self-supervised methods, and comparable performance to supervised ones.
arXiv Detail & Related papers (2023-10-03T07:10:49Z)
- High Dynamic Range Imaging of Dynamic Scenes with Saturation Compensation but without Explicit Motion Compensation [20.911738532410766]
High dynamic range (HDR) imaging is a highly challenging task, since a large amount of information is lost due to the limitations of camera sensors.
For HDR imaging, some methods capture multiple low dynamic range (LDR) images with varying exposures to aggregate more information.
Most existing methods focus on motion compensation to reduce the ghosting artifacts, but they still produce unsatisfying results.
arXiv Detail & Related papers (2023-08-22T02:44:03Z)
- Alignment-free HDR Deghosting with Semantics Consistent Transformer [76.91669741684173]
High dynamic range imaging aims to retrieve information from multiple low-dynamic range inputs to generate realistic output.
Existing methods often focus on the spatial misalignment across input frames caused by the foreground and/or camera motion.
We propose a novel alignment-free network with a Semantics Consistent Transformer (SCTNet) with both spatial and channel attention modules.
arXiv Detail & Related papers (2023-05-29T15:03:23Z)
- A Unified HDR Imaging Method with Pixel and Patch Level [41.14378863436963]
We propose a hybrid HDR deghosting network, called HyNet, to generate visually pleasing HDR images.
Experiments demonstrate that HyNet outperforms state-of-the-art methods both quantitatively and qualitatively, achieving appealing HDR visualization with unified textures and colors.
arXiv Detail & Related papers (2023-04-14T06:21:57Z)
- SMAE: Few-shot Learning for HDR Deghosting with Saturation-Aware Masked Autoencoders [97.64072440883392]
We propose a novel semi-supervised approach to realize few-shot HDR imaging via two stages of training, called SSHDR.
Unlike previous methods, which directly recover content and remove ghosts simultaneously, making an optimal result hard to achieve.
Experiments demonstrate that SSHDR outperforms state-of-the-art methods quantitatively and qualitatively within and across different datasets.
arXiv Detail & Related papers (2023-04-14T03:42:51Z)
- Learning Regularized Multi-Scale Feature Flow for High Dynamic Range Imaging [29.691689596845112]
We propose a deep network that tries to learn multi-scale feature flow guided by the regularized loss.
It first extracts multi-scale features and then aligns features from non-reference images.
After alignment, we use residual channel attention blocks to merge the features from different images.
arXiv Detail & Related papers (2022-07-06T09:37:28Z)
- HDRUNet: Single Image HDR Reconstruction with Denoising and Dequantization [39.82945546614887]
We propose a novel learning-based approach using a spatially dynamic encoder-decoder network, HDRUNet, to learn an end-to-end mapping for single image HDR reconstruction.
Our method achieves the state-of-the-art performance in quantitative comparisons and visual quality.
arXiv Detail & Related papers (2021-05-27T12:12:34Z)
- Single-Image HDR Reconstruction by Learning to Reverse the Camera Pipeline [100.5353614588565]
We propose to incorporate the domain knowledge of the LDR image formation pipeline into our model.
We model the HDR-to-LDR image formation pipeline as (1) dynamic range clipping, (2) non-linear mapping from a camera response function, and (3) quantization.
We demonstrate that the proposed method performs favorably against state-of-the-art single-image HDR reconstruction algorithms.
arXiv Detail & Related papers (2020-04-02T17:59:04Z)
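The three-stage formation pipeline described in the last entry (clipping, camera response, quantization) can be sketched in a few lines. The snippet below is a toy forward model, not the paper's learned pipeline: the simple gamma curve stands in for the camera response function, and the `exposure`, `gamma`, and `bits` parameters are illustrative assumptions.

```python
import numpy as np

def hdr_to_ldr(hdr, exposure=1.0, gamma=2.2, bits=8):
    """Toy HDR-to-LDR formation model: clip, apply a CRF, quantize.
    A fixed gamma curve stands in for a real camera response function."""
    clipped = np.clip(hdr * exposure, 0.0, 1.0)   # (1) dynamic range clipping
    crf = clipped ** (1.0 / gamma)                # (2) non-linear camera response
    levels = 2 ** bits - 1
    return np.round(crf * levels) / levels        # (3) quantization to 8-bit codes

scene = np.array([0.001, 0.25, 1.0, 4.0])  # linear radiance, beyond sensor range
ldr = hdr_to_ldr(scene)                    # radiance above 1.0 saturates to 1.0
```

Single-image HDR reconstruction methods of this kind learn to invert each stage in turn; the clipping step is why saturated highlights are the hardest information to recover.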
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.