Scale-aware Two-stage High Dynamic Range Imaging
- URL: http://arxiv.org/abs/2303.06575v1
- Date: Sun, 12 Mar 2023 05:17:24 GMT
- Title: Scale-aware Two-stage High Dynamic Range Imaging
- Authors: Hui Li, Xuyang Yao, Wuyuan Xie, Miaohui Wang
- Abstract summary: We propose a scale-aware two-stage high dynamic range imaging framework (STHDR) to generate high-quality, ghost-free HDR compositions.
Specifically, our framework consists of feature alignment and two-stage fusion.
In the first stage of feature fusion, we obtain a preliminary result with few ghosting artifacts.
In the second stage, we conflate the first-stage result with aligned features to further reduce residual artifacts.
- Score: 13.587403084724015
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep high dynamic range (HDR) imaging as an image translation issue has
achieved great performance without explicit optical flow alignment. However,
challenges remain over content association ambiguities especially caused by
saturation and large-scale movements. To address the ghosting issue and enhance
the details in saturated regions, we propose a scale-aware two-stage high
dynamic range imaging framework (STHDR) to generate high-quality ghost-free HDR
image. The scale-aware technique and two-stage fusion strategy can
progressively and effectively improve the HDR composition performance.
Specifically, our framework consists of feature alignment and two-stage fusion.
In feature alignment, we propose a spatial correct module (SCM) to better
exploit useful information among non-aligned features to avoid ghosting and
saturation. In the first stage of feature fusion, we obtain a preliminary
fusion result with little ghosting. In the second stage, we conflate the
results of the first stage with aligned features to further reduce residual
artifacts and thus improve the overall quality. Extensive experimental results
on the typical test dataset validate the effectiveness of the proposed STHDR in
terms of speed and quality.
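The two-stage flow described in the abstract could be sketched roughly as follows. This is a hypothetical illustration only: all function names and the averaging operations are placeholder assumptions standing in for the paper's learned modules (the spatial correct module and the two fusion networks), not the actual STHDR implementation.

```python
import numpy as np

def align_features(ref, non_ref):
    # Placeholder for the spatial correct module (SCM): here we simply
    # blend the non-reference feature toward the reference feature.
    return 0.5 * (ref + non_ref)

def stage1_fuse(aligned_feats):
    # Stage 1: a preliminary fusion of the aligned exposure features.
    return np.mean(aligned_feats, axis=0)

def stage2_fuse(prelim, aligned_feats):
    # Stage 2: conflate the preliminary result with the aligned features
    # to suppress residual artifacts.
    return 0.5 * prelim + 0.5 * np.mean(aligned_feats, axis=0)

def sthdr_sketch(ldr_feats, ref_idx=1):
    # ldr_feats: array of per-exposure feature maps, shape (N, H, W).
    ref = ldr_feats[ref_idx]
    aligned = np.stack([align_features(ref, f) for f in ldr_feats])
    prelim = stage1_fuse(aligned)
    return stage2_fuse(prelim, aligned)
```

The point of the sketch is the data flow: alignment happens once, and the second fusion stage consumes both the stage-one output and the aligned features, rather than re-fusing raw inputs.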
Related papers
- Generating Content for HDR Deghosting from Frequency View [56.103761824603644]
Recent Diffusion Models (DMs) have been introduced in HDR imaging field.
DMs require extensive iterations with large models to estimate entire images.
We propose the Low-Frequency aware Diffusion (LF-Diff) model for ghost-free HDR imaging.
arXiv Detail & Related papers (2024-04-01T01:32:11Z) - PASTA: Towards Flexible and Efficient HDR Imaging Via Progressively Aggregated Spatio-Temporal Alignment [91.38256332633544]
PASTA is a Progressively Aggregated Spatio-Temporal Alignment framework for HDR deghosting.
Our approach achieves effectiveness and efficiency by harnessing hierarchical representation during feature disentanglement.
Experimental results showcase PASTA's superiority over current SOTA methods in both visual quality and performance metrics.
arXiv Detail & Related papers (2024-03-15T15:05:29Z) - Towards High-quality HDR Deghosting with Conditional Diffusion Models [88.83729417524823]
High Dynamic Range (HDR) images can be recovered from several Low Dynamic Range (LDR) images by existing Deep Neural Network (DNN) techniques.
DNNs still generate ghosting artifacts when LDR images have saturation and large motion.
We formulate the HDR deghosting problem as an image generation task that leverages LDR features as the diffusion model's condition.
arXiv Detail & Related papers (2023-11-02T01:53:55Z) - Hybrid-Supervised Dual-Search: Leveraging Automatic Learning for
Loss-free Multi-Exposure Image Fusion [60.221404321514086]
Multi-exposure image fusion (MEF) has emerged as a prominent solution to address the limitations of digital imaging in representing varied exposure levels.
This paper presents a Hybrid-Supervised Dual-Search approach for MEF, dubbed HSDS-MEF, which introduces a bi-level optimization search scheme for automatic design of both network structures and loss functions.
arXiv Detail & Related papers (2023-09-03T08:07:26Z) - High Dynamic Range Imaging of Dynamic Scenes with Saturation
Compensation but without Explicit Motion Compensation [20.911738532410766]
High dynamic range (HDR) imaging is a highly challenging task since a large amount of information is lost due to the limitations of camera sensors.
For HDR imaging, some methods capture multiple low dynamic range (LDR) images with varying exposures to aggregate more information.
Most existing methods focus on motion compensation to reduce the ghosting artifacts, but they still produce unsatisfying results.
arXiv Detail & Related papers (2023-08-22T02:44:03Z) - SMAE: Few-shot Learning for HDR Deghosting with Saturation-Aware Masked
Autoencoders [97.64072440883392]
We propose a novel semi-supervised approach to realize few-shot HDR imaging via two stages of training, called SSHDR.
Unlike previous methods, which recover content and remove ghosts simultaneously and thus struggle to reach an optimum, SSHDR separates these goals across its two training stages.
Experiments demonstrate that SSHDR outperforms state-of-the-art methods quantitatively and qualitatively within and across different datasets.
arXiv Detail & Related papers (2023-04-14T03:42:51Z) - Deep Progressive Feature Aggregation Network for High Dynamic Range
Imaging [24.94466716276423]
We propose a deep progressive feature aggregation network for improving HDR imaging quality in dynamic scenes.
Our method implicitly samples high-correspondence features and aggregates them in a coarse-to-fine manner for alignment.
Experiments show that our proposed method can achieve state-of-the-art performance under different scenes.
arXiv Detail & Related papers (2022-08-04T04:37:35Z) - SJ-HD^2R: Selective Joint High Dynamic Range and Denoising Imaging for
Dynamic Scenes [17.867412310873732]
Ghosting artifacts, motion blur, and low fidelity in highlights are the main challenges in High Dynamic Range (HDR) imaging.
We propose a joint HDR and denoising pipeline, containing two sub-networks.
We create the first joint HDR and denoising benchmark dataset.
arXiv Detail & Related papers (2022-06-20T07:49:56Z) - HDR Reconstruction from Bracketed Exposures and Events [12.565039752529797]
Reconstruction of high-quality HDR images is at the core of modern computational photography.
We present a multi-modal end-to-end learning-based HDR imaging system that fuses bracketed images and events in the feature domain.
Our framework exploits the higher temporal resolution of events by sub-sampling the input event streams using a sliding window.
arXiv Detail & Related papers (2022-03-28T15:04:41Z) - Single-Image HDR Reconstruction by Learning to Reverse the Camera
Pipeline [100.5353614588565]
We propose to incorporate the domain knowledge of the LDR image formation pipeline into our model.
We model the HDR-to-LDR image formation pipeline as (1) dynamic range clipping, (2) non-linear mapping by a camera response function, and (3) quantization.
We demonstrate that the proposed method performs favorably against state-of-the-art single-image HDR reconstruction algorithms.
arXiv Detail & Related papers (2020-04-02T17:59:04Z)
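The three-step HDR-to-LDR formation model from the last entry above can be sketched as a simple forward simulation. The gamma curve used as the camera response function here is an assumption for illustration; real cameras have calibrated, device-specific response curves.

```python
import numpy as np

def hdr_to_ldr(hdr, exposure=1.0, gamma=1.0 / 2.2, bits=8):
    """Simulate LDR capture from a linear HDR radiance map."""
    x = np.clip(hdr * exposure, 0.0, 1.0)  # (1) dynamic range clipping
    x = np.power(x, gamma)                 # (2) non-linear CRF mapping (assumed gamma)
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels   # (3) quantization to 2^bits levels
```

Single-image HDR reconstruction methods of this kind learn to invert each of these three steps in reverse order, recovering lost precision, linearity, and clipped highlights.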
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.