SJ-HD^2R: Selective Joint High Dynamic Range and Denoising Imaging for
Dynamic Scenes
- URL: http://arxiv.org/abs/2206.09611v1
- Date: Mon, 20 Jun 2022 07:49:56 GMT
- Title: SJ-HD^2R: Selective Joint High Dynamic Range and Denoising Imaging for
Dynamic Scenes
- Authors: Wei Li, Shuai Xiao, Tianhong Dai, Shanxin Yuan, Tao Wang, Cheng Li,
Fenglong Song
- Abstract summary: Ghosting artifacts, motion blur, and low fidelity in highlights are the main challenges in High Dynamic Range (HDR) imaging.
We propose a joint HDR and denoising pipeline, containing two sub-networks.
We create the first joint HDR and denoising benchmark dataset.
- Score: 17.867412310873732
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Ghosting artifacts, motion blur, and low fidelity in highlights are the main
challenges in High Dynamic Range (HDR) imaging from multiple Low Dynamic Range
(LDR) images. These issues come from using the medium-exposed image as the
reference frame in previous methods. To deal with them, we propose to use the
under-exposed image as the reference to avoid these issues. However, the heavy
noise in dark regions of the under-exposed image becomes a new problem.
Therefore, we propose a joint HDR and denoising pipeline, containing two
sub-networks: (i) a pre-denoising network (PreDNNet) to adaptively denoise
input LDRs by exploiting exposure priors; (ii) a pyramid cascading fusion
network (PCFNet), introducing an attention mechanism and cascading structure in
a multi-scale manner. To further leverage these two paradigms, we propose a
selective and joint HDR and denoising (SJ-HD$^2$R) imaging framework, utilizing
scenario-specific priors to conduct the path selection with an accuracy of more
than 93.3$\%$. We create the first joint HDR and denoising benchmark dataset,
which contains a variety of challenging HDR and denoising scenes and supports
the switching of the reference image. Extensive experimental results show that
our method achieves superior performance to previous methods.
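To make the selective pipeline described in the abstract concrete, the sketch below shows how the two sub-networks and the prior-based path selection could be wired together. This is a minimal, non-authoritative sketch assuming a PyTorch-style interface; the module interfaces and the brightness-based `select_path` heuristic are placeholders inferred from the abstract, not the authors' released code.

```python
# Hypothetical sketch of the SJ-HD^2R selection logic described in the abstract.
# The PreDNNet/PCFNet interfaces and the brightness prior are assumptions,
# not the authors' implementation.
import torch
import torch.nn as nn

class SJHD2R(nn.Module):
    def __init__(self, pre_dn_net: nn.Module, pcf_net: nn.Module):
        super().__init__()
        self.pre_dn_net = pre_dn_net  # adaptively denoises input LDRs using exposure priors
        self.pcf_net = pcf_net        # pyramid cascading fusion with attention, multi-scale

    def select_path(self, ldrs: torch.Tensor, exposures: torch.Tensor) -> bool:
        # Scenario-specific prior (placeholder): if the under-exposed reference
        # frame is very dark, assume heavy noise and take the denoise-then-fuse path.
        reference = ldrs[exposures.argmin()]
        return reference.mean().item() < 0.05

    def forward(self, ldrs: torch.Tensor, exposures: torch.Tensor) -> torch.Tensor:
        # ldrs: (N, C, H, W) stack of LDR frames; the under-exposed frame is the reference.
        if self.select_path(ldrs, exposures):
            ldrs = self.pre_dn_net(ldrs, exposures)  # (i) pre-denoising network
        return self.pcf_net(ldrs, exposures)         # (ii) fuse the LDRs into the HDR output
```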
Related papers
- Retinex-RAWMamba: Bridging Demosaicing and Denoising for Low-Light RAW Image Enhancement [71.13353154514418]
Low-light image enhancement, particularly in cross-domain tasks such as mapping from the raw domain to the sRGB domain, remains a significant challenge.
We present a novel Mamba scanning mechanism, called RAWMamba, to effectively handle raw images with different CFAs.
We also present a Retinex Decomposition Module (RDM) grounded in Retinex prior, which decouples illumination from reflectance to facilitate more effective denoising and automatic non-linear exposure correction.
arXiv Detail & Related papers (2024-09-11T06:12:03Z)
- Generating Content for HDR Deghosting from Frequency View [56.103761824603644]
Recent Diffusion Models (DMs) have been introduced in HDR imaging field.
DMs require extensive iterations with large models to estimate entire images.
We propose the Low-Frequency aware Diffusion (LF-Diff) model for ghost-free HDR imaging.
arXiv Detail & Related papers (2024-04-01T01:32:11Z)
- Event-based Asynchronous HDR Imaging by Temporal Incident Light Modulation [54.64335350932855]
We propose a Pixel-Asynchronous HDR imaging system, based on key insights into the challenges in HDR imaging.
Our proposed Asyn system integrates the Dynamic Vision Sensors (DVS) with a set of LCD panels.
The LCD panels modulate the irradiance incident upon the DVS by altering their transparency, thereby triggering the pixel-independent event streams.
arXiv Detail & Related papers (2024-03-14T13:45:09Z)
- Towards High-quality HDR Deghosting with Conditional Diffusion Models [88.83729417524823]
High Dynamic Range (HDR) images can be recovered from several Low Dynamic Range (LDR) images by existing Deep Neural Network (DNN) techniques.
DNNs still generate ghosting artifacts when LDR images have saturation and large motion.
We formulate the HDR deghosting problem as an image generation task that leverages LDR features as the diffusion model's condition.
arXiv Detail & Related papers (2023-11-02T01:53:55Z)
- SMAE: Few-shot Learning for HDR Deghosting with Saturation-Aware Masked Autoencoders [97.64072440883392]
We propose SSHDR, a novel semi-supervised approach that realizes few-shot HDR imaging via two stages of training.
Unlike previous methods, which directly recover content and remove ghosts simultaneously and therefore struggle to reach an optimum, SSHDR separates these goals across its two training stages.
Experiments demonstrate that SSHDR outperforms state-of-the-art methods quantitatively and qualitatively within and across different datasets.
arXiv Detail & Related papers (2023-04-14T03:42:51Z)
- HDR Imaging with Spatially Varying Signal-to-Noise Ratios [15.525314212209564]
For low-light HDR imaging, the noise within one exposure is spatially varying.
Existing image denoising algorithms and HDR fusion algorithms both fail to handle this situation.
We propose a new method, the spatially varying high dynamic range (SV-HDR) fusion network, to simultaneously denoise and fuse images.
arXiv Detail & Related papers (2023-03-30T09:32:29Z)
- Deep Progressive Feature Aggregation Network for High Dynamic Range Imaging [24.94466716276423]
We propose a deep progressive feature aggregation network for improving HDR imaging quality in dynamic scenes.
Our method implicitly samples high-correspondence features and aggregates them in a coarse-to-fine manner for alignment.
Experiments show that our proposed method can achieve state-of-the-art performance under different scenes.
arXiv Detail & Related papers (2022-08-04T04:37:35Z)
- HDRUNet: Single Image HDR Reconstruction with Denoising and Dequantization [39.82945546614887]
We propose a novel learning-based approach using a spatially dynamic encoder-decoder network, HDRUNet, to learn an end-to-end mapping for single image HDR reconstruction.
Our method achieves the state-of-the-art performance in quantitative comparisons and visual quality.
arXiv Detail & Related papers (2021-05-27T12:12:34Z)
- ADNet: Attention-guided Deformable Convolutional Network for High Dynamic Range Imaging [21.237888314569815]
We present an attention-guided deformable convolutional network for hand-held multi-frame high dynamic range (HDR) imaging, namely ADNet.
This problem comprises two intractable challenges of how to handle saturation and noise properly and how to tackle misalignments caused by object motion or camera jittering.
The proposed ADNet shows state-of-the-art performance compared with previous methods, achieving a PSNR-$l$ of 39.4471 and a PSNR-$\mu$ of 37.6359 in NTIRE 2021 Multi-Frame HDR Challenge.
arXiv Detail & Related papers (2021-05-22T11:37:09Z)
- A Two-stage Deep Network for High Dynamic Range Image Reconstruction [0.883717274344425]
This study tackles the challenges of single-shot LDR to HDR mapping by proposing a novel two-stage deep network.
Notably, our proposed method aims to reconstruct an HDR image without knowing hardware information, including camera response function (CRF) and exposure settings.
arXiv Detail & Related papers (2021-04-19T15:19:17Z)
- Single-Image HDR Reconstruction by Learning to Reverse the Camera Pipeline [100.5353614588565]
We propose to incorporate the domain knowledge of the LDR image formation pipeline into our model.
We model the HDR-to-LDR image formation pipeline as (1) dynamic range clipping, (2) non-linear mapping from a camera response function, and (3) quantization (see the sketch below).
We demonstrate that the proposed method performs favorably against state-of-the-art single-image HDR reconstruction algorithms.
arXiv Detail & Related papers (2020-04-02T17:59:04Z)
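The three-stage HDR-to-LDR formation model summarized in the last entry (dynamic range clipping, a non-linear camera response function, and quantization) can be simulated in a few lines; the gamma-style CRF and the 8-bit depth below are illustrative assumptions rather than the paper's exact parameterization.

```python
# Illustrative forward simulation of the HDR-to-LDR formation pipeline named above:
# (1) dynamic range clipping, (2) non-linear camera response function, (3) quantization.
# The gamma CRF and 8-bit depth are assumptions for demonstration only.
import numpy as np

def hdr_to_ldr(hdr: np.ndarray, exposure: float = 1.0,
               gamma: float = 1.0 / 2.2, bit_depth: int = 8) -> np.ndarray:
    irradiance = hdr * exposure                    # scale scene radiance by the exposure
    clipped = np.clip(irradiance, 0.0, 1.0)        # (1) dynamic range clipping
    responded = np.power(clipped, gamma)           # (2) non-linear CRF (gamma curve here)
    levels = 2 ** bit_depth - 1
    return np.round(responded * levels) / levels   # (3) quantization to discrete levels

# Single-image HDR reconstruction, as the paper frames it, amounts to inverting these
# steps in reverse order: dequantize, invert the CRF, and recover the clipped highlights.
```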