Deep Joint Demosaicing and High Dynamic Range Imaging within a Single
Shot
- URL: http://arxiv.org/abs/2111.07281v1
- Date: Sun, 14 Nov 2021 08:54:26 GMT
- Title: Deep Joint Demosaicing and High Dynamic Range Imaging within a Single
Shot
- Authors: Yilun Xu, Ziyang Liu, Xingming Wu, Weihai Chen, Changyun Wen and
Zhengguo Li
- Abstract summary: It is challenging to restore a full-resolution HDR image from a real-world image with SVE.
A spatially varying convolution (SVC) is designed to process Bayer images carrying varying exposures.
An exposure-guidance method is proposed to suppress interference from over- and under-exposed pixels.
- Score: 30.483754080108444
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spatially varying exposure (SVE) is a promising choice for high-dynamic-range
(HDR) imaging (HDRI). The SVE-based HDRI, which is called single-shot HDRI, is
an efficient solution for avoiding ghosting artifacts. However, it is very
challenging to restore a full-resolution HDR image from a real-world SVE image
because: a) only one-third of the pixel values, each with its own exposure, are
captured by the camera in a Bayer pattern, and b) some of the captured pixels
are over- or under-exposed. For the former challenge, a spatially varying
convolution (SVC) is designed to process Bayer images carrying varying
exposures. For the latter, an exposure-guidance method is proposed to suppress
interference from over- and under-exposed pixels. Finally, a joint demosaicing and HDRI deep
learning framework is formalized to include the two novel components and to
realize an end-to-end single-shot HDRI. Experiments indicate that the proposed
end-to-end framework avoids the problem of cumulative errors and surpasses the
related state-of-the-art methods.
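The two challenges above can be made concrete with a small simulation. The sketch below is illustrative only: it assumes an RGGB Bayer pattern with row-pairs alternating between a short and a long exposure (one common SVE layout; the paper's exact pattern, thresholds, and function names are not specified here), and builds a well-exposed mask in the spirit of the exposure-guidance idea.

```python
import numpy as np

def sve_bayer_capture(hdr_rgb, exposures=(1.0, 4.0), sat=1.0):
    """Simulate a single-shot SVE capture: an RGGB Bayer mosaic whose
    row pairs alternate between a short and a long exposure.
    This layout is an assumption for illustration, not the paper's."""
    h, w, _ = hdr_rgb.shape
    bayer = np.zeros((h, w), dtype=np.float64)
    # Bayer sampling: only one color channel survives at each pixel,
    # which is why only one-third of the values are captured.
    bayer[0::2, 0::2] = hdr_rgb[0::2, 0::2, 0]  # R
    bayer[0::2, 1::2] = hdr_rgb[0::2, 1::2, 1]  # G
    bayer[1::2, 0::2] = hdr_rgb[1::2, 0::2, 1]  # G
    bayer[1::2, 1::2] = hdr_rgb[1::2, 1::2, 2]  # B
    # Spatially varying exposure: alternate row pairs short/long.
    exp_map = np.where((np.arange(h)[:, None] // 2) % 2 == 0,
                       exposures[0], exposures[1])
    exp_map = np.broadcast_to(exp_map, (h, w))
    # Sensor saturation clips the long-exposure highlights.
    raw = np.clip(bayer * exp_map, 0.0, sat)
    return raw, exp_map

def exposure_guidance_mask(raw, lo=0.02, hi=0.98):
    """Binary mask of well-exposed pixels, in the spirit of the paper's
    exposure-guidance idea: over-/under-exposed samples are excluded.
    The thresholds lo/hi are illustrative assumptions."""
    return (raw > lo) & (raw < hi)
```

On a uniform mid-gray scene, the long-exposure rows saturate and are masked out, while the short-exposure rows remain usable, which is exactly the situation the exposure-guidance method is designed to handle.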
Related papers
- High Dynamic Range Novel View Synthesis with Single Exposure [43.50001955428593]
High Dynamic Range Novel View Synthesis (HDR-NVS) aims to establish a 3D HDR scene model from Low Dynamic Range (LDR) imagery. For the first time, only single-exposure LDR images are available during training.
arXiv Detail & Related papers (2025-05-02T12:04:38Z) - HDRT: Infrared Capture for HDR Imaging [8.208995723545502]
We propose a new approach, High Dynamic Range Thermal (HDRT), for HDR acquisition using a separate, commonly available, thermal infrared (IR) sensor.
We propose a novel deep neural method (HDRTNet) which combines IR and SDR content to generate HDR images.
We show substantial quantitative and qualitative improvements on both over- and under-exposed images, demonstrating that our approach is robust across a variety of lighting conditions.
arXiv Detail & Related papers (2024-06-08T13:43:44Z) - Event-based Asynchronous HDR Imaging by Temporal Incident Light Modulation [54.64335350932855]
We propose a Pixel-Asynchronous HDR imaging system, based on key insights into the challenges in HDR imaging.
Our proposed Asyn system integrates the Dynamic Vision Sensors (DVS) with a set of LCD panels.
The LCD panels modulate the irradiance incident upon the DVS by altering their transparency, thereby triggering the pixel-independent event streams.
arXiv Detail & Related papers (2024-03-14T13:45:09Z) - Towards High-quality HDR Deghosting with Conditional Diffusion Models [88.83729417524823]
High Dynamic Range (HDR) images can be recovered from several Low Dynamic Range (LDR) images by existing Deep Neural Network (DNN) techniques.
DNNs still generate ghosting artifacts when the LDR images contain saturation and large motion.
We formulate the HDR deghosting problem as an image generation task that leverages LDR features as the diffusion model's condition.
arXiv Detail & Related papers (2023-11-02T01:53:55Z) - Self-Supervised High Dynamic Range Imaging with Multi-Exposure Images in
Dynamic Scenes [58.66427721308464]
Self is a self-supervised reconstruction method that only requires dynamic multi-exposure images during training.
Self achieves superior results over state-of-the-art self-supervised methods and comparable performance to supervised ones.
arXiv Detail & Related papers (2023-10-03T07:10:49Z) - SMAE: Few-shot Learning for HDR Deghosting with Saturation-Aware Masked
Autoencoders [97.64072440883392]
We propose a novel semi-supervised approach to realize few-shot HDR imaging via two stages of training, called SSHDR.
Unlike previous methods, which directly recover content and remove ghosts simultaneously and thus struggle to reach an optimum, SSHDR handles the two tasks in separate training stages.
Experiments demonstrate that SSHDR outperforms state-of-the-art methods quantitatively and qualitatively within and across different datasets.
arXiv Detail & Related papers (2023-04-14T03:42:51Z) - HDR Imaging with Spatially Varying Signal-to-Noise Ratios [15.525314212209564]
For low-light HDR imaging, the noise within one exposure is spatially varying.
Existing image denoising algorithms and HDR fusion algorithms both fail to handle this situation.
We propose a new method called the spatially varying high dynamic range (SV-) fusion network to simultaneously denoise and fuse images.
arXiv Detail & Related papers (2023-03-30T09:32:29Z) - SJ-HD^2R: Selective Joint High Dynamic Range and Denoising Imaging for
Dynamic Scenes [17.867412310873732]
Ghosting artifacts, motion blur, and low fidelity in highlights are the main challenges in High Dynamic Range (HDR) imaging.
We propose a joint HDR and denoising pipeline, containing two sub-networks.
We create the first joint HDR and denoising benchmark dataset.
arXiv Detail & Related papers (2022-06-20T07:49:56Z) - FlexHDR: Modelling Alignment and Exposure Uncertainties for Flexible HDR
Imaging [0.9185931275245008]
We present a new HDR imaging technique that models alignment and exposure uncertainties to produce high quality HDR results.
We introduce a strategy that learns to jointly align and assess the alignment and exposure reliability using an HDR-aware, uncertainty-driven attention map.
Experimental results show our method produces better-quality HDR images, with up to a 0.8 dB PSNR improvement over the state-of-the-art.
arXiv Detail & Related papers (2022-01-07T14:27:17Z) - A Two-stage Deep Network for High Dynamic Range Image Reconstruction [0.883717274344425]
This study tackles the challenges of single-shot LDR to HDR mapping by proposing a novel two-stage deep network.
Notably, our proposed method aims to reconstruct an HDR image without knowing hardware information, including camera response function (CRF) and exposure settings.
arXiv Detail & Related papers (2021-04-19T15:19:17Z) - HDR-GAN: HDR Image Reconstruction from Multi-Exposed LDR Images with
Large Motions [62.44802076971331]
We propose a novel GAN-based model, HDR-GAN, for synthesizing HDR images from multi-exposed LDR images.
By incorporating adversarial learning, our method is able to produce faithful information in the regions with missing content.
arXiv Detail & Related papers (2020-07-03T11:42:35Z) - Single-Image HDR Reconstruction by Learning to Reverse the Camera
Pipeline [100.5353614588565]
We propose to incorporate the domain knowledge of the LDR image formation pipeline into our model.
We model the HDR-to-LDR image formation pipeline as (1) dynamic range clipping, (2) non-linear mapping from a camera response function, and (3) quantization.
We demonstrate that the proposed method performs favorably against state-of-the-art single-image HDR reconstruction algorithms.
arXiv Detail & Related papers (2020-04-02T17:59:04Z)
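The three-stage HDR-to-LDR formation model described in the last entry can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: a simple gamma curve stands in for the true camera response function, and the parameter names are assumptions.

```python
import numpy as np

def hdr_to_ldr(hdr, exposure=1.0, gamma=2.2, bits=8):
    """Sketch of the three-stage HDR-to-LDR formation model:
    (1) dynamic range clipping, (2) a non-linear camera response
    (a gamma curve stands in for the true CRF, which is unknown),
    (3) quantization to a fixed bit depth."""
    x = np.clip(hdr * exposure, 0.0, 1.0)   # (1) dynamic range clipping
    x = np.power(x, 1.0 / gamma)            # (2) non-linear CRF (assumed gamma)
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels    # (3) quantization
```

Single-image HDR reconstruction then amounts to learning to invert these three stages, which is ill-posed because clipping and quantization both discard information.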
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.