Self-Supervised High Dynamic Range Imaging with Multi-Exposure Images in
Dynamic Scenes
- URL: http://arxiv.org/abs/2310.01840v2
- Date: Wed, 28 Feb 2024 18:45:20 GMT
- Title: Self-Supervised High Dynamic Range Imaging with Multi-Exposure Images in
Dynamic Scenes
- Authors: Zhilu Zhang, Haoyu Wang, Shuai Liu, Xiaotao Wang, Lei Lei, Wangmeng
Zuo
- Abstract summary: SelfHDR is a self-supervised reconstruction method that only requires dynamic multi-exposure images during training.
SelfHDR achieves superior results against state-of-the-art self-supervised methods and comparable performance to supervised ones.
- Score: 58.66427721308464
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Merging multi-exposure images is a common approach for obtaining high dynamic
range (HDR) images, with the primary challenge being the avoidance of ghosting
artifacts in dynamic scenes. Recent methods have proposed using deep neural
networks for deghosting. However, the methods typically rely on sufficient data
with HDR ground-truths, which are difficult and costly to collect. In this
work, to eliminate the need for labeled data, we propose SelfHDR, a
self-supervised HDR reconstruction method that only requires dynamic
multi-exposure images during training. Specifically, SelfHDR learns a
reconstruction network under the supervision of two complementary components,
which can be constructed from multi-exposure images and focus on HDR color as
well as structure, respectively. The color component is estimated from aligned
multi-exposure images, while the structure one is generated through a
structure-focused network that is supervised by the color component and an
input reference (e.g., medium-exposure) image. During testing, the learned
reconstruction network is directly deployed to predict an HDR image.
Experiments on real-world images demonstrate that our SelfHDR achieves superior
results against state-of-the-art self-supervised methods and comparable
performance to supervised ones. Codes are available at
https://github.com/cszhilu1998/SelfHDR
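A minimal sketch of the supervision scheme described in the abstract, assuming a PyTorch setup: a reconstruction network is trained against two complementary targets built from the multi-exposure inputs, a color component (merged from the aligned exposures; its construction is omitted here) and a structure component produced by a structure-focused network that is itself supervised by the color component and the medium-exposure reference. The network architectures, loss weighting, and names below are illustrative assumptions, not the authors' implementation (see the GitHub link above for the official code).

import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Placeholder backbone; the networks in the paper are more elaborate."""
    def __init__(self, in_ch, out_ch=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

recon_net = TinyUNet(in_ch=9)   # fed the concatenated short/medium/long exposures
struct_net = TinyUNet(in_ch=9)  # structure-focused network (architecture assumed)
optimizer = torch.optim.Adam(
    list(recon_net.parameters()) + list(struct_net.parameters()), lr=1e-4)
l1 = nn.L1Loss()

def training_step(short, medium, long_, color_target):
    """color_target: the HDR color component estimated from the aligned
    multi-exposure images (e.g., a weighted merge); not reproduced here."""
    stack = torch.cat([short, medium, long_], dim=1)

    # Structure component: output of the structure-focused network,
    # supervised by the color component and the medium-exposure reference
    # (equal weighting of the two terms is an assumption).
    struct_target = struct_net(stack)
    loss_struct = l1(struct_target, color_target) + l1(struct_target, medium)

    # Reconstruction network: supervised by both complementary components;
    # the structure target is detached so it acts as a fixed label here.
    pred = recon_net(stack)
    loss_recon = l1(pred, color_target) + l1(pred, struct_target.detach())

    loss = loss_recon + loss_struct
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

At test time only recon_net is kept and applied directly to a new multi-exposure stack, matching the deployment described in the abstract.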
Related papers
- HDRT: Infrared Capture for HDR Imaging [8.208995723545502]
We propose a new approach, High Dynamic Range Thermal (HDRT), for HDR acquisition using a separate, commonly available, thermal infrared (IR) sensor.
We propose a novel deep neural method (HDRTNet) which combines IR and SDR content to generate HDR images.
We show substantial quantitative and qualitative improvements on both over- and under-exposed images, demonstrating that our approach is robust across a wide range of lighting conditions.
arXiv Detail & Related papers (2024-06-08T13:43:44Z) - SMAE: Few-shot Learning for HDR Deghosting with Saturation-Aware Masked
Autoencoders [97.64072440883392]
We propose SSHDR, a novel semi-supervised approach that realizes few-shot HDR imaging via two stages of training.
Unlike previous methods, which directly recover content and remove ghosts simultaneously and therefore struggle to reach an optimum, SSHDR handles these goals in separate stages.
Experiments demonstrate that SSHDR outperforms state-of-the-art methods quantitatively and qualitatively within and across different datasets.
arXiv Detail & Related papers (2023-04-14T03:42:51Z) - GlowGAN: Unsupervised Learning of HDR Images from LDR Images in the Wild [74.52723408793648]
We present the first method for learning a generative model of HDR images from in-the-wild LDR image collections in a fully unsupervised manner.
The key idea is to train a generative adversarial network (GAN) to generate HDR images which, when projected to LDR under various exposures, are indistinguishable from real LDR images.
Experiments show that our method GlowGAN can synthesize photorealistic HDR images in many challenging cases such as landscapes, lightning, or windows.
arXiv Detail & Related papers (2022-11-22T15:42:08Z) - Single-Image HDR Reconstruction by Multi-Exposure Generation [8.656080193351581]
We propose a weakly supervised learning method that inverts the physical image formation process for HDR reconstruction.
Our neural network can invert the camera response to reconstruct pixel irradiance before synthesizing multiple exposures.
Our experiments show that our proposed model can effectively reconstruct HDR images.
arXiv Detail & Related papers (2022-10-28T05:12:56Z) - Self-supervised HDR Imaging from Motion and Exposure Cues [14.57046548797279]
We propose a novel self-supervised approach for learnable HDR estimation that alleviates the need for HDR ground-truth labels.
Experimental results show that the HDR models trained using our proposed self-supervision approach achieve performance competitive with those trained under full supervision.
arXiv Detail & Related papers (2022-03-23T10:22:03Z) - HDR-cGAN: Single LDR to HDR Image Translation using Conditional GAN [24.299931323012757]
Low Dynamic Range (LDR) cameras are incapable of representing the wide dynamic range of the real-world scene.
We propose a deep learning based approach to recover details in the saturated areas while reconstructing the HDR image.
We present a novel conditional GAN (cGAN) based framework trained in an end-to-end fashion over the HDR-REAL and HDR-SYNTH datasets.
arXiv Detail & Related papers (2021-10-04T18:50:35Z) - Beyond Visual Attractiveness: Physically Plausible Single Image HDR
Reconstruction for Spherical Panoramas [60.24132321381606]
We introduce physical illuminance constraints into our single-shot HDR reconstruction framework.
Our method can generate HDRs which are not only visually appealing but also physically plausible.
arXiv Detail & Related papers (2021-03-24T01:51:19Z) - HDR-GAN: HDR Image Reconstruction from Multi-Exposed LDR Images with
Large Motions [62.44802076971331]
We propose a novel GAN-based model, HDR-GAN, for synthesizing HDR images from multi-exposed LDR images.
By incorporating adversarial learning, our method is able to produce faithful information in the regions with missing content.
arXiv Detail & Related papers (2020-07-03T11:42:35Z) - End-to-End Differentiable Learning to HDR Image Synthesis for
Multi-exposure Images [23.895981099137533]
High dynamic range (HDR) image reconstruction based on a multi-exposure stack generated from a single exposure utilizes a deep learning framework to produce high-quality HDR images.
We tackle the problem in stack-reconstruction-based methods by proposing a novel framework with a fully differentiable high dynamic range imaging (HDRI) process.
In other words, our differentiable HDR synthesis layer helps the deep neural network learn to create multi-exposure stacks while reflecting the precise correlations between multi-exposure images in the HDRI process.
arXiv Detail & Related papers (2020-06-29T06:47:07Z) - Single-Image HDR Reconstruction by Learning to Reverse the Camera
Pipeline [100.5353614588565]
We propose to incorporate the domain knowledge of the LDR image formation pipeline into our model.
We model the HDR-to-LDR image formation pipeline as (1) dynamic range clipping, (2) non-linear mapping from a camera response function, and (3) quantization (a toy sketch of this formation model follows the list).
We demonstrate that the proposed method performs favorably against state-of-the-art single-image HDR reconstruction algorithms.
arXiv Detail & Related papers (2020-04-02T17:59:04Z)
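The last entry above models LDR capture as dynamic range clipping, a non-linear camera response function, and quantization. The toy sketch below instantiates that forward model under simple assumptions (a gamma curve as the response function, 8-bit quantization); the parameter values and function names are illustrative and do not come from the paper.

import numpy as np

def hdr_to_ldr(irradiance, exposure=1.0, gamma=2.2, bits=8):
    """Simulate an LDR capture from linear scene irradiance (H x W x 3)."""
    exposed = irradiance * exposure              # scale by exposure time / gain
    clipped = np.clip(exposed, 0.0, 1.0)         # (1) dynamic range clipping
    response = clipped ** (1.0 / gamma)          # (2) camera response (gamma assumed)
    levels = 2 ** bits - 1
    return np.round(response * levels) / levels  # (3) quantization

# Example: the same synthetic scene captured at three exposures.
hdr = np.random.rand(64, 64, 3) * 4.0            # linear radiance exceeding the clip point
ldr_stack = [hdr_to_ldr(hdr, exposure=e) for e in (0.25, 1.0, 4.0)]

Reversing these steps (dequantization, inverting the camera response, and recovering clipped content) is what the learning-based single-image methods listed above aim to achieve.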