Online Overexposed Pixels Hallucination in Videos with Adaptive
Reference Frame Selection
- URL: http://arxiv.org/abs/2308.15462v1
- Date: Tue, 29 Aug 2023 17:40:57 GMT
- Title: Online Overexposed Pixels Hallucination in Videos with Adaptive
Reference Frame Selection
- Authors: Yazhou Xing, Amrita Mazumdar, Anjul Patney, Chao Liu, Hongxu Yin,
Qifeng Chen, Jan Kautz, Iuri Frosio
- Abstract summary: Low dynamic range (LDR) cameras cannot deal with wide dynamic range inputs, frequently leading to local overexposure issues.
We present a learning-based system to reduce these artifacts without resorting to complex processing mechanisms.
- Score: 90.35085487641773
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Low dynamic range (LDR) cameras cannot deal with wide dynamic range inputs,
frequently leading to local overexposure issues. We present a learning-based
system to reduce these artifacts without resorting to complex acquisition
mechanisms like alternating exposures or costly processing that are typical of
high dynamic range (HDR) imaging. We propose a transformer-based deep neural
network (DNN) to infer the missing HDR details. In an ablation study, we show
the importance of using a multiscale DNN and train it with the proper cost
function to achieve state-of-the-art quality. To aid the reconstruction of the
overexposed areas, our DNN takes a reference frame from the past as an
additional input. This leverages the commonly occurring temporal instabilities
of autoexposure to our advantage: since well-exposed details in the current
frame may be overexposed in the future, we use reinforcement learning to train
a reference frame selection DNN that decides whether to adopt the current frame
as a future reference. Without resorting to alternating exposures, we therefore
obtain a causal HDR hallucination algorithm with potential application in
common video acquisition settings. Our demo video can be found at
https://drive.google.com/file/d/1-r12BKImLOYCLUoPzdebnMyNjJ4Rk360/view
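The causal pipeline described in the abstract can be sketched in a few lines. Everything below (function names, the policy interface, the fallback to the current frame when no reference exists yet) is an illustrative assumption, not code from the paper:

```python
# Illustrative sketch of the paper's causal pipeline: a hallucination DNN
# inpaints overexposed pixels using a past reference frame, and a selection
# policy decides whether the current frame should replace that reference.
# All names here are assumptions for illustration only.

def process_stream(frames, hallucinate, select_policy):
    """hallucinate(frame, reference) -> reconstructed HDR frame.
    select_policy(frame, reference) -> True if the current frame should
    become the reference for future frames."""
    reference = None
    outputs = []
    for frame in frames:
        # Causal: only the current frame and a single past reference are used,
        # so the loop can run online on a live video stream.
        hdr = hallucinate(frame, reference if reference is not None else frame)
        outputs.append(hdr)
        if reference is None or select_policy(frame, reference):
            reference = frame  # adopt the current frame as the future reference
    return outputs
```

In the paper, the selection policy is itself a DNN trained with reinforcement learning, so that frames whose well-exposed details help later reconstructions are the ones kept as references.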
Related papers
- Exposure Completing for Temporally Consistent Neural High Dynamic Range Video Rendering [17.430726543786943]
We propose a novel paradigm to render HDR frames via completing the absent exposure information.
Our approach involves interpolating neighbor LDR frames in the time dimension to reconstruct LDR frames for the absent exposures.
This benefits the HDR fusion process, reducing noise and ghosting artifacts and therefore improving temporal consistency.
arXiv Detail & Related papers (2024-07-18T09:13:08Z)
- Pano-NeRF: Synthesizing High Dynamic Range Novel Views with Geometry from Sparse Low Dynamic Range Panoramic Images [82.1477261107279]
We propose the irradiance fields from sparse LDR panoramic images to increase the observation counts for faithful geometry recovery.
Experiments demonstrate that the irradiance fields outperform state-of-the-art methods on both geometry recovery and HDR reconstruction.
arXiv Detail & Related papers (2023-12-26T08:10:22Z)
- Self-Supervised High Dynamic Range Imaging with Multi-Exposure Images in Dynamic Scenes [58.66427721308464]
Self is a self-supervised reconstruction method that only requires dynamic multi-exposure images during training.
Self achieves superior results against the state-of-the-art self-supervised methods, and comparable performance to supervised ones.
arXiv Detail & Related papers (2023-10-03T07:10:49Z)
- Self-Supervised Scene Dynamic Recovery from Rolling Shutter Images and Events [63.984927609545856]
Event-based Inter/intra-frame Compensator (E-IC) is proposed to predict the per-pixel dynamic between arbitrary time intervals.
We show that the proposed method achieves state-of-the-art and shows remarkable performance for event-based RS2GS inversion in real-world scenarios.
arXiv Detail & Related papers (2023-04-14T05:30:02Z)
- SMAE: Few-shot Learning for HDR Deghosting with Saturation-Aware Masked Autoencoders [97.64072440883392]
We propose a novel semi-supervised approach to realize few-shot HDR imaging via two stages of training, called SSHDR.
Unlike previous methods, which directly recover content and remove ghosts simultaneously, a combination that is hard to optimize.
Experiments demonstrate that SSHDR outperforms state-of-the-art methods quantitatively and qualitatively within and across different datasets.
arXiv Detail & Related papers (2023-04-14T03:42:51Z)
- HDR Imaging with Spatially Varying Signal-to-Noise Ratios [15.525314212209564]
For low-light HDR imaging, the noise within one exposure is spatially varying.
Existing image denoising algorithms and HDR fusion algorithms both fail to handle this situation.
We propose a new method called the spatially varying high dynamic range (SV-) fusion network to simultaneously denoise and fuse images.
arXiv Detail & Related papers (2023-03-30T09:32:29Z)
- Multi-Exposure HDR Composition by Gated Swin Transformer [8.619880437958525]
This paper provides a novel multi-exposure fusion model based on Swin Transformer.
We exploit the long distance contextual dependency in the exposure-space pyramid by the self-attention mechanism.
Experiments show that our model achieves accuracy on par with current top-performing multi-exposure HDR imaging models.
arXiv Detail & Related papers (2023-03-15T15:38:43Z)
- GlowGAN: Unsupervised Learning of HDR Images from LDR Images in the Wild [74.52723408793648]
We present the first method for learning a generative model of HDR images from in-the-wild LDR image collections in a fully unsupervised manner.
The key idea is to train a generative adversarial network (GAN) to generate HDR images which, when projected to LDR under various exposures, are indistinguishable from real LDR images.
Experiments show that our method GlowGAN can synthesize photorealistic HDR images in many challenging cases such as landscapes, lightning, or windows.
arXiv Detail & Related papers (2022-11-22T15:42:08Z)
- Self-supervised HDR Imaging from Motion and Exposure Cues [14.57046548797279]
We propose a novel self-supervised approach for learnable HDR estimation that alleviates the need for HDR ground-truth labels.
Experimental results show that the HDR models trained using our proposed self-supervision approach achieve performance competitive with those trained under full supervision.
arXiv Detail & Related papers (2022-03-23T10:22:03Z)
- HDRVideo-GAN: Deep Generative HDR Video Reconstruction [19.837271879354184]
We propose an end-to-end GAN-based framework for HDR video reconstruction from LDR sequences with alternating exposures.
We first extract clean LDR frames from noisy LDR video with alternating exposures with a denoising network trained in a self-supervised setting.
We then align the neighboring alternating-exposure frames to a reference frame and then reconstruct high-quality HDR frames in a complete adversarial setting.
arXiv Detail & Related papers (2021-10-22T14:02:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences of its use.