LAN-HDR: Luminance-based Alignment Network for High Dynamic Range Video
Reconstruction
- URL: http://arxiv.org/abs/2308.11116v1
- Date: Tue, 22 Aug 2023 01:43:00 GMT
- Title: LAN-HDR: Luminance-based Alignment Network for High Dynamic Range Video
Reconstruction
- Authors: Haesoo Chung and Nam Ik Cho
- Abstract summary: We propose an end-to-end HDR video composition framework, which aligns LDR frames in feature space and then merges aligned features into an HDR frame.
In training, we adopt a temporal loss, in addition to frame reconstruction losses, to enhance temporal consistency and thus reduce flickering.
- Score: 20.911738532410766
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: As demands for high-quality videos continue to rise, high-resolution and
high-dynamic range (HDR) imaging techniques are drawing attention. To generate
an HDR video from low dynamic range (LDR) images, one of the critical steps is
the motion compensation between LDR frames, for which most existing works
employed the optical flow algorithm. However, these methods suffer from flow
estimation errors when saturation or complicated motions exist. In this paper,
we propose an end-to-end HDR video composition framework, which aligns LDR
frames in the feature space and then merges aligned features into an HDR frame,
without relying on pixel-domain optical flow. Specifically, we propose a
luminance-based alignment network for HDR (LAN-HDR) consisting of an alignment
module and a hallucination module. The alignment module aligns a frame to the
adjacent reference by evaluating luminance-based attention, excluding color
information. The hallucination module generates sharp details, especially for
washed-out areas due to saturation. The aligned and hallucinated features are
then blended adaptively to complement each other. Finally, we merge the
features to generate a final HDR frame. In training, we adopt a temporal loss,
in addition to frame reconstruction losses, to enhance temporal consistency and
thus reduce flickering. Extensive experiments demonstrate that our method
performs better than or comparably to state-of-the-art methods on several
benchmarks.
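The luminance-based attention idea in the alignment module can be illustrated with a toy, single-pixel sketch (a hypothetical simplification in plain Python, not the authors' implementation; the `temperature` parameter is an assumption): attention weights are computed from luminance similarity only, discarding color, and the neighbor frame's candidates are blended accordingly.

```python
import math

def luminance(rgb):
    """Rec. 709 luma from an (R, G, B) triple; color is discarded for matching."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def align_by_luminance_attention(ref_pixel, neighbor_pixels, temperature=0.1):
    """Toy softmax attention over luminance similarity only.

    ref_pixel: (R, G, B) of the reference frame.
    neighbor_pixels: list of (R, G, B) candidates from the adjacent frame.
    Returns (blended_pixel, attention_weights).
    """
    ref_y = luminance(ref_pixel)
    # Similarity = negative absolute luminance difference (color-agnostic).
    scores = [-abs(luminance(p) - ref_y) / temperature for p in neighbor_pixels]
    m = max(scores)  # subtract max for numerical stability
    weights = [math.exp(s - m) for s in scores]
    z = sum(weights)
    weights = [w / z for w in weights]
    # Attention-weighted blend of the neighbor candidates, channel by channel.
    blended = tuple(sum(w * p[c] for w, p in zip(weights, neighbor_pixels))
                    for c in range(3))
    return blended, weights
```

For example, matching a gray reference pixel against a gray candidate and a saturated red candidate assigns the larger weight to the gray one, since only luminance (not hue) drives the similarity.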
Related papers
- Exposure Completing for Temporally Consistent Neural High Dynamic Range Video Rendering [17.430726543786943]
We propose a novel paradigm to render HDR frames via completing the absent exposure information.
Our approach involves interpolating neighbor LDR frames in the time dimension to reconstruct LDR frames for the absent exposures.
This benefits the fusion process for HDR results, reducing noise and ghosting artifacts and therefore improving temporal consistency.
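The interpolation step described above can be sketched with a naive linear blend (a hedged stand-in for the learned video-frame interpolator; real inputs are images, represented here as nested lists):

```python
def interpolate_ldr(frame_prev, frame_next, t):
    """Linearly blend two same-exposure neighbor frames at time fraction t in [0, 1].

    frame_prev, frame_next: nested lists of pixel intensities with the same shape.
    A learned video-frame interpolation network would replace this blend in practice.
    """
    return [[(1 - t) * p + t * n for p, n in zip(row_p, row_n)]
            for row_p, row_n in zip(frame_prev, frame_next)]
```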
arXiv Detail & Related papers (2024-07-18T09:13:08Z)
- Diffusion-Promoted HDR Video Reconstruction [45.73396977607666]
High dynamic range (HDR) video reconstruction aims to generate HDR videos from low dynamic range (LDR) frames captured with alternating exposures.
Most existing works solely rely on the regression-based paradigm, leading to adverse effects such as ghosting artifacts and missing details in saturated regions.
We propose a diffusion-promoted method for HDR video reconstruction, termed HDR-V-Diff, which incorporates a diffusion model to capture the HDR distribution.
arXiv Detail & Related papers (2024-06-12T13:38:10Z)
- HDR-GS: Efficient High Dynamic Range Novel View Synthesis at 1000x Speed via Gaussian Splatting [76.5908492298286]
Existing HDR NVS methods are mainly based on NeRF.
They suffer from long training time and slow inference speed.
We propose a new framework, High Dynamic Range Gaussian Splatting (HDR-GS).
arXiv Detail & Related papers (2024-05-24T00:46:58Z)
- Event-based Asynchronous HDR Imaging by Temporal Incident Light Modulation [54.64335350932855]
We propose a Pixel-Asynchronous HDR imaging system, based on key insights into the challenges in HDR imaging.
Our proposed Asyn system integrates the Dynamic Vision Sensors (DVS) with a set of LCD panels.
The LCD panels modulate the irradiance incident upon the DVS by altering their transparency, thereby triggering the pixel-independent event streams.
arXiv Detail & Related papers (2024-03-14T13:45:09Z)
- Towards High-quality HDR Deghosting with Conditional Diffusion Models [88.83729417524823]
High Dynamic Range (HDR) images can be recovered from several Low Dynamic Range (LDR) images by existing Deep Neural Network (DNN) techniques.
DNNs still generate ghosting artifacts when LDR images have saturation and large motion.
We formulate the HDR deghosting problem as an image generation task that leverages LDR features as the diffusion model's condition.
arXiv Detail & Related papers (2023-11-02T01:53:55Z)
- DeepHS-HDRVideo: Deep High Speed High Dynamic Range Video Reconstruction [23.341594337637545]
We propose to align the input LDR frames using a pre-trained video frame interpolation network.
This results in better alignment of LDR frames, since we circumvent the error-prone exposure matching step.
We also present the first method to generate high FPS HDR videos.
arXiv Detail & Related papers (2022-10-10T04:27:45Z)
- StyleLight: HDR Panorama Generation for Lighting Estimation and Editing [98.20167223076756]
We present a new lighting estimation and editing framework to generate high-dynamic-range (HDR) indoor panorama lighting from a single limited field-of-view (LFOV) image.
Our framework achieves superior performance over state-of-the-art methods on indoor lighting estimation.
arXiv Detail & Related papers (2022-07-29T17:58:58Z)
- HDR Reconstruction from Bracketed Exposures and Events [12.565039752529797]
Reconstruction of high-quality HDR images is at the core of modern computational photography.
We present a multi-modal end-to-end learning-based HDR imaging system that fuses bracketed images and events in the feature domain.
Our framework exploits the higher temporal resolution of events by sub-sampling the input event streams using a sliding window.
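The sliding-window sub-sampling of the event stream can be sketched as follows (the event tuple layout and the window/stride values are illustrative assumptions, not the paper's parameters):

```python
def sliding_windows(events, window, stride):
    """Group timestamped events into (possibly overlapping) temporal windows.

    events: list of (timestamp, x, y, polarity) tuples, sorted by timestamp.
    window: window length in the same time unit as the timestamps.
    stride: step between consecutive window start times.
    """
    if not events:
        return []
    t0, t_end = events[0][0], events[-1][0]
    out = []
    start = t0
    while start <= t_end:
        # Collect events whose timestamps fall inside [start, start + window).
        out.append([e for e in events if start <= e[0] < start + window])
        start += stride
    return out
```

With a window longer than the stride, consecutive windows overlap, so each event can contribute to more than one fused HDR result.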
arXiv Detail & Related papers (2022-03-28T15:04:41Z)
- HDRVideo-GAN: Deep Generative HDR Video Reconstruction [19.837271879354184]
We propose an end-to-end GAN-based framework for HDR video reconstruction from LDR sequences with alternating exposures.
We first extract clean LDR frames from noisy LDR video with alternating exposures, using a denoising network trained in a self-supervised setting.
We then align the neighboring alternating-exposure frames to a reference frame and reconstruct high-quality HDR frames in a complete adversarial setting.
arXiv Detail & Related papers (2021-10-22T14:02:03Z)
- HDR-GAN: HDR Image Reconstruction from Multi-Exposed LDR Images with Large Motions [62.44802076971331]
We propose a novel GAN-based model, HDR-GAN, for synthesizing HDR images from multi-exposed LDR images.
By incorporating adversarial learning, our method is able to produce faithful information in the regions with missing content.
arXiv Detail & Related papers (2020-07-03T11:42:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.