Boosting HDR Image Reconstruction via Semantic Knowledge Transfer
- URL: http://arxiv.org/abs/2503.15361v1
- Date: Wed, 19 Mar 2025 16:01:27 GMT
- Title: Boosting HDR Image Reconstruction via Semantic Knowledge Transfer
- Authors: Qingsen Yan, Tao Hu, Genggeng Chen, Wei Dong, Yanning Zhang
- Abstract summary: Leveraging scene-specific semantic priors offers a promising solution for restoring heavily degraded regions. These priors are typically extracted from sRGB Standard Dynamic Range (SDR) images. We propose a general framework that transfers semantic knowledge derived from the SDR domain via self-distillation to boost existing HDR reconstruction.
- Score: 45.738735520776004
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recovering High Dynamic Range (HDR) images from multiple Low Dynamic Range (LDR) images becomes challenging when the LDR images exhibit noticeable degradation and missing content. Leveraging scene-specific semantic priors offers a promising solution for restoring heavily degraded regions. However, these priors are typically extracted from sRGB Standard Dynamic Range (SDR) images, and the domain/format gap poses a significant challenge when applying them to HDR imaging. To address this issue, we propose a general framework that transfers semantic knowledge derived from the SDR domain via self-distillation to boost existing HDR reconstruction. Specifically, the proposed framework first introduces the Semantic Priors Guided Reconstruction Model (SPGRM), which leverages SDR image semantic knowledge to address ill-posed problems in the initial HDR reconstruction results. Subsequently, we leverage a self-distillation mechanism that constrains the color and content information with semantic knowledge, aligning the external outputs between the baseline and SPGRM. Furthermore, to transfer the semantic knowledge of the internal features, we utilize a semantic knowledge alignment module (SKAM) to fill in the missing semantic contents with complementary masks. Extensive experiments demonstrate that our method can significantly improve the HDR imaging quality of existing methods.
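The abstract describes two mechanisms: fusing internal features through complementary masks (SKAM) and aligning the baseline's outputs with the semantic-prior branch via self-distillation. A minimal NumPy sketch of these two ideas follows; the function names, the L1 alignment loss, and the binary-mask convention (1 = well-exposed region) are illustrative assumptions, not details from the paper.

```python
import numpy as np

def complementary_mask_fusion(baseline_feat, spgrm_feat, mask):
    """Fuse two feature maps with complementary binary masks (SKAM-style sketch).

    Regions the baseline handles well (mask == 1) keep baseline features;
    degraded regions (mask == 0) are filled from the semantic-prior branch.
    """
    mask = mask.astype(baseline_feat.dtype)
    return mask * baseline_feat + (1.0 - mask) * spgrm_feat

def self_distillation_loss(student_out, teacher_out):
    """L1 alignment between baseline (student) and SPGRM (teacher) outputs.

    The paper constrains color/content information with semantic knowledge;
    a plain L1 term is used here purely as a placeholder objective.
    """
    return np.mean(np.abs(student_out - teacher_out))

# Toy example on 1x4x4 "feature maps"
base = np.full((1, 4, 4), 0.2)   # baseline features
prior = np.full((1, 4, 4), 0.8)  # semantic-prior branch features
mask = np.zeros((1, 4, 4))
mask[:, :, :2] = 1.0             # left half: well-exposed

fused = complementary_mask_fusion(base, prior, mask)
loss = self_distillation_loss(base, prior)
```

The fused map keeps baseline values where the mask is 1 and takes prior-branch values elsewhere, which is the "fill the missing semantic contents with complementary masks" idea in miniature.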
Related papers
- Generating Content for HDR Deghosting from Frequency View [56.103761824603644]
Recent Diffusion Models (DMs) have been introduced in HDR imaging field.
DMs require extensive iterations with large models to estimate entire images.
We propose the Low-Frequency aware Diffusion (LF-Diff) model for ghost-free HDR imaging.
arXiv Detail & Related papers (2024-04-01T01:32:11Z)
- Self-Supervised High Dynamic Range Imaging with Multi-Exposure Images in Dynamic Scenes [58.66427721308464]
Self is a self-supervised reconstruction method that only requires dynamic multi-exposure images during training.
Self achieves superior results against the state-of-the-art self-supervised methods, and comparable performance to supervised ones.
arXiv Detail & Related papers (2023-10-03T07:10:49Z)
- Efficient HDR Reconstruction from Real-World Raw Images [16.54071503000866]
High-definition screens on edge devices stimulate a strong demand for efficient high dynamic range (HDR) algorithms.
Many existing HDR methods either deliver unsatisfactory results or consume too much computational and memory resources.
In this work, we identify an excellent opportunity to reconstruct HDR directly from raw images and to investigate novel neural network structures.
arXiv Detail & Related papers (2023-06-17T10:10:15Z)
- SMAE: Few-shot Learning for HDR Deghosting with Saturation-Aware Masked Autoencoders [97.64072440883392]
We propose a novel semi-supervised approach to realize few-shot HDR imaging via two stages of training, called SSHDR.
Unlike previous methods, which directly recover content and remove ghosts simultaneously and are therefore hard to optimize.
Experiments demonstrate that SSHDR outperforms state-of-the-art methods quantitatively and qualitatively within and across different datasets.
arXiv Detail & Related papers (2023-04-14T03:42:51Z)
- HDR Reconstruction from Bracketed Exposures and Events [12.565039752529797]
Reconstruction of high-quality HDR images is at the core of modern computational photography.
We present a multi-modal end-to-end learning-based HDR imaging system that fuses bracketed images and events in the feature domain.
Our framework exploits the higher temporal resolution of events by sub-sampling the input event streams using a sliding window.
arXiv Detail & Related papers (2022-03-28T15:04:41Z)
- HDR-cGAN: Single LDR to HDR Image Translation using Conditional GAN [24.299931323012757]
Low Dynamic Range (LDR) cameras are incapable of representing the wide dynamic range of the real-world scene.
We propose a deep learning based approach to recover details in the saturated areas while reconstructing the HDR image.
We present a novel conditional GAN (cGAN) based framework trained in an end-to-end fashion over the HDR-REAL and HDR-SYNTH datasets.
arXiv Detail & Related papers (2021-10-04T18:50:35Z)
- HDR-GAN: HDR Image Reconstruction from Multi-Exposed LDR Images with Large Motions [62.44802076971331]
We propose a novel GAN-based model, HDR-GAN, for synthesizing HDR images from multi-exposed LDR images.
By incorporating adversarial learning, our method is able to produce faithful information in the regions with missing content.
arXiv Detail & Related papers (2020-07-03T11:42:35Z)
- Single-Image HDR Reconstruction by Learning to Reverse the Camera Pipeline [100.5353614588565]
We propose to incorporate the domain knowledge of the LDR image formation pipeline into our model.
We model the HDR-to-LDR image formation pipeline as (1) dynamic range clipping, (2) non-linear mapping from a camera response function, and (3) quantization.
We demonstrate that the proposed method performs favorably against state-of-the-art single-image HDR reconstruction algorithms.
arXiv Detail & Related papers (2020-04-02T17:59:04Z)
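The three-stage HDR-to-LDR formation pipeline in the last entry (clipping, camera response function, quantization) can be sketched in a few lines. The gamma curve below stands in for a real camera response function, and the parameter names (`exposure`, `gamma`, `bits`) are illustrative assumptions, not from the paper.

```python
import numpy as np

def hdr_to_ldr(hdr, exposure=1.0, gamma=2.2, bits=8):
    """Simulate the three-stage HDR-to-LDR formation pipeline:
    (1) dynamic range clipping, (2) a non-linear camera response
    (a simple gamma curve here, standing in for a learned CRF),
    and (3) quantization to 2**bits levels."""
    x = np.clip(hdr * exposure, 0.0, 1.0)  # (1) dynamic range clipping
    x = np.power(x, 1.0 / gamma)           # (2) non-linear CRF mapping
    levels = 2 ** bits - 1
    x = np.round(x * levels) / levels      # (3) quantization
    return x

# Scene radiance values, including one above the sensor's range
hdr = np.array([0.0, 0.25, 1.0, 4.0])
ldr = hdr_to_ldr(hdr)
```

Reversing this pipeline, stage by stage, is exactly what makes single-image HDR reconstruction ill-posed: clipping and quantization both destroy information that the network must hallucinate back.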
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.