Neural Augmentation Based Panoramic High Dynamic Range Stitching
- URL: http://arxiv.org/abs/2409.04679v1
- Date: Sat, 7 Sep 2024 02:16:19 GMT
- Title: Neural Augmentation Based Panoramic High Dynamic Range Stitching
- Authors: Chaobing Zheng, Yilun Xu, Weihai Chen, Shiqian Wu, Zhengguo Li
- Abstract summary: A novel neural augmentation based panoramic HDR stitching algorithm is proposed in this paper.
Experimental results demonstrate that the proposed algorithm outperforms existing panoramic stitching algorithms.
- Score: 30.47155955320407
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Due to saturated regions of input low dynamic range (LDR) images and large intensity changes among the LDR images caused by different exposures, it is challenging to produce an information-enriched panoramic LDR image without visual artifacts for a high dynamic range (HDR) scene by stitching multiple geometrically synchronized LDR images with different exposures and pairwise overlapping fields of view (OFOVs). Fortunately, the stitching of such images is innately a perfect scenario for the fusion of a physics-driven approach and a data-driven approach due to their OFOVs. Based on this new insight, a novel neural augmentation based panoramic HDR stitching algorithm is proposed in this paper. The physics-driven approach is built using the OFOVs. Differently exposed images of each view are initially generated by the physics-driven approach, then refined by a data-driven approach, and finally used to produce panoramic LDR images with different exposures. All the panoramic LDR images with different exposures are combined via a multi-scale exposure fusion algorithm to produce the final panoramic LDR image. Experimental results demonstrate that the proposed algorithm outperforms existing panoramic stitching algorithms.
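The final step of the pipeline above combines the differently exposed panoramic LDR images via exposure fusion. As a rough illustration only (the paper uses a multi-scale pyramid scheme; this sketch is a single-scale simplification using a Mertens-style well-exposedness weight, with all function names being assumptions):

```python
import numpy as np

def well_exposedness(img, sigma=0.2):
    # Gaussian weight favoring pixels near mid-gray (0.5),
    # multiplied across color channels.
    w = np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))
    return np.prod(w, axis=-1)

def fuse_exposures(ldr_stack):
    """Single-scale exposure fusion sketch: weight each LDR frame
    (values in [0, 1]) by its per-pixel well-exposedness and blend.
    The actual algorithm blends Laplacian pyramids at multiple scales."""
    weights = np.stack([well_exposedness(img) for img in ldr_stack])
    weights /= weights.sum(axis=0, keepdims=True) + 1e-12
    stack = np.stack(ldr_stack)                      # (N, H, W, 3)
    fused = (weights[..., None] * stack).sum(axis=0)
    return np.clip(fused, 0.0, 1.0)
```

A single-scale blend like this can produce seams at weight-map discontinuities, which is exactly why multi-scale (pyramid) fusion is used in practice.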
Related papers
- Semantic Aware Diffusion Inverse Tone Mapping [5.65968650127342]
Inverse tone mapping attempts to boost captured Standard Dynamic Range (SDR) images back to High Dynamic Range (HDR).
We present a novel inverse tone mapping approach for mapping SDR images to HDR that generates lost details in clipped regions through a semantic-aware diffusion based inpainting approach.
arXiv Detail & Related papers (2024-05-24T11:44:22Z) - Generating Content for HDR Deghosting from Frequency View [56.103761824603644]
Recent Diffusion Models (DMs) have been introduced in HDR imaging field.
DMs require extensive iterations with large models to estimate entire images.
We propose the Low-Frequency aware Diffusion (LF-Diff) model for ghost-free HDR imaging.
arXiv Detail & Related papers (2024-04-01T01:32:11Z) - Pano-NeRF: Synthesizing High Dynamic Range Novel Views with Geometry
from Sparse Low Dynamic Range Panoramic Images [82.1477261107279]
We propose the irradiance fields from sparse LDR panoramic images to increase the observation counts for faithful geometry recovery.
Experiments demonstrate that the irradiance fields outperform state-of-the-art methods on both geometry recovery and HDR reconstruction.
arXiv Detail & Related papers (2023-12-26T08:10:22Z) - Towards High-quality HDR Deghosting with Conditional Diffusion Models [88.83729417524823]
High Dynamic Range (HDR) images can be recovered from several Low Dynamic Range (LDR) images by existing Deep Neural Network (DNN) techniques.
DNNs still generate ghosting artifacts when LDR images have saturation and large motion.
We formulate the HDR deghosting problem as an image generation problem that leverages LDR features as the diffusion model's condition.
arXiv Detail & Related papers (2023-11-02T01:53:55Z) - Multi-Exposure HDR Composition by Gated Swin Transformer [8.619880437958525]
This paper provides a novel multi-exposure fusion model based on Swin Transformer.
We exploit long-distance contextual dependencies in the exposure-space pyramid via the self-attention mechanism.
Experiments show that our model achieves accuracy on par with current top-performing multi-exposure HDR imaging models.
arXiv Detail & Related papers (2023-03-15T15:38:43Z) - GlowGAN: Unsupervised Learning of HDR Images from LDR Images in the Wild [74.52723408793648]
We present the first method for learning a generative model of HDR images from in-the-wild LDR image collections in a fully unsupervised manner.
The key idea is to train a generative adversarial network (GAN) to generate HDR images which, when projected to LDR under various exposures, are indistinguishable from real LDR images.
Experiments show that our method GlowGAN can synthesize photorealistic HDR images in many challenging cases such as landscapes, lightning, or windows.
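The key idea in GlowGAN, projecting a generated HDR image to LDR under various exposures so the results are indistinguishable from real LDR photos, can be illustrated with a simple differentiable-style projection operator. This is a hedged sketch, not GlowGAN's actual implementation; the gamma camera curve and exposure range are assumptions:

```python
import numpy as np

def project_to_ldr(hdr, exposure, gamma=2.2):
    """Project a linear HDR image to an LDR image under a given
    exposure: scale radiance, clip to [0, 1], then gamma-encode
    (a simple stand-in for a real camera response function)."""
    return np.clip(hdr * exposure, 0.0, 1.0) ** (1.0 / gamma)

def random_exposure_batch(hdr, rng, n=4, lo=0.25, hi=4.0):
    # Sample log-uniform exposures, mimicking the "various exposures"
    # under which projected images must look like real LDR photos.
    exposures = np.exp(rng.uniform(np.log(lo), np.log(hi), size=n))
    return [project_to_ldr(hdr, e) for e in exposures]
```

In the adversarial setup, a discriminator would see these projected LDR images alongside real in-the-wild LDR photos, so the generator is pushed to produce HDR radiance that is plausible at every exposure.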
arXiv Detail & Related papers (2022-11-22T15:42:08Z) - StyleLight: HDR Panorama Generation for Lighting Estimation and Editing [98.20167223076756]
We present a new lighting estimation and editing framework to generate high-dynamic-range (HDR) indoor panorama lighting from a single limited field-of-view (LFOV) image.
Our framework achieves superior performance over state-of-the-art methods on indoor lighting estimation.
arXiv Detail & Related papers (2022-07-29T17:58:58Z) - FlexHDR: Modelling Alignment and Exposure Uncertainties for Flexible HDR
Imaging [0.9185931275245008]
We present a new HDR imaging technique that models alignment and exposure uncertainties to produce high quality HDR results.
We introduce a strategy that jointly aligns images and assesses alignment and exposure reliability using an HDR-aware, uncertainty-driven attention map.
Experimental results show our method can produce better-quality HDR images with up to 0.8 dB PSNR improvement over the state of the art.
arXiv Detail & Related papers (2022-01-07T14:27:17Z) - PAS-MEF: Multi-exposure image fusion based on principal component
analysis, adaptive well-exposedness and saliency map [0.0]
With regular low dynamic range (LDR) capture/display devices, significant details may not be preserved in images due to the huge dynamic range of natural scenes.
This study proposes an efficient multi-exposure fusion (MEF) approach with a simple yet effective weight extraction method.
Experimental comparisons with existing techniques demonstrate that the proposed method produces very strong statistical and visual results.
arXiv Detail & Related papers (2021-05-25T10:22:43Z) - Single-Image HDR Reconstruction by Learning to Reverse the Camera
Pipeline [100.5353614588565]
We propose to incorporate the domain knowledge of the LDR image formation pipeline into our model.
We model the HDR-to-LDR image formation pipeline as (1) dynamic range clipping, (2) non-linear mapping by a camera response function, and (3) quantization.
We demonstrate that the proposed method performs favorably against state-of-the-art single-image HDR reconstruction algorithms.
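The three-stage HDR-to-LDR formation model named above can be sketched directly. This is an illustrative forward model only; the gamma curve standing in for the learned camera response function, and the parameter names, are assumptions:

```python
import numpy as np

def hdr_to_ldr(hdr, exposure=1.0, gamma=2.2, bits=8):
    """Forward HDR-to-LDR formation model with the three stages:
    (1) dynamic range clipping, (2) a non-linear camera response
    function (a simple gamma curve here), (3) quantization."""
    clipped = np.clip(hdr * exposure, 0.0, 1.0)   # (1) clipping
    crf = clipped ** (1.0 / gamma)                # (2) response curve
    levels = 2 ** bits - 1
    return np.round(crf * levels) / levels        # (3) quantization
```

Single-image HDR reconstruction then amounts to learning to invert each stage: dequantization, inverse response curve, and hallucination of the clipped highlights.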
arXiv Detail & Related papers (2020-04-02T17:59:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.