Multi-Exposure HDR Composition by Gated Swin Transformer
- URL: http://arxiv.org/abs/2303.08704v1
- Date: Wed, 15 Mar 2023 15:38:43 GMT
- Title: Multi-Exposure HDR Composition by Gated Swin Transformer
- Authors: Rui Zhou, Yan Niu
- Abstract summary: This paper provides a novel multi-exposure fusion model based on Swin Transformer.
We exploit long-distance contextual dependencies in the exposure-space pyramid via the self-attention mechanism.
Experiments show that our model achieves accuracy on par with current top-performing multi-exposure HDR imaging models.
- Score: 8.619880437958525
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fusing a sequence of perfectly aligned images captured at various exposures
has shown great potential to approach High Dynamic Range (HDR) imaging with
sensors of limited dynamic range. However, in the presence of large motion of
scene objects or the camera, mis-alignment is almost inevitable and leads to
the notorious "ghost" artifacts. In addition, factors such as noise in dark
regions or color saturation in over-bright regions may prevent local image
details from being filled into the HDR image. This paper provides a novel
multi-exposure fusion model based on Swin Transformer. Particularly, we design
feature selection gates, which are integrated with the feature extraction
layers to detect outliers and block them from HDR image synthesis. To
reconstruct missing local details from well-aligned and properly exposed
regions, we exploit long-distance contextual dependencies in the
exposure-space pyramid via the self-attention mechanism. Extensive numerical
and visual evaluations were conducted on a variety of benchmark datasets. The
experiments show that our model achieves accuracy on par with current
top-performing multi-exposure HDR imaging models while being more efficient.
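The gating idea described in the abstract can be illustrated with a minimal NumPy sketch. This is a hypothetical, heavily simplified stand-in for the paper's learned Swin-based gates: each exposure receives a per-pixel gate that suppresses pixels deviating strongly from the reference exposure (likely mis-aligned "ghost" outliers), and a normalized weighted average fuses the stack. All function and parameter names here are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def gated_exposure_fusion(images, ref_index=1, sigma=0.2):
    """Toy gated multi-exposure fusion (illustrative only).

    images: list of float arrays in [0, 1], all the same shape.
    ref_index: which exposure serves as the alignment reference.
    sigma: width of the reliability gate (an assumption, not tuned).
    """
    ref = images[ref_index]
    gates = []
    for img in images:
        # Reliability gate: near 1 where this exposure agrees with the
        # reference, near 0 where it is a probable outlier to be blocked.
        gate = np.exp(-((img - ref) ** 2) / (2 * sigma ** 2))
        # Well-exposedness weight: favor mid-tone pixels over clipped ones.
        exposedness = np.exp(-((img - 0.5) ** 2) / (2 * 0.2 ** 2))
        gates.append(gate * exposedness)
    weights = np.stack(gates)
    weights /= weights.sum(axis=0, keepdims=True) + 1e-8
    return (weights * np.stack(images)).sum(axis=0)
```

For example, fusing a dark, a mid, and a bright constant image with the mid exposure as reference yields a result close to the reference, since the gates down-weight the two deviating exposures. The paper's actual gates are learned inside the feature extraction layers rather than hand-crafted like this.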
Related papers
- HDRT: Infrared Capture for HDR Imaging [8.208995723545502]
We propose a new approach, High Dynamic Range Thermal (HDRT), for HDR acquisition using a separate, commonly available, thermal infrared (IR) sensor.
We propose a novel deep neural method (HDRTNet) which combines IR and SDR content to generate HDR images.
We show substantial quantitative and qualitative improvements on both over- and under-exposed images, demonstrating that our approach is robust across a variety of lighting conditions.
arXiv Detail & Related papers (2024-06-08T13:43:44Z)
- Semantic Aware Diffusion Inverse Tone Mapping [5.65968650127342]
Inverse tone mapping attempts to boost captured Standard Dynamic Range (SDR) images back to High Dynamic Range (HDR).
We present a novel inverse tone mapping approach for mapping SDR images to HDR that generates lost details in clipped regions through a semantic-aware diffusion based inpainting approach.
arXiv Detail & Related papers (2024-05-24T11:44:22Z)
- Towards High-quality HDR Deghosting with Conditional Diffusion Models [88.83729417524823]
High Dynamic Range (HDR) images can be recovered from several Low Dynamic Range (LDR) images by existing Deep Neural Network (DNN) techniques.
However, DNNs still generate ghosting artifacts when the LDR images exhibit saturation and large motion.
We formulate HDR deghosting as an image generation problem that leverages LDR features as the diffusion model's condition.
arXiv Detail & Related papers (2023-11-02T01:53:55Z)
- Self-Supervised High Dynamic Range Imaging with Multi-Exposure Images in Dynamic Scenes [58.66427721308464]
Self is a self-supervised reconstruction method that only requires dynamic multi-exposure images during training.
Self achieves superior results compared with state-of-the-art self-supervised methods, and performance comparable to supervised ones.
arXiv Detail & Related papers (2023-10-03T07:10:49Z)
- SMAE: Few-shot Learning for HDR Deghosting with Saturation-Aware Masked Autoencoders [97.64072440883392]
We propose a novel semi-supervised approach to realize few-shot HDR imaging via two stages of training, called SSHDR.
Unlike previous methods, it avoids directly recovering content and removing ghosts simultaneously, which is hard to do optimally.
Experiments demonstrate that SSHDR outperforms state-of-the-art methods quantitatively and qualitatively within and across different datasets.
arXiv Detail & Related papers (2023-04-14T03:42:51Z)
- GlowGAN: Unsupervised Learning of HDR Images from LDR Images in the Wild [74.52723408793648]
We present the first method for learning a generative model of HDR images from in-the-wild LDR image collections in a fully unsupervised manner.
The key idea is to train a generative adversarial network (GAN) to generate HDR images which, when projected to LDR under various exposures, are indistinguishable from real LDR images.
Experiments show that our method GlowGAN can synthesize photorealistic HDR images in many challenging cases such as landscapes, lightning, or windows.
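The projection step GlowGAN relies on — mapping a generated HDR image to an LDR observation under some exposure — can be sketched as a simple camera model: scale by the exposure, clip to the sensor range, gamma-encode. The parameter names and the plain gamma curve are assumptions for illustration; GlowGAN's exact camera model may differ.

```python
import numpy as np

def project_to_ldr(hdr, exposure, gamma=2.2):
    """Project a linear HDR image to an LDR observation (illustrative).

    hdr: non-negative float array of linear radiance values.
    exposure: scalar exposure multiplier (assumed parameterization).
    gamma: display gamma; 2.2 is a common default, not GlowGAN's choice.
    """
    # Scale by exposure, then clip to the [0, 1] sensor range.
    ldr_linear = np.clip(hdr * exposure, 0.0, 1.0)
    # Gamma-encode to obtain a display-referred LDR image.
    return ldr_linear ** (1.0 / gamma)
```

In the adversarial setup described above, the discriminator would compare such projections (rendered at several random exposures) against real LDR photographs, so the generator only succeeds if its HDR output is plausible under every exposure.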
arXiv Detail & Related papers (2022-11-22T15:42:08Z)
- Deep Progressive Feature Aggregation Network for High Dynamic Range Imaging [24.94466716276423]
We propose a deep progressive feature aggregation network for improving HDR imaging quality in dynamic scenes.
Our method implicitly samples high-correspondence features and aggregates them in a coarse-to-fine manner for alignment.
Experiments show that our proposed method can achieve state-of-the-art performance under different scenes.
arXiv Detail & Related papers (2022-08-04T04:37:35Z)
- HDR Reconstruction from Bracketed Exposures and Events [12.565039752529797]
Reconstruction of high-quality HDR images is at the core of modern computational photography.
We present a multi-modal, end-to-end learning-based HDR imaging system that fuses bracketed images and events in the feature domain.
Our framework exploits the higher temporal resolution of events by sub-sampling the input event streams using a sliding window.
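The sliding-window sub-sampling mentioned above can be sketched in a few lines of plain Python. This is a hypothetical illustration of the general technique, not the paper's implementation: events are `(timestamp, x, y, polarity)` tuples sorted by time, and each output window collects the events falling in `[t0, t0 + window)`.

```python
def sliding_event_windows(events, window, stride):
    """Split a sorted event stream into overlapping time windows.

    events: list of (timestamp, x, y, polarity) tuples, sorted by timestamp.
    window: temporal extent of each window (same units as timestamps).
    stride: offset between consecutive window start times.
    All parameter names are illustrative assumptions.
    """
    if not events:
        return []
    t_start, t_end = events[0][0], events[-1][0]
    windows = []
    t0 = t_start
    while t0 <= t_end:
        # Collect the events whose timestamps fall inside this window.
        windows.append([e for e in events if t0 <= e[0] < t0 + window])
        t0 += stride
    return windows
```

With `stride < window` the windows overlap, so each event contributes to several fused frames; with `stride == window` the stream is partitioned. A production implementation would use binary search over the sorted timestamps instead of a linear scan per window.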
arXiv Detail & Related papers (2022-03-28T15:04:41Z)
- FlexHDR: Modelling Alignment and Exposure Uncertainties for Flexible HDR Imaging [0.9185931275245008]
We present a new HDR imaging technique that models alignment and exposure uncertainties to produce high quality HDR results.
We introduce a strategy that learns to jointly align and assess the alignment and exposure reliability using an HDR-aware, uncertainty-driven attention map.
Experimental results show our method produces higher-quality HDR images, with up to a 0.8 dB PSNR improvement over the state-of-the-art.
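For context, the PSNR metric behind figures like the 0.8 dB gain quoted above is a simple function of mean squared error; a 0.8 dB improvement corresponds to roughly a 17% reduction in MSE, since 10**(0.8/10) ≈ 1.20.

```python
import numpy as np

def psnr(reference, estimate, peak=1.0):
    """Peak signal-to-noise ratio in dB.

    reference, estimate: float arrays of the same shape.
    peak: maximum possible pixel value (1.0 for normalized images).
    Assumes estimate != reference everywhere, so mse > 0.
    """
    mse = np.mean((reference - estimate) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

For example, a uniform error of 0.1 on a [0, 1] image gives an MSE of 0.01 and thus a PSNR of 20 dB.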
arXiv Detail & Related papers (2022-01-07T14:27:17Z)
- HDR-NeRF: High Dynamic Range Neural Radiance Fields [70.80920996881113]
We present High Dynamic Range Neural Radiance Fields (HDR-NeRF) to recover an HDR radiance field from a set of low dynamic range (LDR) views with different exposures.
We are able to generate both novel HDR views and novel LDR views under different exposures.
arXiv Detail & Related papers (2021-11-29T11:06:39Z)
- HDR-GAN: HDR Image Reconstruction from Multi-Exposed LDR Images with Large Motions [62.44802076971331]
We propose a novel GAN-based model, HDR-GAN, for synthesizing HDR images from multi-exposed LDR images.
By incorporating adversarial learning, our method is able to produce faithful information in the regions with missing content.
arXiv Detail & Related papers (2020-07-03T11:42:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.