A Unified HDR Imaging Method with Pixel and Patch Level
- URL: http://arxiv.org/abs/2304.06943v2
- Date: Mon, 17 Apr 2023 01:38:17 GMT
- Title: A Unified HDR Imaging Method with Pixel and Patch Level
- Authors: Qingsen Yan, Weiye Chen, Song Zhang, Yu Zhu, Jinqiu Sun, Yanning Zhang
- Abstract summary: We propose a hybrid HDR deghosting network, called HyHDRNet, to generate visually pleasing HDR images.
Experiments demonstrate that HyHDRNet outperforms state-of-the-art methods both quantitatively and qualitatively, achieving appealing HDR visualization with unified textures and colors.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mapping Low Dynamic Range (LDR) images with different exposures to High
Dynamic Range (HDR) remains nontrivial and challenging on dynamic scenes due to
ghosting caused by object motion or camera jitter. With the success of Deep
Neural Networks (DNNs), several DNN-based methods have been proposed to
alleviate ghosting, but they cannot generate satisfactory results when motion and
saturation occur. To generate visually pleasing HDR images in various cases, we
propose a hybrid HDR deghosting network, called HyHDRNet, to learn the
complicated relationship between reference and non-reference images. The
proposed HyHDRNet consists of a content alignment subnetwork and a
Transformer-based fusion subnetwork. Specifically, to effectively avoid
ghosting from the source, the content alignment subnetwork uses patch
aggregation and ghost attention to integrate similar content from other
non-reference images at the patch level and suppress undesired components at the
pixel level. To achieve mutual guidance between patch-level and pixel-level, we
leverage a gating module to sufficiently swap useful information both in
ghosted and saturated regions. Furthermore, to obtain a high-quality HDR image,
the Transformer-based fusion subnetwork uses a Residual Deformable Transformer
Block (RDTB) to adaptively merge information for different exposed regions. We
examined the proposed method on four widely used public HDR image deghosting
datasets. Experiments demonstrate that HyHDRNet outperforms state-of-the-art
methods both quantitatively and qualitatively, achieving appealing HDR
visualization with unified textures and colors.
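The gating module described above is said to swap useful information between the patch-level and pixel-level branches. A minimal NumPy sketch of one plausible realization, assuming a simple sigmoid gate over the concatenated features (the function name, weight shapes, and additive fusion are illustrative assumptions, not the paper's actual design):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_swap(patch_feat, pixel_feat, w_patch, w_pixel):
    """Illustrative gating: each branch is modulated by a gate computed
    from the concatenation of both feature maps, so patch-level and
    pixel-level information can guide each other."""
    concat = np.concatenate([patch_feat, pixel_feat], axis=-1)  # (H, W, 2C)
    gate_p = sigmoid(concat @ w_patch)   # (H, W, C) gate into the patch branch
    gate_x = sigmoid(concat @ w_pixel)   # (H, W, C) gate into the pixel branch
    fused_patch = patch_feat + gate_p * pixel_feat
    fused_pixel = pixel_feat + gate_x * patch_feat
    return fused_patch, fused_pixel

# Toy H x W x C feature maps standing in for the two branches.
rng = np.random.default_rng(0)
C = 8
patch = rng.standard_normal((4, 4, C))
pixel = rng.standard_normal((4, 4, C))
wp = rng.standard_normal((2 * C, C)) * 0.1
wx = rng.standard_normal((2 * C, C)) * 0.1
fp, fx = gated_swap(patch, pixel, wp, wx)
print(fp.shape, fx.shape)  # (4, 4, 8) (4, 4, 8)
```

The point of the sketch is only the data flow: both branches see a gate conditioned on both feature maps, which is how mutual guidance in ghosted and saturated regions could be expressed.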
Related papers
- Towards High-quality HDR Deghosting with Conditional Diffusion Models [88.83729417524823]
High Dynamic Range (HDR) images can be recovered from several Low Dynamic Range (LDR) images by existing Deep Neural Network (DNN) techniques.
DNNs still generate ghosting artifacts when LDR images have saturation and large motion.
We formulate the HDR deghosting problem as an image generation task that leverages LDR features as the diffusion model's condition.
arXiv Detail & Related papers (2023-11-02T01:53:55Z)
- Self-Supervised High Dynamic Range Imaging with Multi-Exposure Images in Dynamic Scenes [58.66427721308464]
Self is a self-supervised reconstruction method that only requires dynamic multi-exposure images during training.
Self achieves superior results against the state-of-the-art self-supervised methods, and comparable performance to supervised ones.
arXiv Detail & Related papers (2023-10-03T07:10:49Z)
- SMAE: Few-shot Learning for HDR Deghosting with Saturation-Aware Masked Autoencoders [97.64072440883392]
We propose a novel semi-supervised approach to realize few-shot HDR imaging via two stages of training, called SSHDR.
Unlike previous methods, which directly recover content and remove ghosts simultaneously, a strategy that rarely reaches an optimum, SSHDR separates the two tasks across its training stages.
Experiments demonstrate that SSHDR outperforms state-of-the-art methods quantitatively and qualitatively within and across different datasets.
arXiv Detail & Related papers (2023-04-14T03:42:51Z)
- High Dynamic Range Imaging with Context-aware Transformer [3.1892103878735454]
We propose a novel hierarchical dual Transformer (HDT) method for ghost-free HDR imaging.
First, we use a CNN-based head with spatial attention mechanisms to extract features from all the LDR images.
Second, the LDR features are delivered to the Transformer, while the local details are extracted using the channel attention mechanism.
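The channel attention mechanism mentioned above can be sketched in squeeze-and-excitation style: pool each channel globally, pass the result through a small bottleneck MLP, and rescale the feature map channel-wise. This is a generic NumPy illustration of channel attention, not the HDT paper's exact module; the weight shapes and reduction ratio are assumptions:

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation style channel attention (illustrative):
    global-average-pool each channel, run a small MLP, and rescale
    the feature map channel-wise with sigmoid weights."""
    pooled = feat.mean(axis=(0, 1))                  # (C,) squeeze step
    hidden = np.maximum(pooled @ w1, 0.0)            # ReLU bottleneck
    weights = 1.0 / (1.0 + np.exp(-(hidden @ w2)))   # (C,) sigmoid excitation
    return feat * weights                            # broadcast over H, W

# Toy H x W x C feature map with a 4x reduction in the bottleneck.
rng = np.random.default_rng(2)
C = 16
feat = rng.standard_normal((8, 8, C))
w1 = rng.standard_normal((C, C // 4)) * 0.1
w2 = rng.standard_normal((C // 4, C)) * 0.1
out = channel_attention(feat, w1, w2)
```

Because the weights depend only on channel-wise statistics, the module emphasizes informative channels (local detail) without altering spatial layout.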
arXiv Detail & Related papers (2023-04-10T06:56:01Z)
- Ghost-free High Dynamic Range Imaging via Hybrid CNN-Transformer and Structure Tensor [12.167049432063132]
We present a hybrid model consisting of a convolutional encoder and a Transformer decoder to generate ghost-free HDR images.
In the encoder, a context aggregation network and non-local attention block are adopted to optimize multi-scale features.
The decoder based on Swin Transformer is utilized to improve the reconstruction capability of the proposed model.
arXiv Detail & Related papers (2022-12-01T15:43:32Z)
- Deep Progressive Feature Aggregation Network for High Dynamic Range Imaging [24.94466716276423]
We propose a deep progressive feature aggregation network for improving HDR imaging quality in dynamic scenes.
Our method implicitly samples high-correspondence features and aggregates them in a coarse-to-fine manner for alignment.
Experiments show that our proposed method can achieve state-of-the-art performance under different scenes.
arXiv Detail & Related papers (2022-08-04T04:37:35Z)
- Segmentation Guided Deep HDR Deghosting [47.1023337218752]
We present a motion segmentation guided convolutional neural network (CNN) approach for high dynamic range (HDR) image deghosting.
First, we segment the moving regions in the input sequence using a CNN. Then, we merge static and moving regions separately with different fusion networks to generate the final ghost-free HDR image.
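The two-step pipeline above (segment moving regions, then merge static and moving regions with separate networks) reduces, at the final compositing step, to a mask-weighted blend. A minimal NumPy sketch, assuming both fusion networks have already produced candidate HDR images and the mask is binary (all names are illustrative, not from the paper):

```python
import numpy as np

def segmentation_guided_merge(static_hdr, moving_hdr, motion_mask):
    """Blend two candidate HDR reconstructions with a motion mask:
    moving pixels come from the motion-specialized branch, static
    pixels from the static-fusion branch."""
    mask = motion_mask[..., None].astype(float)  # (H, W, 1), broadcast over RGB
    return mask * moving_hdr + (1.0 - mask) * static_hdr

# Toy H x W x 3 candidates with a moving region in the center.
rng = np.random.default_rng(1)
static = rng.random((4, 4, 3))
moving = rng.random((4, 4, 3))
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
merged = segmentation_guided_merge(static, moving, mask)
```

In practice the segmentation CNN would output a soft mask, in which case the same blend performs a smooth transition at region boundaries.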
arXiv Detail & Related papers (2022-07-04T06:49:27Z)
- HDRUNet: Single Image HDR Reconstruction with Denoising and Dequantization [39.82945546614887]
We propose a novel learning-based approach using a spatially dynamic encoder-decoder network, HDRUNet, to learn an end-to-end mapping for single image HDR reconstruction.
Our method achieves the state-of-the-art performance in quantitative comparisons and visual quality.
arXiv Detail & Related papers (2021-05-27T12:12:34Z)
- UPHDR-GAN: Generative Adversarial Network for High Dynamic Range Imaging with Unpaired Data [42.283022888414656]
The paper proposes a method to effectively fuse multi-exposure inputs and generate high-quality high dynamic range (HDR) images with unpaired datasets.
Deep learning-based HDR image generation methods rely heavily on paired datasets.
Generative Adversarial Networks (GANs) have demonstrated their potential for translating images from source domain X to target domain Y in the absence of paired examples.
arXiv Detail & Related papers (2021-02-03T03:09:14Z)
- HDR-GAN: HDR Image Reconstruction from Multi-Exposed LDR Images with Large Motions [62.44802076971331]
We propose a novel GAN-based model, HDR-GAN, for synthesizing HDR images from multi-exposed LDR images.
By incorporating adversarial learning, our method is able to produce faithful information in the regions with missing content.
arXiv Detail & Related papers (2020-07-03T11:42:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.