A Dual-branch Network for Infrared and Visible Image Fusion
- URL: http://arxiv.org/abs/2101.09643v1
- Date: Sun, 24 Jan 2021 04:18:32 GMT
- Title: A Dual-branch Network for Infrared and Visible Image Fusion
- Authors: Yu Fu, Xiao-Jun Wu
- Abstract summary: We propose a new method based on dense blocks and GANs.
We directly insert the visible light input image into each layer of the entire network.
Our experiments show that the fused images obtained by our approach achieve good scores on multiple evaluation indicators.
- Score: 20.15854042473049
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning is a rapidly developing approach in the field of infrared and
visible image fusion. In this context, the use of dense blocks in deep networks
significantly improves the utilization of shallow information, and the
combination of the Generative Adversarial Network (GAN) also improves the
fusion performance of two source images. We propose a new method based on dense
blocks and GANs, and we directly insert the visible light input image into
each layer of the entire network. We use SSIM and gradient loss functions,
which are more consistent with human perception, instead of a mean squared
error loss. After adversarial training between the generator and the
discriminator, a trained end-to-end fusion network -- the generator network --
is finally obtained. Our experiments show that the fused images obtained by
our approach achieve good scores on multiple evaluation indicators. Further,
our fused images show better visual quality across multiple sets of
comparisons and are more satisfying to human visual perception.
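The SSIM and gradient losses mentioned in the abstract can be sketched roughly as follows. This is a minimal NumPy sketch under stated assumptions: it uses a global (single-window) SSIM rather than the sliding-window form typically used in practice, and the exact constants and weighting are illustrative, not taken from the paper.

```python
import numpy as np

def ssim_loss(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # Single-window (global) SSIM over the whole image; the paper's loss
    # presumably uses a sliding-window SSIM -- this is a simplified sketch.
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
    return 1.0 - ssim  # 0 when the two images are identical

def gradient_loss(fused, source):
    # L1 distance between finite-difference image gradients, pushing the
    # fused image to keep the source's edge structure.
    gx = np.abs(np.diff(fused, axis=1) - np.diff(source, axis=1))
    gy = np.abs(np.diff(fused, axis=0) - np.diff(source, axis=0))
    return gx.mean() + gy.mean()

rng = np.random.default_rng(0)
img = rng.random((32, 32))
print(ssim_loss(img, img))      # 0.0 for identical images
print(gradient_loss(img, img))  # 0.0
```

Both terms are differentiable, which is why they can replace mean squared error as training objectives; in the paper's setting each source image would contribute its own gradient term.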
Related papers
- GAN-HA: A generative adversarial network with a novel heterogeneous dual-discriminator network and a new attention-based fusion strategy for infrared and visible image fusion [0.1160897408844138]
Infrared and visible image fusion (IVIF) aims to preserve thermal radiation information from infrared images while integrating texture details from visible images.
Existing dual-discriminator generative adversarial networks (GANs) often rely on two structurally identical discriminators for learning.
This paper proposes a novel GAN with a heterogeneous dual-discriminator network and an attention-based fusion strategy.
arXiv Detail & Related papers (2024-04-24T17:06:52Z)
- An Interactively Reinforced Paradigm for Joint Infrared-Visible Image Fusion and Saliency Object Detection [59.02821429555375]
This research focuses on the discovery and localization of hidden objects in the wild and serves unmanned systems.
Through empirical analysis, infrared and visible image fusion (IVIF) makes hard-to-find objects apparent.
Multimodal salient object detection (SOD) accurately delineates the precise spatial location of objects within the picture.
arXiv Detail & Related papers (2023-05-17T06:48:35Z)
- LRRNet: A Novel Representation Learning Guided Fusion Network for Infrared and Visible Images [98.36300655482196]
We formulate the fusion task mathematically, and establish a connection between its optimal solution and the network architecture that can implement it.
In particular, we adopt a learnable representation approach to the fusion task, in which the construction of the fusion network architecture is guided by the optimisation algorithm that produces the learnable model.
Based on this novel network architecture, an end-to-end lightweight fusion network is constructed to fuse infrared and visible light images.
arXiv Detail & Related papers (2023-04-11T12:11:23Z)
- CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature Ensemble for Multi-modality Image Fusion [72.8898811120795]
We propose a coupled contrastive learning network, dubbed CoCoNet, to realize infrared and visible image fusion.
Our method achieves state-of-the-art (SOTA) performance under both subjective and objective evaluation.
arXiv Detail & Related papers (2022-11-20T12:02:07Z)
- PC-GANs: Progressive Compensation Generative Adversarial Networks for Pan-sharpening [50.943080184828524]
We propose a novel two-step model for pan-sharpening that sharpens the MS image through the progressive compensation of the spatial and spectral information.
The whole model is composed of triple GANs, and based on the specific architecture, a joint compensation loss function is designed to enable the triple GANs to be trained simultaneously.
arXiv Detail & Related papers (2022-07-29T03:09:21Z)
- Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection [65.30079184700755]
This study addresses the issue of fusing infrared and visible images that appear differently for object detection.
Previous approaches discover commonalities underlying the two modalities and fuse them in a common space via either iterative optimization or deep networks.
This paper proposes a bilevel optimization formulation for the joint problem of fusion and detection, and then unrolls to a target-aware Dual Adversarial Learning (TarDAL) network for fusion and a commonly used detection network.
arXiv Detail & Related papers (2022-03-30T11:44:56Z)
- Infrared and Visible Image Fusion via Interactive Compensatory Attention Adversarial Learning [7.995162257955025]
We propose a novel end-to-end model based on generative adversarial training to achieve a better fusion balance.
In particular, in the generator, we construct a multi-level encoder-decoder network with a triple path, and adopt the infrared and visible paths to provide additional intensity and gradient information.
In addition, dual discriminators are designed to identify the similar distribution between fused result and source images, and the generator is optimized to produce a more balanced result.
arXiv Detail & Related papers (2022-03-29T08:28:14Z)
- Unsupervised Image Fusion Method based on Feature Mutual Mapping [16.64607158983448]
We propose an unsupervised adaptive image fusion method to address the above issues.
We construct a global map to measure the connections of pixels between the input source images.
Our method achieves superior performance in both visual perception and objective evaluation.
arXiv Detail & Related papers (2022-01-25T07:50:14Z)
- FuseVis: Interpreting neural networks for image fusion using per-pixel saliency visualization [10.156766309614113]
Unsupervised-learning-based convolutional neural networks (CNNs) have been utilized for different types of image fusion tasks.
It is challenging to analyze the reliability of these CNNs for image fusion tasks since no ground truth is available.
We present a novel real-time visualization tool, named FuseVis, with which the end-user can compute per-pixel saliency maps.
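The per-pixel saliency idea behind FuseVis can be approximated by finite differences: perturb an input pixel and measure how the corresponding fused pixel responds. The weighted-average `fuse` function below is a toy stand-in assumption, not FuseVis's actual backend, which visualizes real CNNs.

```python
import numpy as np

def fuse(a, b):
    # Toy stand-in for a trained fusion network (weighted average);
    # purely illustrative, not the tool's actual model.
    return 0.6 * a + 0.4 * b

def per_pixel_saliency(a, b, eps=1e-4):
    # Finite-difference sensitivity of each fused pixel to the matching
    # input pixel of `a` (the diagonal of the Jacobian d fuse / d a).
    base = fuse(a, b)
    sal = np.zeros_like(a)
    for idx in np.ndindex(a.shape):
        pert = a.copy()
        pert[idx] += eps
        sal[idx] = (fuse(pert, b)[idx] - base[idx]) / eps
    return sal

a = np.zeros((4, 4))
b = np.ones((4, 4))
sal = per_pixel_saliency(a, b)
print(sal[0, 0])  # ≈ 0.6: the fused pixel's sensitivity to input `a`
```

In practice a real tool would use backpropagation rather than this O(pixels) perturbation loop; the finite-difference form is only meant to make the quantity being visualized concrete.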
arXiv Detail & Related papers (2020-12-06T10:03:02Z)
- Image Fine-grained Inpainting [89.17316318927621]
We present a one-stage model that utilizes dense combinations of dilated convolutions to obtain larger and more effective receptive fields.
To better train this efficient generator, in addition to the frequently used VGG feature-matching loss, we design a novel self-guided regression loss.
We also employ a discriminator with local and global branches to ensure local-global contents consistency.
arXiv Detail & Related papers (2020-02-07T03:45:25Z)
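The receptive-field benefit of stacked dilated convolutions mentioned in the inpainting entry can be illustrated with a small calculation; the 3x3 kernel size and the specific dilation rates here are illustrative assumptions, not the paper's configuration.

```python
def receptive_field(dilations, kernel=3):
    # Receptive field of stacked stride-1 convolutions: each layer with
    # dilation d widens the field by (kernel - 1) * d pixels.
    rf = 1
    for d in dilations:
        rf += (kernel - 1) * d
    return rf

print(receptive_field([1, 1, 1, 1]))  # 9: four plain 3x3 convs
print(receptive_field([1, 2, 4, 8]))  # 31: same depth, dilated
```

At equal depth and parameter count, exponentially growing dilation rates cover a far larger context, which is why dilated combinations are attractive for filling large missing regions.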
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.