Bayesian Fusion for Infrared and Visible Images
- URL: http://arxiv.org/abs/2005.05839v1
- Date: Tue, 12 May 2020 14:57:19 GMT
- Title: Bayesian Fusion for Infrared and Visible Images
- Authors: Zixiang Zhao, Shuang Xu, Chunxia Zhang, Junmin Liu, Jiangshe Zhang
- Abstract summary: In this paper, a novel Bayesian fusion model is established for infrared and visible images.
We aim to make the fused image satisfy the human visual system.
Compared with previous methods, the novel model can generate better fused images with highlighted targets and rich texture details.
- Score: 26.64101343489016
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Infrared and visible image fusion has been a hot topic in image fusion. In
this task, the fused image is expected to contain both the gradient and detailed texture
information of visible images and the thermal radiation and highlighted
targets of infrared images. In this paper, a novel
Bayesian fusion model is established for infrared and visible images. In our
model, the image fusion task is cast as a regression problem. To measure the
variable uncertainty, we formulate the model in a hierarchical Bayesian manner.
To make the fused image satisfy the human visual system, the model
incorporates a total-variation (TV) penalty. The model is then
efficiently inferred by the expectation-maximization (EM) algorithm. We test our
algorithm on the TNO and NIR image fusion datasets against several state-of-the-art
approaches. Compared with previous methods, the novel model can generate
better fused images with highlighted targets and rich texture details, which can
improve the reliability of automatic target detection and recognition
systems.
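The abstract describes fusion as a regression problem regularized by a TV penalty and inferred with EM. The sketch below is a deliberately simplified stand-in, not the authors' hierarchical Bayesian model: it fixes a hypothetical weight `w` toward the infrared image instead of inferring per-pixel uncertainty, replaces EM with plain gradient descent, and uses a smoothed (Charbonnier-style) anisotropic TV term so the objective is differentiable.

```python
import numpy as np

def tv_grad(f, eps=1e-3):
    """Gradient of a smoothed anisotropic total-variation penalty
    TV(f) = sum sqrt(dx^2 + eps) + sum sqrt(dy^2 + eps)."""
    dx = np.diff(f, axis=1)            # horizontal forward differences
    dy = np.diff(f, axis=0)            # vertical forward differences
    px = dx / np.sqrt(dx ** 2 + eps)   # derivative of sqrt(dx^2 + eps)
    py = dy / np.sqrt(dy ** 2 + eps)
    g = np.zeros_like(f)
    g[:, 1:] += px                     # each difference touches two pixels,
    g[:, :-1] -= px                    # with opposite signs
    g[1:, :] += py
    g[:-1, :] -= py
    return g

def fuse(ir, vis, w=0.9, lam=0.05, step=0.2, iters=100):
    """Minimize 0.5*w*(f - ir)^2 + 0.5*(1 - w)*(f - vis)^2 + lam*TV(f)
    by gradient descent; images are assumed to be scaled to [0, 1]."""
    f = w * ir + (1 - w) * vis         # start from the weighted average
    for _ in range(iters):
        grad = w * (f - ir) + (1 - w) * (f - vis) + lam * tv_grad(f)
        f = f - step * grad
    return np.clip(f, 0.0, 1.0)
```

With a bright thermal target in the infrared input and a flat visible background, the fused result keeps the target brighter than its surroundings while the TV term suppresses small oscillations; the paper's EM inference would additionally estimate the fidelity weights from the data rather than fixing `w`.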
Related papers
- CoMoFusion: Fast and High-quality Fusion of Infrared and Visible Image with Consistency Model [20.02742423120295]
Current generative-model-based fusion methods often suffer from unstable training and slow inference.
CoMoFusion generates high-quality images while achieving fast inference.
In order to enhance the texture and salient information of fused images, a novel loss based on pixel value selection is also designed.
arXiv Detail & Related papers (2024-05-31T12:35:06Z)
- Equivariant Multi-Modality Image Fusion [124.11300001864579]
We propose the Equivariant Multi-Modality imAge fusion paradigm for end-to-end self-supervised learning.
Our approach is rooted in the prior knowledge that natural imaging responses are equivariant to certain transformations.
Experiments confirm that EMMA yields high-quality fusion results for infrared-visible and medical images.
arXiv Detail & Related papers (2023-05-19T05:50:24Z)
- Pedestrain detection for low-light vision proposal [0.0]
The demand for pedestrian detection has created a challenging problem for various visual tasks such as image fusion.
In our project, we preprocess the dataset with an image fusion technique, then use a Vision Transformer model to detect pedestrians in the fused images.
arXiv Detail & Related papers (2023-03-17T04:13:58Z)
- DDFM: Denoising Diffusion Model for Multi-Modality Image Fusion [144.9653045465908]
We propose a novel fusion algorithm based on the denoising diffusion probabilistic model (DDPM).
Our approach yields promising fusion results in infrared-visible image fusion and medical image fusion.
arXiv Detail & Related papers (2023-03-13T04:06:42Z)
- CDDFuse: Correlation-Driven Dual-Branch Feature Decomposition for Multi-Modality Image Fusion [138.40422469153145]
We propose a novel Correlation-Driven feature Decomposition Fusion (CDDFuse) network.
We show that CDDFuse achieves promising results in multiple fusion tasks, including infrared-visible image fusion and medical image fusion.
arXiv Detail & Related papers (2022-11-26T02:40:28Z)
- CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature Ensemble for Multi-modality Image Fusion [72.8898811120795]
We propose a coupled contrastive learning network, dubbed CoCoNet, to realize infrared and visible image fusion.
Our method achieves state-of-the-art (SOTA) performance under both subjective and objective evaluation.
arXiv Detail & Related papers (2022-11-20T12:02:07Z)
- Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection [65.30079184700755]
This study addresses the issue of fusing infrared and visible images that appear differently for object detection.
Previous approaches discover commonalities between the two modalities and fuse in the common space, either by iterative optimization or by deep networks.
This paper proposes a bilevel optimization formulation for the joint problem of fusion and detection, then unrolls it into a target-aware Dual Adversarial Learning (TarDAL) network for fusion and a commonly used detection network.
arXiv Detail & Related papers (2022-03-30T11:44:56Z)
- A Deep Decomposition Network for Image Processing: A Case Study for Visible and Infrared Image Fusion [38.17268441062239]
We propose a new image decomposition method based on a convolutional neural network.
We input an infrared image and a visible-light image and decompose each into three high-frequency feature images and one low-frequency feature image.
The two sets of feature images are fused using a specific fusion strategy to obtain fusion feature images.
arXiv Detail & Related papers (2021-02-21T06:34:33Z)
- Learning Selective Mutual Attention and Contrast for RGB-D Saliency Detection [145.4919781325014]
How to effectively fuse cross-modal information is the key problem for RGB-D salient object detection.
Many models use a feature fusion strategy but are limited by low-order point-to-point fusion methods.
We propose a novel mutual attention model by fusing attention and contexts from different modalities.
arXiv Detail & Related papers (2020-10-12T08:50:10Z)
- Efficient and Model-Based Infrared and Visible Image Fusion Via Algorithm Unrolling [24.83209572888164]
Infrared and visible image fusion (IVIF) expects to obtain images that retain thermal radiation information from infrared images and texture details from visible images.
A model-based convolutional neural network (CNN) is proposed to overcome the shortcomings of traditional CNN-based IVIF models.
arXiv Detail & Related papers (2020-05-12T16:15:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.