PAIF: Perception-Aware Infrared-Visible Image Fusion for Attack-Tolerant
Semantic Segmentation
- URL: http://arxiv.org/abs/2308.03979v1
- Date: Tue, 8 Aug 2023 01:55:44 GMT
- Title: PAIF: Perception-Aware Infrared-Visible Image Fusion for Attack-Tolerant
Semantic Segmentation
- Authors: Zhu Liu, Jinyuan Liu, Benzhuang Zhang, Long Ma, Xin Fan, Risheng Liu
- Abstract summary: We propose a perception-aware fusion framework to promote segmentation robustness in adversarial scenes.
We show that our scheme substantially enhances robustness, with gains of 15.3% mIoU, compared with advanced competitors.
- Score: 50.556961575275345
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Infrared and visible image fusion is a powerful technique that combines
complementary information from different modalities for downstream semantic
perception tasks. Existing learning-based methods show remarkable performance,
but suffer from an inherent vulnerability to adversarial attacks,
causing a significant decrease in accuracy. In this work, a perception-aware
fusion framework is proposed to promote segmentation robustness in adversarial
scenes. We first conduct systematic analyses of the components of image
fusion, investigating the correlation with segmentation robustness under
adversarial perturbations. Based on these analyses, we propose a harmonized
architecture search with a decomposition-based structure to balance standard
accuracy and robustness. We also propose an adaptive learning strategy to
improve the parameter robustness of image fusion, which can learn effective
feature extraction under diverse adversarial perturbations. Thus, the goals of
image fusion (i.e., extracting complementary features from source
modalities and defending against attacks) can be realized from the perspectives of
architectural and learning strategies. Extensive experimental results
demonstrate that our scheme substantially enhances robustness, with gains
of 15.3% mIoU for segmentation in adversarial scenes, compared with advanced
competitors. The source codes are available at
https://github.com/LiuZhu-CV/PAIF.
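The adaptive learning strategy described above (learning feature extraction under diverse adversarial perturbations) can be illustrated with a heavily simplified sketch. The code below is not the paper's method: it trains a toy linear classifier with one-step FGSM perturbations of randomly sampled strength, standing in for the paper's segmentation network and harmonized architecture; all function names, shapes, and hyperparameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, eps):
    """One-step FGSM: move each input along the sign of the loss
    gradient w.r.t. that input (for a linear model, (p - y) * w)."""
    p = sigmoid(x @ w)
    grad_x = np.outer(p - y, w)
    return x + eps * np.sign(grad_x)

def adversarial_train(x, y, epochs=200, lr=0.1, eps_range=(0.0, 0.2)):
    """Train on perturbations of varying strength, mimicking the idea of
    learning effective features under diverse adversarial perturbations."""
    w = np.zeros(x.shape[1])
    for _ in range(epochs):
        eps = rng.uniform(*eps_range)       # sample a diverse attack strength
        x_adv = fgsm_perturb(x, y, w, eps)  # attack the current model
        p = sigmoid(x_adv @ w)
        w -= lr * x_adv.T @ (p - y) / len(y)  # BCE gradient step on x_adv
    return w
```

Sampling `eps` per step, rather than fixing one attack budget, is the piece that loosely corresponds to "diverse adversarial perturbations" in the abstract; the real framework additionally searches the architecture itself.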
Related papers
- From Text to Pixels: A Context-Aware Semantic Synergy Solution for
Infrared and Visible Image Fusion [66.33467192279514]
We introduce a text-guided multi-modality image fusion method that leverages the high-level semantics from textual descriptions to integrate semantics from infrared and visible images.
Our method not only produces visually superior fusion results but also achieves a higher detection mAP over existing methods, achieving state-of-the-art results.
arXiv Detail & Related papers (2023-12-31T08:13:47Z)
- Embracing Compact and Robust Architectures for Multi-Exposure Image Fusion [50.598654017728045]
We propose a search-based paradigm, involving self-alignment and detail repletion modules for robust multi-exposure image fusion.
By utilizing scene relighting and deformable convolutions, the self-alignment module can accurately align images despite camera movement.
We realize the state-of-the-art performance in comparison to various competitive schemes, yielding a 4.02% and 29.34% improvement in PSNR for general and misaligned scenarios.
arXiv Detail & Related papers (2023-05-20T17:01:52Z)
- Contrastive View Design Strategies to Enhance Robustness to Domain Shifts in Downstream Object Detection [37.06088084592779]
We conduct an empirical study of contrastive learning and out-of-domain object detection.
We propose strategies to augment views and enhance robustness in appearance-shifted and context-shifted scenarios.
Our results and insights show how to ensure robustness through the choice of views in contrastive learning.
arXiv Detail & Related papers (2022-12-09T00:34:50Z)
- CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature Ensemble for Multi-modality Image Fusion [72.8898811120795]
We propose a coupled contrastive learning network, dubbed CoCoNet, to realize infrared and visible image fusion.
Our method achieves state-of-the-art (SOTA) performance under both subjective and objective evaluation.
arXiv Detail & Related papers (2022-11-20T12:02:07Z)
- Interactive Feature Embedding for Infrared and Visible Image Fusion [94.77188069479155]
General deep learning-based methods for infrared and visible image fusion rely on unsupervised mechanisms for vital information retention.
We propose a novel interactive feature embedding in self-supervised learning framework for infrared and visible image fusion.
arXiv Detail & Related papers (2022-11-09T13:34:42Z)
- Robustness and invariance properties of image classifiers [8.970032486260695]
Deep neural networks have achieved impressive results in many image classification tasks.
Deep networks are not robust to a large variety of semantic-preserving image modifications.
The poor robustness of image classifiers to small data distribution shifts raises serious concerns regarding their trustworthiness.
arXiv Detail & Related papers (2022-08-30T11:00:59Z)
- Adversarially-Aware Robust Object Detector [85.10894272034135]
We propose a Robust Detector (RobustDet) based on adversarially-aware convolution to disentangle gradients for model learning on clean and adversarial images.
Our model effectively disentangles gradients and significantly enhances detection robustness while maintaining detection ability on clean images.
arXiv Detail & Related papers (2022-07-13T13:59:59Z)
- Contextual Fusion For Adversarial Robustness [0.0]
Deep neural networks are usually designed to process one particular information stream and are susceptible to various types of adversarial perturbations.
We developed a fusion model using a combination of background and foreground features extracted in parallel from Places-CNN and Imagenet-CNN.
For gradient-based attacks, our results show that fusion allows for significant improvements in classification without decreasing performance on unperturbed data.
arXiv Detail & Related papers (2020-11-18T20:13:23Z)
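The contextual-fusion idea above (combining features extracted in parallel from two networks) reduces, in its simplest form, to late fusion by concatenation. The sketch below is a minimal illustration with hypothetical names and shapes, not the paper's actual model (which draws background and foreground features from Places-CNN and Imagenet-CNN); it assumes the two streams' feature vectors are concatenated before a single linear head.

```python
import numpy as np

def late_fusion_logits(bg_feat, fg_feat, w, b):
    """Concatenate background and foreground feature vectors extracted
    in parallel, then score the fused vector with one linear head."""
    fused = np.concatenate([bg_feat, fg_feat], axis=-1)  # (n, d_bg + d_fg)
    return fused @ w + b
```

Because the head sees both streams at once, a perturbation that corrupts one stream can be partially compensated by the other, which is the intuition behind the reported robustness gains.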
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.