Operation-wise Attention Network for Tampering Localization Fusion
- URL: http://arxiv.org/abs/2105.05515v2
- Date: Thu, 13 May 2021 10:01:46 GMT
- Title: Operation-wise Attention Network for Tampering Localization Fusion
- Authors: Polychronis Charitidis, Giorgos Kordopatis-Zilos, Symeon Papadopoulos,
Ioannis Kompatsiaris
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we present a deep learning-based approach for image tampering
localization fusion. This approach is designed to combine the outcomes of
multiple image forensics algorithms and provide a fused tampering localization
map, which requires no expert knowledge and is easier for end users to
interpret. Our fusion framework includes a set of five individual tampering
localization methods for splicing localization on JPEG images. The proposed
deep learning fusion model is an adapted architecture, initially proposed for
the image restoration task, that performs multiple operations in parallel,
weighted by an attention mechanism to enable the selection of proper operations
depending on the input signals. This weighting process can be very beneficial
for cases where the input signal is very diverse, as in our case where the
output signals of multiple image forensics algorithms are combined. Evaluation
on three publicly available forensics datasets demonstrates that the proposed
approach is competitive, outperforming the individual forensics techniques as
well as another recently proposed fusion framework in the majority of cases.
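The operation-wise attention idea described in the abstract — running several candidate operations in parallel and weighting their outputs by attention computed from the input signals — can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation: the choice of operations, the global descriptor, and the `attn_weights` matrix are hypothetical stand-ins for the learned components of the real model.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def operation_wise_attention_fusion(maps, ops, attn_weights):
    """Fuse K tampering localization maps (K, H, W) by running candidate
    operations in parallel and combining them with attention weights.

    maps:         (K, H, W) outputs of K individual forensics algorithms
    ops:          list of N candidate operations, each (H, W) -> (H, W)
    attn_weights: (K, N) matrix mapping a global input descriptor to
                  per-operation attention scores (learned in the real model)
    """
    x = maps.mean(axis=0)                       # aggregate input signal (illustrative)
    outputs = np.stack([op(x) for op in ops])   # run all operations in parallel: (N, H, W)
    desc = maps.mean(axis=(1, 2))               # global descriptor of the input maps: (K,)
    scores = desc @ attn_weights                # per-operation attention scores: (N,)
    alpha = softmax(scores)                     # one weight per operation, sums to 1
    fused = np.tensordot(alpha, outputs, axes=1)  # attention-weighted sum: (H, W)
    return fused, alpha

# toy example: five input maps (as in the paper's five methods), two toy operations
maps = np.random.rand(5, 16, 16)
ops = [lambda m: m, lambda m: np.clip(2.0 * m, 0.0, 1.0)]
attn_weights = np.random.rand(5, len(ops))  # hypothetical learned parameters
fused, alpha = operation_wise_attention_fusion(maps, ops, attn_weights)
```

Because the attention weights depend on the input descriptor, different combinations of operations are selected for different inputs — the property the abstract highlights as useful when the input signals (outputs of diverse forensics algorithms) vary widely.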
Related papers
- Fusion from Decomposition: A Self-Supervised Approach for Image Fusion and Beyond [74.96466744512992]
The essence of image fusion is to integrate complementary information from source images.
DeFusion++ produces versatile fused representations that can enhance the quality of image fusion and the effectiveness of downstream high-level vision tasks.
arXiv Detail & Related papers (2024-10-16T06:28:49Z)
- From Text to Pixels: A Context-Aware Semantic Synergy Solution for Infrared and Visible Image Fusion [66.33467192279514]
We introduce a text-guided multi-modality image fusion method that leverages the high-level semantics from textual descriptions to integrate semantics from infrared and visible images.
Our method not only produces visually superior fusion results but also achieves a higher detection mAP than existing methods, setting a new state of the art.
arXiv Detail & Related papers (2023-12-31T08:13:47Z) - Multi-scale Target-Aware Framework for Constrained Image Splicing
Detection and Localization [11.803255600587308]
We propose a multi-scale target-aware framework to couple feature extraction and correlation matching in a unified pipeline.
Our approach effectively promotes the collaborative learning of related patches, allowing feature learning and correlation matching to reinforce each other.
Our experiments demonstrate that our model, which uses a unified pipeline, outperforms state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2023-08-18T07:38:30Z) - PAIF: Perception-Aware Infrared-Visible Image Fusion for Attack-Tolerant
Semantic Segmentation [50.556961575275345]
We propose a perception-aware fusion framework to promote segmentation robustness in adversarial scenes.
We show that our scheme substantially enhances robustness, with a gain of 15.3% mIoU over advanced competitors.
arXiv Detail & Related papers (2023-08-08T01:55:44Z) - A Task-guided, Implicitly-searched and Meta-initialized Deep Model for
Image Fusion [69.10255211811007]
We present a Task-guided, Implicit-searched and Meta-initialized (TIM) deep model to address the image fusion problem in a challenging real-world scenario.
Specifically, we propose a constrained strategy to incorporate information from downstream tasks to guide the unsupervised learning process of image fusion.
Within this framework, we then design an implicit search scheme to automatically discover compact architectures for our fusion model with high efficiency.
arXiv Detail & Related papers (2023-05-25T08:54:08Z) - TriPINet: Tripartite Progressive Integration Network for Image
Manipulation Localization [3.7359400978194675]
We propose a tripartite progressive integration network (TriPINet) for end-to-end image manipulation localization.
We develop a guided cross-modality dual-attention (gCMDA) module to fuse different types of forged clues.
Extensive experiments are conducted to compare our method with state-of-the-art image forensics approaches.
arXiv Detail & Related papers (2022-12-25T02:27:58Z) - Unsupervised Image Fusion Method based on Feature Mutual Mapping [16.64607158983448]
We propose an unsupervised adaptive image fusion method to address the above issues.
We construct a global map to measure the connections of pixels between the input source images.
Our method achieves superior performance in both visual perception and objective evaluation.
arXiv Detail & Related papers (2022-01-25T07:50:14Z) - TransFuse: A Unified Transformer-based Image Fusion Framework using
Self-supervised Learning [5.849513679510834]
Image fusion is a technique to integrate information from multiple source images with complementary information to improve the richness of a single image.
Two-stage methods avoid the need for large amounts of task-specific training data by training an encoder-decoder network on large natural image datasets.
We propose a destruction-reconstruction based self-supervised training scheme to encourage the network to learn task-specific features.
arXiv Detail & Related papers (2022-01-19T07:30:44Z) - Deep Image Compositing [93.75358242750752]
We propose a new method which can automatically generate high-quality image composites without any user input.
Inspired by Laplacian pyramid blending, a densely connected multi-stream fusion network is proposed to effectively fuse information from the foreground and background images.
Experiments show that the proposed method can automatically generate high-quality composites and outperforms existing methods both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-11-04T06:12:24Z) - WaveFuse: A Unified Deep Framework for Image Fusion with Discrete
Wavelet Transform [8.164433158925593]
This is the first time a conventional image fusion method (the discrete wavelet transform) has been combined with deep learning.
The proposed algorithm exhibits better fusion performance in both subjective and objective evaluation.
arXiv Detail & Related papers (2020-07-28T10:30:47Z) - Learning Deformable Image Registration from Optimization: Perspective,
Modules, Bilevel Training and Beyond [62.730497582218284]
We develop a new deep learning-based framework to optimize a diffeomorphic model via multi-scale propagation.
We conduct two groups of image registration experiments on 3D volume datasets including image-to-atlas registration on brain MRI data and image-to-image registration on liver CT data.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.