SISL: Self-Supervised Image Signature Learning for Splicing Detection and
Localization
- URL: http://arxiv.org/abs/2203.07824v1
- Date: Tue, 15 Mar 2022 12:26:29 GMT
- Title: SISL: Self-Supervised Image Signature Learning for Splicing Detection and
Localization
- Authors: Susmit Agrawal, Prabhat Kumar, Siddharth Seth, Toufiq Parag, Maneesh
Singh, Venkatesh Babu
- Abstract summary: We propose a self-supervised approach for training splicing detection/localization models from frequency transforms of images.
Our proposed model can yield similar or better performance on standard datasets without relying on labels or metadata.
- Score: 11.437760125881049
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent algorithms for image manipulation detection almost exclusively use
deep network models. These approaches require either dense pixelwise
groundtruth masks, camera ids, or image metadata to train the networks. On one
hand, constructing a training set to represent the countless tampering
possibilities is impractical. On the other hand, social media platforms or
commercial applications are often constrained to remove camera ids as well as
metadata from images. A self-supervised algorithm for training manipulation
detection models without dense groundtruth or camera/image metadata would be
extremely useful for many forensics applications. In this paper, we propose a
self-supervised approach for training splicing detection/localization models
from frequency transforms of images. To identify spliced regions, our deep
network learns a representation that captures an image-specific signature by
enforcing (image) self-consistency. We experimentally demonstrate that our
proposed model can yield performance similar to or better than multiple existing
methods on standard datasets without relying on labels or metadata.
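The self-consistency idea above can be illustrated with a minimal numpy sketch: compute a frequency-domain signature per patch and flag patches whose signature disagrees with the rest of the image. This is a toy stand-in for the learned representation, not the paper's actual model; all function names and parameters here are illustrative.

```python
import numpy as np

def patch_signature(patch):
    """Log-scaled, L2-normalized magnitude spectrum of a patch."""
    feat = np.log1p(np.abs(np.fft.fft2(patch))).ravel()
    return feat / (np.linalg.norm(feat) + 1e-8)

def consistency_map(image, patch=16):
    """Cosine similarity of each patch signature to the image-wide mean
    signature. Low similarity suggests a patch inconsistent with the rest
    of the image (a splicing candidate)."""
    h, w = image.shape
    sigs, coords = [], []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            sigs.append(patch_signature(image[y:y + patch, x:x + patch]))
            coords.append((y, x))
    sigs = np.stack(sigs)
    mean_sig = sigs.mean(axis=0)
    mean_sig /= np.linalg.norm(mean_sig) + 1e-8
    return {c: float(s @ mean_sig) for c, s in zip(coords, sigs)}
```

On a smooth image with one foreign patch pasted in, the pasted patch tends to receive the lowest similarity score, which is the intuition behind using frequency signatures for localization.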
Related papers
- Transductive Learning for Near-Duplicate Image Detection in Scanned Photo Collections [0.0]
This paper presents a comparative study of near-duplicate image detection techniques in a real-world use case scenario.
We propose a transductive learning approach that leverages state-of-the-art deep learning architectures such as convolutional neural networks (CNNs) and Vision Transformers (ViTs).
The results show that the proposed approach outperforms the baseline methods in the task of near-duplicate image detection in the UKBench and an in-house private dataset.
arXiv Detail & Related papers (2024-10-25T09:56:15Z) - On the Effectiveness of Dataset Alignment for Fake Image Detection [28.68129042301801]
A good detector should focus on the generative model's fingerprints while ignoring image properties such as semantic content, resolution, and file format.
In this work, we argue that in addition to these algorithmic choices, we also require a well-aligned dataset of real/fake images to train a robust detector.
For the family of LDMs, we propose a very simple way to achieve this: we reconstruct all real images using the LDM's autoencoder, without any denoising operation, and then train a model to separate these real images from their reconstructions.
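The dataset-alignment recipe described here can be sketched in a few lines: pair every real image with its own lossy reconstruction, so the two classes differ only in reconstruction artifacts. In the paper's setting the reconstruction would be an LDM autoencoder round-trip; here a simple quantizer stands in for it, and all names are illustrative.

```python
import numpy as np

def toy_autoencoder(img, levels=8):
    """Stand-in for an LDM autoencoder round-trip (encode then decode,
    no denoising): uniform quantization plays the role of the lossy
    reconstruction. Purely illustrative, not the paper's code."""
    return np.round(img * levels) / levels

def build_aligned_dataset(real_images):
    """Pair each real image with its reconstruction; labels 0 = real,
    1 = fake. Because pairs share content, resolution, and format, a
    classifier trained on them cannot cheat on those properties."""
    xs, ys = [], []
    for img in real_images:
        xs.append(img)
        ys.append(0)
        xs.append(toy_autoencoder(img))
        ys.append(1)
    return np.stack(xs), np.array(ys)
```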
arXiv Detail & Related papers (2024-10-15T17:58:07Z) - DIAGNOSIS: Detecting Unauthorized Data Usages in Text-to-image Diffusion Models [79.71665540122498]
We propose a method for detecting unauthorized data usage by planting injected content into the protected dataset.
Specifically, we modify the protected images by adding unique content to them using stealthy image warping functions.
By analyzing whether a model has memorized the injected content, we can detect models that illegally used the unauthorized data.
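A minimal numpy sketch of one possible "stealthy warping" injection: displace each row by a small, image-wide sinusoidal offset. This is a hypothetical stand-in for the paper's warping functions; the parameters and resampling scheme are assumptions for illustration only.

```python
import numpy as np

def stealthy_warp(img, amp=2.0, freq=2.0):
    """Illustrative injected signal: shift each row horizontally by a small
    sinusoidal, row-dependent offset (nearest-neighbor circular shift).
    A model that memorizes warped training images would reproduce this
    pattern, which is what the detection step looks for."""
    h, w = img.shape
    ys = np.arange(h)
    shifts = np.round(amp * np.sin(2 * np.pi * freq * ys / h)).astype(int)
    out = np.empty_like(img)
    for y in range(h):
        out[y] = np.roll(img[y], shifts[y])
    return out
```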
arXiv Detail & Related papers (2023-07-06T16:27:39Z) - Learning to Detect Good Keypoints to Match Non-Rigid Objects in RGB
Images [7.428474910083337]
We present a novel learned keypoint detection method designed to maximize the number of correct matches for the task of non-rigid image correspondence.
Our training framework uses true correspondences, obtained by matching annotated image pairs with a predefined descriptor extractor, as ground truth to train a convolutional neural network (CNN).
Experiments show that our method outperforms the state-of-the-art keypoint detector on real images of non-rigid objects by 20 p.p. on Mean Matching Accuracy.
arXiv Detail & Related papers (2022-12-13T11:59:09Z) - Semantic decoupled representation learning for remote sensing image
change detection [17.548248093344576]
We propose a semantically decoupled representation learning method for remote sensing (RS) image change detection (CD).
We disentangle representations of different semantic regions by leveraging the semantic mask.
We additionally force the model to distinguish different semantic representations, which benefits the recognition of objects of interest in the downstream CD task.
arXiv Detail & Related papers (2022-01-15T07:35:26Z) - Meta Internal Learning [88.68276505511922]
Internal learning for single-image generation is a framework in which a generator is trained to produce novel images based on a single image.
We propose a meta-learning approach that enables training over a collection of images, in order to model the internal statistics of the sample image more effectively.
Our results show that the models obtained are as suitable as single-image GANs for many common image applications.
arXiv Detail & Related papers (2021-10-06T16:27:38Z) - AugNet: End-to-End Unsupervised Visual Representation Learning with
Image Augmentation [3.6790362352712873]
We propose AugNet, a new deep learning training paradigm to learn image features from a collection of unlabeled pictures.
Our experiments demonstrate that the method can represent images in a low-dimensional space.
Unlike many deep-learning-based image retrieval algorithms, our approach does not require access to external annotated datasets.
arXiv Detail & Related papers (2021-06-11T09:02:30Z) - Data Augmentation for Object Detection via Differentiable Neural
Rendering [71.00447761415388]
It is challenging to train a robust object detector when annotated data is scarce.
Existing approaches to this problem include semi-supervised learning, which interpolates labeled data from unlabeled data.
We introduce an offline data augmentation method for object detection, which semantically interpolates the training data with novel views.
arXiv Detail & Related papers (2021-03-04T06:31:06Z) - Unsupervised Metric Relocalization Using Transform Consistency Loss [66.19479868638925]
Training networks to perform metric relocalization traditionally requires accurate image correspondences.
We propose a self-supervised solution, which exploits a key insight: localizing a query image within a map should yield the same absolute pose, regardless of the reference image used for registration.
We evaluate our framework on synthetic and real-world data, showing our approach outperforms other supervised methods when a limited amount of ground-truth information is available.
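The key insight quoted above (the query's absolute pose should not depend on which reference image is used) can be written as a simple consistency loss over homogeneous transforms. The sketch below uses 4x4 matrices and illustrative names; the actual parameterization in the paper may differ.

```python
import numpy as np

def rotz(theta, t):
    """4x4 homogeneous transform: rotation about z by theta, translation t."""
    T = np.eye(4)
    c, s = np.cos(theta), np.sin(theta)
    T[:2, :2] = [[c, -s], [s, c]]
    T[:3, 3] = t
    return T

def consistency_loss(T_map_refA, T_refA_query, T_map_refB, T_refB_query):
    """Transform-consistency: composing map->refA with refA->query must give
    the same absolute query pose as composing map->refB with refB->query.
    The loss penalizes any disagreement between the two estimates."""
    pose_via_a = T_map_refA @ T_refA_query
    pose_via_b = T_map_refB @ T_refB_query
    return float(np.linalg.norm(pose_via_a - pose_via_b))
```

For consistent relative poses the loss is zero; perturbing either registration makes it positive, giving a self-supervised training signal without ground-truth poses.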
arXiv Detail & Related papers (2020-11-01T19:24:27Z) - Data-driven Meta-set Based Fine-Grained Visual Classification [61.083706396575295]
We propose a data-driven meta-set based approach to deal with noisy web images for fine-grained recognition.
Specifically, guided by a small amount of clean meta-set, we train a selection net in a meta-learning manner to distinguish in- and out-of-distribution noisy images.
arXiv Detail & Related papers (2020-08-06T03:04:16Z) - Swapping Autoencoder for Deep Image Manipulation [94.33114146172606]
We propose the Swapping Autoencoder, a deep model designed specifically for image manipulation.
The key idea is to encode an image with two independent components and enforce that any swapped combination maps to a realistic image.
Experiments on multiple datasets show that our model produces better results and is substantially more efficient compared to recent generative models.
arXiv Detail & Related papers (2020-07-01T17:59:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.