Diffusion models meet image counter-forensics
- URL: http://arxiv.org/abs/2311.13629v2
- Date: Mon, 15 Jan 2024 13:18:19 GMT
- Title: Diffusion models meet image counter-forensics
- Authors: Matías Tailanian, Marina Gardella, Álvaro Pardo, Pablo Musé
- Abstract summary: We show that diffusion purification methods are well suited for counter-forensics tasks.
Such approaches outperform existing counter-forensics techniques both in deceiving forensic methods and in preserving the natural look of the purified images.
- Score: 0.8192907805418583
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: From its acquisition by the camera sensor to its storage, a series of operations is performed to generate the final image. This pipeline imprints specific traces into the image, forming a natural watermark. Tampering with an image disturbs these traces, and these disruptions are the clues most forensic methods use to detect and locate forgeries. In this article, we assess the ability of diffusion models to erase the traces left by forgers and thereby deceive forensic methods. Such an approach was recently introduced for adversarial purification, achieving strong performance. We show that diffusion purification methods are well suited to counter-forensics tasks: they outperform existing counter-forensics techniques both in deceiving forensic methods and in preserving the natural look of the purified images. The source code is publicly available at https://github.com/mtailanian/diff-cf.
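For intuition, here is a minimal sketch of the diffusion purification idea the paper builds on (not the authors' exact pipeline; see the linked repository for that). The image is pushed part-way along the DDPM forward noising process and then denoised back to t = 0, so faint forensic traces are drowned by the injected noise and never re-synthesized by the generative model. The epsilon-prediction `denoiser` interface and schedule handling here are assumptions.

```python
import torch

def purify(x0, denoiser, betas, t_star):
    """Diffusion purification sketch: partially noise an image along the
    DDPM forward process, then run the reverse process back to t = 0.
    Faint traces (demosaicing patterns, JPEG grids, splicing residue) are
    destroyed by the noise and not re-created by the generative model.
    x0: (B, C, H, W) in [-1, 1]; denoiser(x_t, t) predicts the noise;
    betas: (T,) variance schedule; t_star: purification strength (< T)."""
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)

    # Forward process: jump straight to step t_star in closed form.
    noise = torch.randn_like(x0)
    x_t = alpha_bar[t_star].sqrt() * x0 + (1.0 - alpha_bar[t_star]).sqrt() * noise

    # Reverse process: plain DDPM ancestral sampling from t_star down to 0.
    for t in range(t_star, -1, -1):
        t_batch = torch.full((x0.shape[0],), t, device=x0.device, dtype=torch.long)
        eps = denoiser(x_t, t_batch)  # predicted noise at step t
        mean = (x_t - betas[t] / (1.0 - alpha_bar[t]).sqrt() * eps) / alphas[t].sqrt()
        x_t = mean + betas[t].sqrt() * torch.randn_like(x_t) if t > 0 else mean
    return x_t.clamp(-1.0, 1.0)
```

Larger values of `t_star` erase traces more thoroughly but drift further from the original content; the trade-off between deceiving detectors and preserving image fidelity is exactly what the paper evaluates.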
Related papers
- Shallow Diffuse: Robust and Invisible Watermarking through Low-Dimensional Subspaces in Diffusion Models [10.726987194250116]
We introduce Shallow Diffuse, a new watermarking technique that embeds robust and invisible watermarks into diffusion model outputs.
Our theoretical and empirical analyses show that Shallow Diffuse greatly enhances the consistency of data generation and the detectability of the watermark.
arXiv Detail & Related papers (2024-10-28T14:51:04Z)
- Back-in-Time Diffusion: Unsupervised Detection of Medical Deepfakes [3.2720947374803777]
We propose a novel anomaly detector for medical imagery based on diffusion models.
We show how reversing the diffusion process on a suspected image can be used to detect synthetic content.
Our method significantly outperforms other state-of-the-art unsupervised detectors, raising the AUC from 0.79 to 0.90 for injection and from 0.91 to 0.96 for removal.
arXiv Detail & Related papers (2024-07-21T13:58:43Z)
- Deep Image Restoration For Image Anti-Forensics [0.0]
JPEG compression, blurring and noising have long been used for anti-forensics.
They make it difficult to detect fake images and are used for data augmentation in training deep image forgery detection models.
Separate image forensics methods have also been developed to detect these traces.
In this study, we go one step further and improve the image quality after these operations with deep image restoration models.
arXiv Detail & Related papers (2024-05-04T20:49:06Z)
- Gaussian Shading: Provable Performance-Lossless Image Watermarking for Diffusion Models [71.13610023354967]
Copyright protection and inappropriate content generation pose challenges for the practical implementation of diffusion models.
We propose a diffusion model watermarking technique that is both performance-lossless and training-free.
arXiv Detail & Related papers (2024-04-07T13:30:10Z)
- Robustness of AI-Image Detectors: Fundamental Limits and Practical Attacks [47.04650443491879]
We analyze the robustness of various AI-image detectors including watermarking and deepfake detectors.
We show that watermarking methods are vulnerable to spoofing attacks where the attacker aims to have real images identified as watermarked ones.
arXiv Detail & Related papers (2023-09-29T18:30:29Z)
- DIAGNOSIS: Detecting Unauthorized Data Usages in Text-to-image Diffusion Models [79.71665540122498]
We propose a method for detecting unauthorized data usage by planting injected content into the protected dataset.
Specifically, we modify the protected images by adding unique content to them using stealthy image warping functions.
By analyzing whether a model has memorized the injected content, we can detect models that illegally used the unauthorized data.
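A minimal, hypothetical sketch of what such a stealthy warping "coating" could look like; this is an illustrative smooth random warp, not the paper's actual function:

```python
import torch
import torch.nn.functional as F

def stealthy_warp(img, grid_size=4, strength=0.01):
    """Illustrative 'coating': a smooth, low-magnitude random warp that is
    nearly invisible but, once memorized by a model trained on the coated
    data, can later be tested for. img: (B, C, H, W) in [0, 1]."""
    b, _, h, w = img.shape
    # Low-resolution random flow, upsampled into a smooth displacement field.
    flow = torch.randn(b, 2, grid_size, grid_size, device=img.device) * strength
    flow = F.interpolate(flow, size=(h, w), mode="bicubic", align_corners=False)
    # Identity sampling grid in [-1, 1], perturbed by the flow.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=img.device),
        torch.linspace(-1, 1, w, device=img.device),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    grid = base + flow.permute(0, 2, 3, 1)
    return F.grid_sample(img, grid, mode="bilinear", align_corners=False)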
arXiv Detail & Related papers (2023-07-06T16:27:39Z)
- Tree-Ring Watermarks: Fingerprints for Diffusion Images that are Invisible and Robust [55.91987293510401]
Watermarking the outputs of generative models is a crucial technique for tracing copyright and preventing potential harm from AI-generated content.
We introduce a novel technique called Tree-Ring Watermarking that robustly fingerprints diffusion model outputs.
Our watermark is semantically hidden in the image space and is far more robust than watermarking alternatives that are currently deployed.
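To make the idea concrete: the key is planted in the Fourier spectrum of the initial noise that seeds the sampler, and detection later inverts the sampler (e.g. via DDIM inversion) to recover that noise and check the same frequencies. A minimal, illustrative embedding step follows; the concentric-ring key construction here is made up for illustration.

```python
import torch

def embed_tree_ring(latent, radii, key_value=0.0):
    """Illustrative embedding of a ring-shaped key into the Fourier spectrum
    of the initial noise that seeds the diffusion sampler. Detection would
    invert the sampler back to this noise and check the same frequencies.
    latent: (C, H, W) Gaussian noise; radii: ring radii in frequency bins."""
    spec = torch.fft.fftshift(torch.fft.fft2(latent), dim=(-2, -1))
    h, w = latent.shape[-2:]
    yy, xx = torch.meshgrid(
        torch.arange(h) - h // 2, torch.arange(w) - w // 2, indexing="ij"
    )
    dist = (yy ** 2 + xx ** 2).float().sqrt()
    for r in radii:
        ring = (dist - r).abs() < 0.5        # one-bin-wide circle
        spec[..., ring] = key_value          # overwrite with the key
    spec = torch.fft.ifftshift(spec, dim=(-2, -1))
    return torch.fft.ifft2(spec).real        # back to a spatial noise tensor
```

Because the pattern lives in the sampler's seed rather than in pixel values, it survives image-space transformations that defeat post-hoc watermarks.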
arXiv Detail & Related papers (2023-05-31T17:00:31Z)
- DIRE for Diffusion-Generated Image Detection [128.95822613047298]
We propose a novel representation called DIffusion Reconstruction Error (DIRE).
DIRE measures the error between an input image and its reconstruction by a pre-trained diffusion model.
The key observation is that generated images are reconstructed far more faithfully than real ones, so DIRE can serve as a bridge to distinguish generated from real images.
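Since the representation is just the residual between an image and its diffusion reconstruction, a minimal sketch is short; the `reconstruct` callable (e.g. DDIM inversion followed by re-generation with a pre-trained model) is assumed:

```python
import torch

def dire(x, reconstruct):
    """DIffusion Reconstruction Error. reconstruct is an assumed callable
    mapping an image batch to its diffusion reconstruction (e.g. DDIM
    inversion followed by re-generation with a pre-trained model).
    x: (B, C, H, W). Returns the per-pixel error map."""
    with torch.no_grad():
        x_rec = reconstruct(x)
    # Generated images reconstruct well (small DIRE); real ones do not.
    return (x - x_rec).abs()
```

A lightweight binary classifier trained on these error maps then separates generated from real images.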
arXiv Detail & Related papers (2023-03-16T13:15:03Z)
- Diffusion Models for Adversarial Purification [69.1882221038846]
Adversarial purification refers to a class of defense methods that remove adversarial perturbations using a generative model.
We propose DiffPure that uses diffusion models for adversarial purification.
Our method achieves state-of-the-art results, outperforming current adversarial training and adversarial purification methods.
arXiv Detail & Related papers (2022-05-16T06:03:00Z)
- Detecting and Localizing Copy-Move and Image-Splicing Forgery [0.0]
We focus on methods to detect whether an image has been tampered with, using both deep learning and image transformation methods.
We then attempt to identify the tampered area of the image and predict the corresponding mask.
Based on the results, we provide suggestions and approaches for building a more robust framework to detect and localize forgeries.
arXiv Detail & Related papers (2022-02-08T01:14:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.