Learning to Immunize Images for Tamper Localization and Self-Recovery
- URL: http://arxiv.org/abs/2210.15902v1
- Date: Fri, 28 Oct 2022 05:16:56 GMT
- Title: Learning to Immunize Images for Tamper Localization and Self-Recovery
- Authors: Qichao Ying, Hang Zhou, Zhenxing Qian, Sheng Li, Xinpeng Zhang
- Abstract summary: Image immunization (Imuge) is a technique that protects images by introducing trivial perturbations.
This paper presents Imuge+, an enhanced scheme for image immunization.
- Score: 30.185576617722713
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Digital images are vulnerable to nefarious tampering attacks such as content
addition or removal that severely alter their original meaning. An unprotected
image is somewhat like a person without immunity, exposed to various kinds of
viruses. Image immunization (Imuge) is a technique that protects images by
introducing trivial perturbations, so that the protected images become immune to
such attacks: the tampered contents can be automatically recovered. This paper
presents Imuge+, an enhanced scheme for image immunization. Observing the
invertible relationship between image immunization and the corresponding
self-recovery, we employ an invertible neural network to jointly learn image
immunization in the forward pass and recovery in the backward pass. We
also introduce an efficient attack layer that simulates both malicious tampering
and benign image post-processing, where a novel distillation-based JPEG
simulator is proposed for improved JPEG robustness. Our method achieves
promising results in real-world tests where experiments show accurate tamper
localization as well as high-fidelity content recovery. Additionally, we show
superior performance on tamper localization compared to state-of-the-art
schemes based on passive forensics.
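The two core ideas in the abstract can be illustrated concretely. The following is a minimal, hypothetical sketch (not the authors' implementation): the additive coupling step, the tiny `net` stand-in, and the `soft_round` temperature `tau` are all assumptions chosen for illustration. It shows why a single invertible network can serve both directions (forward pass immunizes, exact inverse recovers), and how a smooth rounding function lets quantization, the lossy core of JPEG, be trained through.

```python
import numpy as np

# Hypothetical sketch of two ideas from the abstract (not the authors' code):
# 1) an additive coupling step -- the building block of invertible neural
#    networks -- whose forward pass "immunizes" and whose exact inverse recovers;
# 2) a differentiable soft-rounding quantizer, the ingredient a JPEG simulator
#    needs so quantization can be backpropagated through during training.

def coupling_forward(x1, x2, net):
    """Forward pass: add a perturbation to x2 conditioned on x1 (immunize)."""
    return x1, x2 + net(x1)

def coupling_inverse(y1, y2, net):
    """Backward pass: subtract the same conditioned perturbation (recover)."""
    return y1, y2 - net(y1)

def soft_round(x, tau=0.5):
    """Smooth approximation of round(); smaller tau gives a sharper step."""
    f = np.floor(x)
    r = x - f - 0.5
    return f + 0.5 * np.tanh(r / tau) / np.tanh(0.5 / tau) + 0.5

rng = np.random.default_rng(0)
x1 = rng.standard_normal((4, 4))   # conditioning half of the image tensor
x2 = rng.standard_normal((4, 4))   # half that receives the perturbation
net = lambda t: 0.1 * np.tanh(t)   # stand-in for a learned subnetwork

y1, y2 = coupling_forward(x1, x2, net)   # "immunized" representation
r1, r2 = coupling_inverse(y1, y2, net)   # exact recovery, no information loss
print(np.allclose(r2, x2))               # True: the inverse is exact
```

In Imuge+ the coupling subnetworks are trained jointly with the attack layer, so the embedded perturbation also carries the redundancy needed to localize and repair tampered regions; the sketch only demonstrates the exact invertibility that motivates sharing one network between immunization and recovery.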
Related papers
- StealthDiffusion: Towards Evading Diffusion Forensic Detection through Diffusion Model [62.25424831998405]
StealthDiffusion is a framework that modifies AI-generated images into high-quality, imperceptible adversarial examples.
It is effective in both white-box and black-box settings.
arXiv Detail & Related papers (2024-08-11T01:22:29Z) - Transferable Learned Image Compression-Resistant Adversarial Perturbations [66.46470251521947]
Adversarial attacks can readily disrupt the image classification system, revealing the vulnerability of DNN-based recognition tasks.
We introduce a new pipeline that targets image classification models that utilize learned image compressors as pre-processing modules.
arXiv Detail & Related papers (2024-01-06T03:03:28Z) - IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Generative AI [52.90082445349903]
Diffusion-based image generation models can create artistic images that mimic the style of an artist or maliciously edit the original images for fake content.
Several attempts have been made to protect the original images from such unauthorized data usage by adding imperceptible perturbations.
In this work, we introduce a purification perturbation platform, named IMPRESS, to evaluate the effectiveness of imperceptible perturbations as a protective measure.
arXiv Detail & Related papers (2023-10-30T03:33:41Z) - IRAD: Implicit Representation-driven Image Resampling against Adversarial Attacks [16.577595936609665]
We introduce a novel approach to counter adversarial attacks, namely, image resampling.
Image resampling transforms a discrete image into a new one, simulating the process of scene recapturing or rerendering as specified by a geometrical transformation.
We show that our method significantly enhances the adversarial robustness of diverse deep models against various attacks while maintaining high accuracy on clean images.
arXiv Detail & Related papers (2023-10-18T11:19:32Z) - PRO-Face S: Privacy-preserving Reversible Obfuscation of Face Images via Secure Flow [69.78820726573935]
We name it PRO-Face S, short for Privacy-preserving Reversible Obfuscation of Face images via Secure flow-based model.
In the framework, an Invertible Neural Network (INN) is utilized to process the input image along with its pre-obfuscated form, and to generate a privacy-protected image that visually approximates the pre-obfuscated one.
arXiv Detail & Related papers (2023-07-18T10:55:54Z) - Content-based Unrestricted Adversarial Attack [53.181920529225906]
We propose a novel unrestricted attack framework called Content-based Unrestricted Adversarial Attack.
By leveraging a low-dimensional manifold that represents natural images, we map the images onto the manifold and optimize them along its adversarial direction.
arXiv Detail & Related papers (2023-05-18T02:57:43Z) - Robust Image Protection Countering Cropping Manipulation [30.185576617722713]
This paper presents a novel robust watermarking scheme for image Cropping localization and Recovery (CLR-Net).
We first protect the original image by introducing imperceptible perturbations. Then, typical image post-processing attacks are simulated to erode the protected image.
On the recipient's side, we predict the cropping mask and recover the original image.
arXiv Detail & Related papers (2022-06-06T07:26:29Z) - Image-to-Image Regression with Distribution-Free Uncertainty Quantification and Applications in Imaging [88.20869695803631]
We show how to derive uncertainty intervals around each pixel that are guaranteed to contain the true value.
We evaluate our procedure on three image-to-image regression tasks.
arXiv Detail & Related papers (2022-02-10T18:59:56Z) - From Image to Imuge: Immunized Image Generation [23.430377385327308]
Imuge is an image tamper resilient generative scheme for image self-recovery.
We jointly train a U-Net backboned encoder, a tamper localization network and a decoder for image recovery.
We demonstrate that our method can recover the details of the tampered regions with a high quality despite the presence of various kinds of attacks.
arXiv Detail & Related papers (2021-10-27T05:56:15Z) - Hiding Images into Images with Real-world Robustness [21.328984859163956]
We introduce a generative network based method for hiding images into images while assuring high-quality extraction.
An embedding network is sequentially connected with an attack layer, a decoupling network, and an image extraction network.
We are the first to robustly hide three secret images.
arXiv Detail & Related papers (2021-10-12T02:20:34Z) - Image Transformation Network for Privacy-Preserving Deep Neural Networks and Its Security Evaluation [17.134566958534634]
We propose a transformation network for generating visually-protected images for privacy-preserving DNNs.
The proposed network not only strongly protects visual information but also maintains the image classification accuracy achieved with plain images.
arXiv Detail & Related papers (2020-08-07T12:58:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.