Invertible Image Dataset Protection
- URL: http://arxiv.org/abs/2112.14420v1
- Date: Wed, 29 Dec 2021 06:56:43 GMT
- Title: Invertible Image Dataset Protection
- Authors: Kejiang Chen, Xianhan Zeng, Qichao Ying, Sheng Li, Zhenxing Qian and
Xinpeng Zhang
- Abstract summary: We develop a reversible adversarial example generator (RAEG) that introduces slight changes to the images to fool traditional classification models.
RAEG protects the data against adversarial-defense preprocessing better than previous methods, while introducing only slight distortion.
- Score: 23.688878249633508
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning has achieved enormous success in various industrial
applications. Companies do not want their valuable data to be stolen by
malicious employees to train pirated models. Nor do they wish the data to be
analyzed by competitors after it is used online. We propose a novel solution for
dataset protection in this scenario by robustly and reversibly transforming the
images into adversarial images. We develop a reversible adversarial example
generator (RAEG) that introduces slight changes to the images to fool
traditional classification models. Even if malicious attackers train pirated
models on the defended versions of the protected images, RAEG can
significantly weaken the functionality of these models. Meanwhile, the
reversibility of RAEG ensures the performance of authorized models. Extensive
experiments demonstrate that RAEG protects the data against adversarial defenses
better than previous methods, while introducing only slight distortion.
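As a rough illustration of the protect-and-recover workflow described above (not the authors' RAEG architecture), the sketch below uses a RealNVP-style additive coupling layer in PyTorch: the forward pass adds a slight, learnable perturbation to the image, and the inverse pass lets an authorized party recover the original exactly. The adversarial training objective that makes the perturbation fool classifiers is omitted, and the class name, channel count, and 0.03 perturbation scale are illustrative assumptions.

```python
import torch
import torch.nn as nn


class AdditiveCoupling(nn.Module):
    """Exactly invertible additive coupling layer (RealNVP-style).

    A minimal conceptual sketch, not the RAEG network from the paper:
    forward() plays the role of the protector (slight distortion),
    inverse() plays the role of the authorized recovery.
    """

    def __init__(self, channels: int):
        super().__init__()
        half = channels // 2
        # Small conv net predicting the perturbation for one half of the
        # channels, conditioned on the other (untouched) half.
        self.net = nn.Sequential(
            nn.Conv2d(half, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, half, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = x.chunk(2, dim=1)
        y2 = x2 + 0.03 * self.net(x1)      # slight distortion (would be trained adversarially)
        return torch.cat([x1, y2], dim=1)

    def inverse(self, y: torch.Tensor) -> torch.Tensor:
        y1, y2 = y.chunk(2, dim=1)
        x2 = y2 - 0.03 * self.net(y1)      # exact recovery for authorized users
        return torch.cat([y1, x2], dim=1)


if __name__ == "__main__":
    layer = AdditiveCoupling(channels=4)            # even channel count keeps the split balanced
    x = torch.rand(1, 4, 32, 32)                    # stand-in for a batch of images
    protected = layer(x)                            # released, protected version
    recovered = layer.inverse(protected)            # authorized reconstruction
    print(torch.allclose(x, recovered, atol=1e-6))  # True: the mapping is exactly reversible
```

In the paper's setting, such an invertible mapping would be trained with a loss that pushes the protected images across surrogate classifiers' decision boundaries while bounding the visible distortion; the sketch only demonstrates the reversibility property.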
Related papers
- Risks When Sharing LoRA Fine-Tuned Diffusion Model Weights [0.10878040851638002]
We study the issue of privacy leakage of a fine-tuned diffusion model in a practical setting.
An adversary can generate images containing the same identities as the private images.
arXiv Detail & Related papers (2024-09-13T02:13:26Z)
- EnTruth: Enhancing the Traceability of Unauthorized Dataset Usage in Text-to-image Diffusion Models with Minimal and Robust Alterations [73.94175015918059]
We introduce a novel approach, EnTruth, which Enhances Traceability of unauthorized dataset usage.
By strategically incorporating template memorization, EnTruth can trigger specific behaviors in unauthorized models as evidence of infringement.
Our method is the first to investigate the positive application of memorization and use it for copyright protection, which turns a curse into a blessing.
arXiv Detail & Related papers (2024-06-20T02:02:44Z)
- IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Generative AI [52.90082445349903]
Diffusion-based image generation models can create artistic images that mimic the style of an artist or maliciously edit the original images for fake content.
Several attempts have been made to protect the original images from such unauthorized data usage by adding imperceptible perturbations.
In this work, we introduce a purification perturbation platform, named IMPRESS, to evaluate the effectiveness of imperceptible perturbations as a protective measure.
arXiv Detail & Related papers (2023-10-30T03:33:41Z)
- Protect Federated Learning Against Backdoor Attacks via Data-Free Trigger Generation [25.072791779134]
Federated Learning (FL) enables large-scale clients to collaboratively train a model without sharing their raw data.
Due to the lack of data auditing for untrusted clients, FL is vulnerable to poisoning attacks, especially backdoor attacks.
We propose a novel data-free, trigger-generation-based defense that exploits two characteristics of backdoor attacks.
arXiv Detail & Related papers (2023-08-22T10:16:12Z)
- Pelta: Shielding Transformers to Mitigate Evasion Attacks in Federated Learning [0.6445605125467573]
We introduce Pelta, a novel shielding mechanism leveraging trusted hardware.
We evaluate Pelta on a state-of-the-art ensemble model and demonstrate its effectiveness against the Self Attention Gradient adversarial attack.
arXiv Detail & Related papers (2023-08-08T16:22:44Z)
- Isolation and Induction: Training Robust Deep Neural Networks against Model Stealing Attacks [51.51023951695014]
Existing model stealing defenses add deceptive perturbations to the victim's posterior probabilities to mislead the attackers.
This paper proposes Isolation and Induction (InI), a novel and effective training framework for model stealing defenses.
In contrast to adding perturbations over model predictions, which harms benign accuracy, InI trains models to produce uninformative outputs against stealing queries.
arXiv Detail & Related papers (2023-08-02T05:54:01Z)
- Unlearnable Examples for Diffusion Models: Protect Data from Unauthorized Exploitation [25.55296442023984]
We propose a method, Unlearnable Diffusion Perturbation, to safeguard images from unauthorized exploitation.
This achievement holds significant importance in real-world scenarios, as it contributes to the protection of privacy and copyright against AI-generated content.
arXiv Detail & Related papers (2023-06-02T20:19:19Z)
- Dual Manifold Adversarial Robustness: Defense against Lp and non-Lp Adversarial Attacks [154.31827097264264]
Adversarial training is a popular defense strategy against attack threat models with bounded Lp norms.
We propose Dual Manifold Adversarial Training (DMAT) where adversarial perturbations in both latent and image spaces are used in robustifying the model.
Our DMAT improves performance on normal images and achieves robustness comparable to standard adversarial training against Lp attacks.
arXiv Detail & Related papers (2020-09-05T06:00:28Z)
- Defense for Black-box Attacks on Anti-spoofing Models by Self-Supervised Learning [71.17774313301753]
We explore the robustness of self-supervised learned high-level representations by using them in the defense against adversarial attacks.
Experimental results on the ASVspoof 2019 dataset demonstrate that high-level representations extracted by Mockingjay can prevent the transferability of adversarial examples.
arXiv Detail & Related papers (2020-06-05T03:03:06Z)
- Model Watermarking for Image Processing Networks [120.918532981871]
How to protect the intellectual property of deep models is a very important but seriously under-researched problem.
We propose the first model watermarking framework for protecting image processing models.
arXiv Detail & Related papers (2020-02-25T18:36:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.