Visual-Friendly Concept Protection via Selective Adversarial Perturbations
- URL: http://arxiv.org/abs/2408.08518v1
- Date: Fri, 16 Aug 2024 04:14:28 GMT
- Title: Visual-Friendly Concept Protection via Selective Adversarial Perturbations
- Authors: Xiaoyue Mi, Fan Tang, Juan Cao, Peng Li, Yang Liu
- Abstract summary: We propose the Visual-Friendly Concept Protection (VCPro) framework, which prioritizes the protection of key concepts chosen by the image owner.
To ensure these perturbations are as inconspicuous as possible, we introduce a relaxed optimization objective.
Experiments validate that VCPro achieves a better trade-off between the visibility of perturbations and protection effectiveness.
- Score: 23.780603071185197
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Personalized concept generation by tuning diffusion models with a few images raises potential legal and ethical concerns regarding privacy and intellectual property rights. Researchers attempt to prevent malicious personalization using adversarial perturbations. However, previous efforts have mainly focused on the effectiveness of protection while neglecting the visibility of perturbations. They utilize global adversarial perturbations, which introduce noticeable alterations to original images and significantly degrade visual quality. In this work, we propose the Visual-Friendly Concept Protection (VCPro) framework, which prioritizes the protection of key concepts chosen by the image owner through adversarial perturbations with lower perceptibility. To ensure these perturbations are as inconspicuous as possible, we introduce a relaxed optimization objective to identify the least perceptible yet effective adversarial perturbations, solved using the Lagrangian multiplier method. Qualitative and quantitative experiments validate that VCPro achieves a better trade-off between the visibility of perturbations and protection effectiveness, effectively prioritizing the protection of target concepts in images with less perceptible perturbations.
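As a rough illustration of the objective described above, the sketch below searches for a perturbation confined to an owner-chosen region and balances imperceptibility against a protection constraint via a Lagrangian multiplier. It is a minimal sketch under stated assumptions, not the paper's implementation: the binary mask, the L2 perceptibility proxy, the `protection_loss` callable, and all hyperparameters are illustrative.

```python
import torch

def protect_image(image, mask, protection_loss, tau=1.0, lr=0.01, lam_lr=0.1, steps=200):
    """Search for a small perturbation, restricted to a user-chosen region (mask),
    that keeps a protection loss above a threshold tau while staying as
    imperceptible as possible (Lagrangian relaxation of the constraint).

    image: (1, 3, H, W) tensor in [0, 1]
    mask:  (1, 1, H, W) tensor, 1 over the key concept, 0 elsewhere
    protection_loss: callable; large when personalization on the perturbed
                     image is disrupted (assumed differentiable)
    """
    delta = torch.zeros_like(image, requires_grad=True)   # perturbation to optimize
    lam = torch.tensor(1.0)                                # Lagrange multiplier
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        perturbed = (image + delta * mask).clamp(0.0, 1.0)
        percept = (delta * mask).pow(2).mean()             # perceptibility proxy (L2)
        violation = tau - protection_loss(perturbed)       # > 0 while under-protected
        loss = percept + lam * violation                   # relaxed objective

        opt.zero_grad()
        loss.backward()
        opt.step()

        # dual ascent: raise the multiplier while the protection constraint is violated
        with torch.no_grad():
            lam = torch.clamp(lam + lam_lr * violation.detach(), min=0.0)

    return (image + delta.detach() * mask).clamp(0.0, 1.0)
```

In practice the protection signal would come from attacking the personalization pipeline itself (for example, the denoising objective of a diffusion model tuned on the protected images); that machinery is abstracted behind `protection_loss` here.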
Related papers
- Privacy Protection in Personalized Diffusion Models via Targeted Cross-Attention Adversarial Attack [5.357486699062561]
We propose a novel and efficient adversarial attack method, Concept Protection by Selective Attention Manipulation (CoPSAM).
For this purpose, we carefully construct imperceptible noise that is added to clean samples to obtain their adversarial counterparts.
Experimental validation on a subset of the CelebA-HQ face image dataset demonstrates that our approach outperforms existing methods.
arXiv Detail & Related papers (2024-11-25T14:39:18Z)
- Imperceptible Protection against Style Imitation from Diffusion Models [9.548195579003897]
We introduce a visually improved protection method while preserving protection capability.
We devise a perceptual map to highlight areas sensitive to human eyes, guided by instance-aware refinement.
We also introduce difficulty-aware protection by predicting how difficult the artwork is to protect and dynamically adjusting the protection intensity.
arXiv Detail & Related papers (2024-03-28T09:21:00Z)
- SimAC: A Simple Anti-Customization Method for Protecting Face Privacy against Text-to-Image Synthesis of Diffusion Models [16.505593270720034]
We propose an adaptive greedy search for optimal time steps that seamlessly integrates with existing anti-customization methods.
Our approach significantly increases identity disruption, thereby protecting user privacy and copyright.
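As a loose illustration of how a greedy, per-iteration time-step choice could plug into an existing anti-customization attack, the snippet below probes a few candidate diffusion timesteps and keeps the most damaging one. The `loss_at_t` callable and the probing grid are assumptions made for illustration, not SimAC's actual selection rule.

```python
import torch

def pick_timestep(loss_at_t, candidate_steps=range(0, 1000, 100)):
    """Greedily choose the diffusion timestep where the anti-customization
    loss is currently largest; intended to be called once per attack iteration.

    loss_at_t: callable mapping a timestep (int) to a scalar loss tensor
    candidate_steps: timesteps to probe (a coarse grid keeps the search cheap)
    """
    with torch.no_grad():
        scores = {t: float(loss_at_t(t)) for t in candidate_steps}
    return max(scores, key=scores.get)
```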
arXiv Detail & Related papers (2023-12-13T03:04:22Z)
- IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Generative AI [52.90082445349903]
Diffusion-based image generation models can create artistic images that mimic the style of an artist or maliciously edit the original images for fake content.
Several attempts have been made to protect the original images from such unauthorized data usage by adding imperceptible perturbations.
In this work, we introduce a purification perturbation platform, named IMPRESS, to evaluate the effectiveness of imperceptible perturbations as a protective measure.
arXiv Detail & Related papers (2023-10-30T03:33:41Z)
- Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM-format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z)
- PRO-Face S: Privacy-preserving Reversible Obfuscation of Face Images via Secure Flow [69.78820726573935]
We name it PRO-Face S, short for Privacy-preserving Reversible Obfuscation of Face images via Secure flow-based model.
In the framework, an Invertible Neural Network (INN) is utilized to process the input image along with its pre-obfuscated form and generate a privacy-protected image that visually approximates the pre-obfuscated one.
arXiv Detail & Related papers (2023-07-18T10:55:54Z)
- Over-the-Air Federated Learning with Privacy Protection via Correlated Additive Perturbations [57.20885629270732]
We consider privacy aspects of wireless federated learning with Over-the-Air (OtA) transmission of gradient updates from multiple users/agents to an edge server.
Traditional perturbation-based methods provide privacy protection at the cost of training accuracy.
In this work, we aim to minimize both the privacy leakage to the adversary and the degradation of model accuracy at the edge server.
arXiv Detail & Related papers (2022-10-05T13:13:35Z)
- Minimum Noticeable Difference based Adversarial Privacy Preserving Image Generation [44.2692621807947]
We develop a framework to generate adversarial privacy preserving images that have minimum perceptual difference from the clean ones but are able to attack deep learning models.
To the best of our knowledge, this is the first work to explore quality-preserving adversarial image generation based on the MND concept for privacy preservation.
arXiv Detail & Related papers (2022-06-17T09:02:12Z)
- A Perceptual Distortion Reduction Framework for Adversarial Perturbation Generation [58.6157191438473]
We propose a perceptual distortion reduction framework to tackle this problem from two perspectives.
We propose a perceptual distortion constraint and add it to the objective function of the adversarial attack, jointly optimizing perceptual distortion and attack success rate.
arXiv Detail & Related papers (2021-05-01T15:08:10Z)
- Essential Features: Reducing the Attack Surface of Adversarial Perturbations with Robust Content-Aware Image Preprocessing [5.831840281853604]
Adversaries can fool machine learning models into making incorrect predictions by adding perturbations to an image.
One approach to defending against such perturbations is to apply image preprocessing functions to remove the effects of the perturbation.
We propose a novel image preprocessing technique called Essential Features that transforms the image into a robust feature space.
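The snippet below illustrates the general defense-by-preprocessing idea with a generic color-quantization plus median-filter transform; it is not the Essential Features transform itself, only a simple stand-in showing how low-amplitude perturbations can be discarded before inference.

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess_defense(image, bits=3, kernel=3):
    """Generic preprocessing defense: quantize colors and smooth locally so
    that low-amplitude adversarial perturbations are largely removed before
    the image reaches the downstream model.

    image: H x W x 3 uint8 array; bits: bits kept per channel; kernel: filter size
    """
    shift = 8 - bits
    quantized = (image >> shift) << shift                      # coarse color quantization
    return median_filter(quantized, size=(kernel, kernel, 1))  # local median smoothing
```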
arXiv Detail & Related papers (2020-12-03T04:40:51Z)
- Face Anti-Spoofing Via Disentangled Representation Learning [90.90512800361742]
Face anti-spoofing is crucial to security of face recognition systems.
We propose a novel perspective of face anti-spoofing that disentangles the liveness features and content features from images.
arXiv Detail & Related papers (2020-08-19T03:54:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.