PRIME: Protect Your Videos From Malicious Editing
- URL: http://arxiv.org/abs/2402.01239v1
- Date: Fri, 2 Feb 2024 09:07:00 GMT
- Title: PRIME: Protect Your Videos From Malicious Editing
- Authors: Guanlin Li, Shuai Yang, Jie Zhang, Tianwei Zhang
- Abstract summary: Open-source generative models have made it surprisingly easy to manipulate and edit photos and videos with just a few simple prompts.
We introduce our protection method, PRIME, to significantly reduce the time cost and improve the protection performance.
Our evaluation results indicate that PRIME requires only 8.3% of the GPU hours needed by the previous state-of-the-art method.
- Score: 21.38790858842751
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the development of generative models, the quality of generated content
keeps increasing. Recently, open-source models have made it surprisingly easy
to manipulate and edit photos and videos, with just a few simple prompts. While
these cutting-edge technologies have gained popularity, they have also given
rise to concerns regarding the privacy and portrait rights of individuals.
Malicious users can exploit these tools for deceptive or illegal purposes.
Although some previous works focus on protecting photos against generative
models, we find there are still gaps between protecting videos and images in
the aspects of efficiency and effectiveness. Therefore, we introduce our
protection method, PRIME, to significantly reduce the time cost and improve the
protection performance. Moreover, to evaluate our proposed protection method,
we consider both objective metrics and human subjective metrics. Our evaluation
results indicate that PRIME requires only 8.3% of the GPU hours needed by the
previous state-of-the-art method while achieving better protection results on
both human evaluation and objective metrics. Code can be found at
https://github.com/GuanlinLee/prime.
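The abstract above does not spell out PRIME's algorithm, but protection methods in this line of work typically add small, bounded adversarial perturbations to each frame so that a generative model's encoder misreads the content. Below is a minimal illustrative PyTorch sketch of that general idea, not PRIME's actual procedure; the `encoder` is a toy stand-in, and `protect_video`, `eps`, `alpha`, and `steps` are hypothetical names and settings:

```python
import torch
import torch.nn as nn

# Toy stand-in for a generative model's image encoder (NOT a model PRIME targets).
encoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.Flatten(),
)

def protect_video(frames, eps=8 / 255, alpha=2 / 255, steps=10):
    """Add an L-inf bounded perturbation to each frame so the encoder's
    latent drifts away from the clean latent (untargeted, PGD-style)."""
    protected = []
    for x in frames:  # x: (1, 3, H, W), values in [0, 1]
        with torch.no_grad():
            z_clean = encoder(x)
        # Random start inside the L-inf ball so the first gradient is nonzero.
        delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
        for _ in range(steps):
            loss = nn.functional.mse_loss(encoder(x + delta), z_clean)
            loss.backward()
            with torch.no_grad():
                delta += alpha * delta.grad.sign()  # ascend: push the latent away
                delta.clamp_(-eps, eps)
                delta.grad.zero_()
        protected.append((x + delta).clamp(0, 1).detach())
    return protected

clip = [torch.rand(1, 3, 64, 64) for _ in range(4)]  # dummy 4-frame clip
protected_clip = protect_video(clip)
```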
Related papers
- Edit Away and My Face Will not Stay: Personal Biometric Defense against Malicious Generative Editing [19.94455452402954]
FaceLock is a novel approach to portrait protection that optimizes adversarial perturbations to destroy or significantly alter biometric information.
Our work advances biometric defense and sets the foundation for privacy-preserving practices in image editing.
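As a rough illustration of the stated idea of optimizing perturbations to disrupt biometric information, here is a hedged PGD-style sketch; `face_embedder` is a toy stand-in rather than the recognizers FaceLock actually uses, and `biometric_cloak` is a hypothetical helper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-in for a face-embedding network (not FaceLock's recognizers).
face_embedder = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(8 * 16, 128),
)

def biometric_cloak(x, eps=8 / 255, alpha=1 / 255, steps=20):
    """Perturb a portrait so its embedding moves away from the clean
    embedding (cosine similarity is minimized) within an L-inf ball."""
    with torch.no_grad():
        e_clean = F.normalize(face_embedder(x), dim=-1)
    # Random start so the similarity gradient is nonzero at step one.
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        e_adv = F.normalize(face_embedder(x + delta), dim=-1)
        loss = F.cosine_similarity(e_adv, e_clean).mean()  # want this small
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # descend on similarity
            delta.clamp_(-eps, eps)
            delta.grad.zero_()
    return (x + delta).clamp(0, 1).detach()

portrait = torch.rand(1, 3, 64, 64)
protected = biometric_cloak(portrait)
```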
arXiv Detail & Related papers (2024-11-25T18:59:03Z)
- CopyrightMeter: Revisiting Copyright Protection in Text-to-image Models [30.618794027527695]
We develop CopyrightMeter, a unified evaluation framework that incorporates 17 state-of-the-art protections and 16 representative attacks.
Our analysis reveals several key findings: (i) most protections (16/17) are not resilient against attacks; (ii) the "best" protection varies depending on the target priority; (iii) more advanced attacks significantly spur the improvement of protections.
arXiv Detail & Related papers (2024-11-20T09:19:10Z)
- DiffusionGuard: A Robust Defense Against Malicious Diffusion-based Image Editing [93.45507533317405]
DiffusionGuard is a robust and effective defense method against unauthorized edits by diffusion-based image editing models.
We introduce a novel objective that generates adversarial noise targeting the early stage of the diffusion process.
We also introduce a mask-augmentation technique to enhance robustness against various masks during test time.
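A minimal sketch of the two stated ingredients, adversarial noise aimed at early diffusion timesteps plus random-mask augmentation, assuming a toy noise predictor `eps_theta` in place of a real diffusion model; this is an illustration, not DiffusionGuard's actual objective:

```python
import torch
import torch.nn as nn

# Toy stand-in for a diffusion model's noise predictor eps_theta(x_t, t).
class ToyEps(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 3, 3, padding=1)

    def forward(self, x_t, t):
        return self.net(x_t) * (1.0 + 0.01 * t)  # crude timestep conditioning

eps_theta = ToyEps()

def guard(x, eps=8 / 255, alpha=2 / 255, steps=10, early_t=(1, 2, 3)):
    """Craft a perturbation that inflates the predicted noise at EARLY
    timesteps, with a fresh random mask each step (mask augmentation)."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        mask = torch.ones_like(x)
        h0, w0 = torch.randint(0, 32, (2,)).tolist()
        mask[..., h0:h0 + 32, w0:w0 + 32] = 0.0  # region the editor would repaint
        loss = 0.0
        for t in early_t:  # only the early stage of the diffusion process
            x_t = mask * (x + delta) + (1 - mask) * torch.randn_like(x)
            loss = loss + eps_theta(x_t, t).pow(2).mean()
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # maximize the disruption
            delta.clamp_(-eps, eps)
            delta.grad.zero_()
    return (x + delta).clamp(0, 1).detach()

image = torch.rand(1, 3, 64, 64)
protected = guard(image)
```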
arXiv Detail & Related papers (2024-10-08T05:19:19Z)
- PixelFade: Privacy-preserving Person Re-identification with Noise-guided Progressive Replacement [41.05432008027312]
Online person re-identification services face privacy breaches from potential data leakage and recovery attacks.
Previous privacy-preserving person re-identification methods are unable to resist recovery attacks and also compromise accuracy.
We propose an iterative method, PixelFade, that protects pedestrian images via noise-guided progressive replacement.
arXiv Detail & Related papers (2024-08-10T12:52:54Z)
- Adversarial Perturbations Cannot Reliably Protect Artists From Generative AI [61.35083814817094]
Several protection tools against style mimicry have been developed that incorporate small adversarial perturbations into artworks published online.
We find that low-effort and "off-the-shelf" techniques, such as image upscaling, are sufficient to create robust mimicry methods that significantly degrade existing protections.
We caution that tools based on adversarial perturbations cannot reliably protect artists from the misuse of generative AI.
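The upscaling attack named above is easy to illustrate: rescaling an image washes out the high-frequency perturbations that protection tools embed. A minimal sketch (the paper evaluates several such purification techniques; `purify_by_rescaling` is a hypothetical name):

```python
import torch
import torch.nn.functional as F

def purify_by_rescaling(x, factor=0.5):
    """'Low-effort' purification: downscale then upscale with bicubic
    interpolation, suppressing the high-frequency adversarial signal
    that perturbation-based protections rely on."""
    h, w = x.shape[-2:]
    small = F.interpolate(x, scale_factor=factor, mode="bicubic", align_corners=False)
    return F.interpolate(small, size=(h, w), mode="bicubic", align_corners=False).clamp(0, 1)

artwork = torch.rand(1, 3, 256, 256)     # stands in for a "protected" artwork
purified = purify_by_rescaling(artwork)  # much of the perturbation is gone
```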
arXiv Detail & Related papers (2024-06-17T18:51:45Z)
- CLIP2Protect: Protecting Facial Privacy using Text-Guided Makeup via Adversarial Latent Search [10.16904417057085]
Deep learning based face recognition systems can enable unauthorized tracking of users in the digital world.
Existing methods for enhancing privacy fail to generate naturalistic images that can protect facial privacy without compromising user experience.
We propose a novel two-step approach for facial privacy protection that relies on finding adversarial latent codes in the low-dimensional manifold of a pretrained generative model.
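A minimal sketch of the stated latent-search idea: search a pretrained generator's latent space for a code that preserves appearance while hiding identity. Here `G` and `embed` are toy stand-ins, and the text-guided makeup component of the actual method is omitted:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins for a pretrained generator and a face recognizer.
G = nn.Sequential(nn.Linear(64, 3 * 32 * 32), nn.Sigmoid(), nn.Unflatten(1, (3, 32, 32)))
embed = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))

def latent_search(z0, x_ref, steps=50, lr=0.05, lam=1.0):
    """Search the latent space for a code whose image (i) stays close to
    the reference portrait but (ii) has a distant face embedding."""
    z = z0.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    e_ref = F.normalize(embed(x_ref), dim=-1).detach()
    for _ in range(steps):
        x = G(z)
        id_sim = F.cosine_similarity(F.normalize(embed(x), dim=-1), e_ref).mean()
        recon = F.mse_loss(x, x_ref)
        loss = id_sim + lam * recon  # hide the identity, keep the appearance
        opt.zero_grad()
        loss.backward()
        opt.step()
    return G(z).detach()

z0 = torch.randn(1, 64)
x_ref = torch.rand(1, 3, 32, 32)
protected = latent_search(z0, x_ref)
```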
arXiv Detail & Related papers (2023-06-16T17:58:15Z)
- DiffProtect: Generate Adversarial Examples with Diffusion Models for Facial Privacy Protection [64.77548539959501]
DiffProtect produces more natural-looking encrypted images than state-of-the-art methods.
It achieves significantly higher attack success rates, e.g., 24.5% and 25.1% absolute improvements on the CelebA-HQ and FFHQ datasets.
arXiv Detail & Related papers (2023-05-23T02:45:49Z)
- Scapegoat Generation for Privacy Protection from Deepfake [21.169776378130635]
We propose a new problem formulation for deepfake prevention: generating a "scapegoat image" by modifying the style of the original input.
Even in the case of a malicious deepfake, the user's privacy is still protected.
arXiv Detail & Related papers (2023-03-06T06:52:00Z)
- PrivHAR: Recognizing Human Actions From Privacy-preserving Lens [58.23806385216332]
We propose an optimization framework to provide robust visual privacy protection along the human action recognition pipeline.
Our framework parameterizes the camera lens to successfully degrade the quality of the videos to inhibit privacy attributes and protect against adversarial attacks.
arXiv Detail & Related papers (2022-06-08T13:43:29Z)
- OPOM: Customized Invisible Cloak towards Face Privacy Protection [58.07786010689529]
We investigate face privacy protection from a technology standpoint, based on a new type of customized cloak.
We propose a new method, named one person one mask (OPOM), to generate person-specific (class-wise) universal masks.
The effectiveness of the proposed method is evaluated on both common and celebrity datasets.
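A hedged sketch of the class-wise universal-mask idea: one shared perturbation is optimized over all photos of a single person, so the same mask can protect unseen photos of that identity. The `embedder` is a toy stand-in, and the plain signed-gradient accumulation here is an assumption, not OPOM's exact optimization:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

embedder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))  # toy recognizer

def universal_mask(images, eps=8 / 255, alpha=1 / 255, epochs=5):
    """One shared perturbation for ALL photos of one person: accumulate
    signed-gradient steps that lower each photo's similarity to its own
    clean embedding, so a single mask protects the whole album."""
    delta = torch.empty_like(images[0]).uniform_(-eps, eps)  # random start
    for _ in range(epochs):
        for x in images:
            d = delta.clone().requires_grad_(True)
            e_clean = F.normalize(embedder(x), dim=-1).detach()
            e_adv = F.normalize(embedder((x + d).clamp(0, 1)), dim=-1)
            F.cosine_similarity(e_adv, e_clean).mean().backward()
            delta = (delta - alpha * d.grad.sign()).clamp(-eps, eps)
    return delta

album = [torch.rand(1, 3, 32, 32) for _ in range(8)]  # one person's photos
mask = universal_mask(album)
cloaked = [(x + mask).clamp(0, 1) for x in album]     # same mask for every photo
```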
arXiv Detail & Related papers (2022-05-24T11:29:37Z)
- Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117]
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
TIP-IM provides a 95%+ protection success rate against various state-of-the-art face recognition models.
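For contrast with the untargeted sketches above, a targeted variant in the spirit of TIP-IM pushes the photo's embedding toward a chosen target identity rather than merely away from the owner's; `embedder` and `targeted_identity_mask` are illustrative stand-ins, not the paper's implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

embedder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))  # toy recognizer

def targeted_identity_mask(x, x_target, eps=8 / 255, alpha=1 / 255, steps=20):
    """Iteratively push the protected photo's embedding TOWARD a target
    identity, so recognizers match the wrong person instead of the owner."""
    e_tgt = F.normalize(embedder(x_target), dim=-1).detach()
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        e_adv = F.normalize(embedder(x + delta), dim=-1)
        loss = -F.cosine_similarity(e_adv, e_tgt).mean()  # raise target similarity
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.grad.zero_()
    return (x + delta).clamp(0, 1).detach()

x = torch.rand(1, 3, 32, 32)         # the user's photo
x_target = torch.rand(1, 3, 32, 32)  # a surrogate target face
masked = targeted_identity_mask(x, x_target)
```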
arXiv Detail & Related papers (2020-03-15T12:45:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.