LOCKEY: A Novel Approach to Model Authentication and Deepfake Tracking
- URL: http://arxiv.org/abs/2409.07743v2
- Date: Sat, 21 Sep 2024 13:41:01 GMT
- Title: LOCKEY: A Novel Approach to Model Authentication and Deepfake Tracking
- Authors: Mayank Kumar Singh, Naoya Takahashi, Wei-Hsiang Liao, Yuki Mitsufuji
- Abstract summary: We present a novel approach to deter unauthorized deepfakes and enable user tracking in generative models.
Our method involves providing users with model parameters accompanied by a unique, user-specific key.
For user tracking, the model embeds the user's unique key as a watermark within the generated content.
- Score: 26.559909295466586
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a novel approach to deter unauthorized deepfakes and enable user tracking in generative models, even when the user has full access to the model parameters, by integrating key-based model authentication with watermarking techniques. Our method involves providing users with model parameters accompanied by a unique, user-specific key. During inference, the model is conditioned upon the key along with the standard input. A valid key results in the expected output, while an invalid key triggers a degraded output, thereby enforcing key-based model authentication. For user tracking, the model embeds the user's unique key as a watermark within the generated content, facilitating the identification of the user's ID. We demonstrate the effectiveness of our approach on two types of models, audio codecs and vocoders, utilizing the SilentCipher watermarking method. Additionally, we assess the robustness of the embedded watermarks against various distortions, validating their reliability in diverse scenarios.
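The key-gating contract in the abstract can be pictured with a toy sketch. Everything below is hypothetical: in LOCKEY the gating and watermarking behavior are learned end to end (with SilentCipher doing the embedding), whereas this toy simulates the contract explicitly with an additive perturbation, and the registry, key size, and synthesis stand-in are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical key registry: each licensed user receives a secret key vector
# together with the model parameters.
user_keys = {"user_001": rng.standard_normal(16),
             "user_002": rng.standard_normal(16)}

def generate(x, key):
    """Toy key-conditioned generator.

    LOCKEY trains the model so that a valid key yields the expected output
    and an invalid key a degraded one; this function only simulates that
    learned behavior for illustration.
    """
    clean = np.tanh(x)  # stand-in for the real audio synthesis
    if not any(np.allclose(key, k) for k in user_keys.values()):
        return clean + rng.standard_normal(clean.shape)  # degraded output
    # Embed the user's key as a low-amplitude additive watermark
    # (SilentCipher embeds it imperceptibly; this is only a toy scheme).
    return clean + 1e-3 * np.resize(key, clean.shape)

out = generate(rng.standard_normal(1024), user_keys["user_001"])
```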
Related papers
- CBW: Towards Dataset Ownership Verification for Speaker Verification via Clustering-based Backdoor Watermarking [85.68235482145091]
Large-scale speech datasets have become valuable intellectual property.
We propose a novel dataset ownership verification method.
Our approach introduces a clustering-based backdoor watermark (CBW).
We conduct extensive experiments on benchmark datasets, verifying the effectiveness and robustness of our method against potential adaptive attacks.
arXiv Detail & Related papers (2025-03-02T02:02:57Z) - A Review of Several Keystroke Dynamics Methods [0.0]
Keystroke dynamics is a behavioral biometric that captures an individual's typing patterns for authentication and security applications.
This paper presents a comparative analysis of keystroke authentication models using Gaussian Mixture Models (GMM), Mahalanobis Distance-based Classification, and Gunetti Picardi's Distance Metrics.
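As a concrete example of the distance-based variant, here is a minimal Mahalanobis-distance authenticator over keystroke timing vectors; the feature layout, covariance regularizer, and threshold are assumptions for the sketch, not the paper's settings.

```python
import numpy as np

def enroll(samples):
    """Fit a per-user Gaussian from enrollment keystroke timing vectors."""
    s = np.asarray(samples, dtype=float)
    mu = s.mean(axis=0)
    # Regularize the covariance so it stays invertible with few samples.
    cov = np.cov(s, rowvar=False) + 1e-6 * np.eye(s.shape[1])
    return mu, np.linalg.inv(cov)

def mahalanobis(x, mu, cov_inv):
    d = np.asarray(x, dtype=float) - mu
    return float(np.sqrt(d @ cov_inv @ d))

def authenticate(x, mu, cov_inv, threshold=3.0):
    """Accept a login attempt whose distance falls under a tuned threshold."""
    return mahalanobis(x, mu, cov_inv) <= threshold
```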
arXiv Detail & Related papers (2025-02-22T10:45:44Z) - Sample Correlation for Fingerprinting Deep Face Recognition [83.53005932513156]
We propose a novel model stealing detection method based on SAmple Correlation (SAC).
SAC successfully defends against various model stealing attacks in deep face recognition, encompassing face verification and face emotion recognition, exhibiting the highest performance in terms of AUC, p-value and F1 score.
We extend our evaluation of SAC-JC to object recognition including Tiny-ImageNet and CIFAR10, which also demonstrates the superior performance of SAC-JC to previous methods.
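The core SAC intuition, that a stolen model preserves the victim's pairwise output correlations on a probe set, can be sketched as below; the aggregation and the way a threshold would be calibrated are simplified assumptions.

```python
import numpy as np

def pairwise_correlation(outputs):
    """Cosine correlation between every pair of samples' output vectors."""
    o = np.asarray(outputs, dtype=float)
    o = o - o.mean(axis=1, keepdims=True)
    o = o / (np.linalg.norm(o, axis=1, keepdims=True) + 1e-12)
    return o @ o.T

def correlation_gap(victim_outputs, suspect_outputs):
    """Mean absolute gap between the two sample-correlation matrices.

    A small gap suggests the suspect inherited the victim's pairwise
    structure; the decision threshold would be calibrated on models
    known to be trained independently.
    """
    gap = pairwise_correlation(victim_outputs) - pairwise_correlation(suspect_outputs)
    return float(np.abs(gap).mean())
```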
arXiv Detail & Related papers (2024-12-30T07:37:06Z) - Towards Effective User Attribution for Latent Diffusion Models via Watermark-Informed Blending [54.26862913139299]
We introduce a novel framework, Towards Effective user Attribution for latent diffusion models via Watermark-Informed Blending (TEAWIB).
TEAWIB incorporates a unique ready-to-use configuration approach that allows seamless integration of user-specific watermarks into generative models.
Experiments validate the effectiveness of TEAWIB, showcasing the state-of-the-art performance in perceptual quality and attribution accuracy.
arXiv Detail & Related papers (2024-09-17T07:52:09Z) - A Watermark-Conditioned Diffusion Model for IP Protection [31.969286898467985]
We propose a unified watermarking framework for content copyright protection within the context of diffusion models.
To this end, we propose a Watermark-conditioned Diffusion model called WaDiff.
Our method is effective and robust in both the detection and owner identification tasks.
arXiv Detail & Related papers (2024-03-16T11:08:15Z) - TokenMark: A Modality-Agnostic Watermark for Pre-trained Transformers [67.57928750537185]
TokenMark is a robust, modality-agnostic watermarking system for pre-trained models.
It embeds the watermark by fine-tuning the pre-trained model on a set of specifically permuted data samples.
It significantly improves the robustness, efficiency, and universality of model watermarking.
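The permutation trigger can be pictured as follows; the key-derived seed and the token handling are illustrative assumptions, not TokenMark's exact recipe.

```python
import numpy as np

def permute_tokens(tokens, key_seed):
    """Apply a key-derived permutation to a token sequence.

    Fine-tuning the pre-trained model on such permuted samples ties the
    watermark to the secret permutation; `key_seed` stands in for the
    owner's key.
    """
    rng = np.random.default_rng(key_seed)
    perm = rng.permutation(len(tokens))
    return [tokens[i] for i in perm]

trigger = permute_tokens(["the", "cat", "sat", "on", "the", "mat"], key_seed=42)
```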
arXiv Detail & Related papers (2024-03-09T08:54:52Z) - Probabilistically Robust Watermarking of Neural Networks [4.332441337407564]
We introduce a novel trigger set-based watermarking approach that demonstrates resilience against functionality stealing attacks.
Our approach does not require additional model training and can be applied to any model architecture.
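Trigger-set verification itself is simple to state; the sketch below assumes a black-box classifier interface and a pre-agreed agreement threshold, both of which are placeholders rather than the paper's protocol.

```python
import numpy as np

def trigger_agreement(model_fn, trigger_inputs, trigger_labels):
    """Fraction of secret trigger inputs the suspect model labels as expected."""
    preds = [model_fn(x) for x in trigger_inputs]
    return float(np.mean([p == y for p, y in zip(preds, trigger_labels)]))

def verify_ownership(model_fn, trigger_inputs, trigger_labels, threshold=0.9):
    """Claim ownership when agreement far exceeds what an independent
    model could plausibly reach by chance."""
    return trigger_agreement(model_fn, trigger_inputs, trigger_labels) >= threshold
```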
arXiv Detail & Related papers (2024-01-16T10:32:13Z) - Who Leaked the Model? Tracking IP Infringers in Accountable Federated Learning [51.26221422507554]
Federated learning (FL) is an effective collaborative learning framework to coordinate data and computation resources from massive and distributed clients in training.
Such collaboration results in non-trivial intellectual property (IP), embodied in the model parameters, that should be protected and shared by all participating clients rather than owned by any individual user.
To block such IP leakage, it is essential to make the IP identifiable in the shared model and locate the anonymous infringer who first leaks it.
We propose Decodable Unique Watermarking (DUW) to meet the requirements of accountable FL.
arXiv Detail & Related papers (2023-12-06T00:47:55Z) - Towards Robust Model Watermark via Reducing Parametric Vulnerability [57.66709830576457]
Backdoor-based ownership verification has recently become popular; it allows the model owner to watermark the model.
We propose a mini-max formulation to find these watermark-removed models and recover their watermark behavior.
Our method improves the robustness of the model watermarking against parametric changes and numerous watermark-removal attacks.
arXiv Detail & Related papers (2023-09-09T12:46:08Z) - Safe and Robust Watermark Injection with a Single OoD Image [90.71804273115585]
Training a high-performance deep neural network requires large amounts of data and computational resources.
We propose a safe and robust backdoor-based watermark injection technique.
We induce random perturbation of model parameters during watermark injection to defend against common watermark removal attacks.
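The perturbation step can be sketched in isolation; the noise scale and per-tensor scheme below are assumptions, not the paper's configuration.

```python
import numpy as np

def perturb_parameters(params, sigma=0.01, rng=None):
    """Add Gaussian noise to each parameter tensor before a watermark step.

    Injecting the watermark under random parameter perturbations pushes it
    away from brittle minima, so it is likelier to survive later
    fine-tuning or pruning; `sigma` is a tunable assumption.
    """
    rng = rng or np.random.default_rng()
    return [p + rng.normal(0.0, sigma, size=p.shape) for p in params]
```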
arXiv Detail & Related papers (2023-09-04T19:58:35Z) - Watermarking for Out-of-distribution Detection [76.20630986010114]
Out-of-distribution (OOD) detection aims to identify OOD data based on representations extracted from well-trained deep models.
We propose a general methodology named watermarking in this paper.
We learn a unified pattern that is superimposed onto features of original data, and the model's detection capability is largely boosted after watermarking.
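At test time the superimposed-pattern idea reduces to a small wrapper around the scoring function; the blend weight and the max-softmax score used here are illustrative choices, not the paper's.

```python
import numpy as np

def max_softmax(logits):
    """A common OOD score: confidence of the most likely class."""
    z = np.exp(logits - logits.max())
    return float((z / z.sum()).max())

def watermarked_score(logit_fn, x, pattern, alpha=0.1):
    """Superimpose a learned static pattern onto the input before scoring.

    `pattern` would be optimized so in-distribution inputs score higher
    after watermarking while OOD inputs do not; `alpha` is an assumption.
    """
    return max_softmax(logit_fn(x + alpha * pattern))
```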
arXiv Detail & Related papers (2022-10-27T06:12:32Z) - DynaMarks: Defending Against Deep Learning Model Extraction Using Dynamic Watermarking [3.282282297279473]
The functionality of a deep learning (DL) model can be stolen via model extraction.
We propose a novel watermarking technique called DynaMarks to protect the intellectual property (IP) of DL models.
arXiv Detail & Related papers (2022-07-27T06:49:39Z) - Non-Transferable Learning: A New Approach for Model Verification and Authorization [7.686781778077341]
There are two common protection methods: ownership verification and usage authorization.
We propose Non-Transferable Learning (NTL), a novel approach that captures the exclusive data representation in the learned model.
Our NTL-based authorization approach provides data-centric usage protection by significantly degrading the performance of usage on unauthorized data.
arXiv Detail & Related papers (2021-06-13T04:57:16Z) - Decentralized Attribution of Generative Models [35.80513184958743]
Decentralized attribution relies on binary classifiers associated with each user-end model.
We develop sufficient conditions of the keys that guarantee an attributability lower bound.
Our method is validated on MNIST, CelebA, and FFHQ datasets.
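Attribution at test time then reduces to querying each user's classifier; the interface below is a hypothetical sketch of that lookup, not the paper's implementation.

```python
def attribute(sample, classifiers):
    """Return the user whose binary classifier uniquely claims the sample.

    Each user-end model ships with a classifier trained to fire only on
    its own outputs; attribution is unambiguous when exactly one fires.
    """
    hits = [uid for uid, clf in classifiers.items() if clf(sample)]
    return hits[0] if len(hits) == 1 else None
```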
arXiv Detail & Related papers (2020-10-27T01:03:45Z) - Model Watermarking for Image Processing Networks [120.918532981871]
How to protect the intellectual property of deep models is a very important but seriously under-researched problem.
We propose the first model watermarking framework for protecting image processing models.
arXiv Detail & Related papers (2020-02-25T18:36:18Z)