VeriDIP: Verifying Ownership of Deep Neural Networks through Privacy Leakage Fingerprints
- URL: http://arxiv.org/abs/2310.10656v1
- Date: Thu, 7 Sep 2023 01:58:12 GMT
- Title: VeriDIP: Verifying Ownership of Deep Neural Networks through Privacy Leakage Fingerprints
- Authors: Aoting Hu, Zhigang Lu, Renjie Xie, Minhui Xue
- Abstract summary: Deploying Machine Learning as a Service gives rise to model plagiarism, leading to copyright infringement.
We propose a novel ownership testing method called VeriDIP, which verifies a model's intellectual property.
- Score: 16.564206424838485
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deploying Machine Learning as a Service gives rise to model plagiarism,
leading to copyright infringement. Ownership testing techniques are designed to
identify model fingerprints for verifying plagiarism. However, previous works
often rely on overfitting or robustness features as fingerprints, which lack
theoretical guarantees and perform poorly on well-generalized models.
In this paper, we propose a novel ownership testing method called VeriDIP,
which verifies a DNN model's intellectual property. VeriDIP makes two major
contributions. (1) It utilizes membership inference attacks to estimate the
lower bound of privacy leakage, which reflects the fingerprint of a given
model. The privacy leakage fingerprints highlight the unique patterns through
which the models memorize sensitive training datasets. (2) It introduces a novel
approach that uses less private samples, i.e., training samples whose membership
leaks most strongly, to enhance the performance of ownership testing.
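
The sketch below illustrates the spirit of contribution (1): a simple loss-threshold membership inference attack whose advantage (true-positive rate minus false-positive rate) serves as an empirical estimate of privacy leakage and hence as an ownership fingerprint. This is a minimal illustration under assumed interfaces (a scikit-learn-style `predict_proba`) and placeholder thresholds, not the authors' implementation; all function names are hypothetical.

```python
import numpy as np

def per_sample_loss(model, X, y):
    # Cross-entropy loss of each sample under the suspect model.
    # `model.predict_proba` is an assumed scikit-learn-style API.
    probs = model.predict_proba(X)
    return -np.log(probs[np.arange(len(y)), y] + 1e-12)

def mia_advantage(model, members, non_members, threshold):
    # Advantage (TPR - FPR) of a loss-threshold membership inference attack:
    # predict "member" whenever a sample's loss falls below `threshold`.
    # The advantage acts as an empirical proxy for a lower bound on the
    # model's privacy leakage, i.e., the fingerprint.
    X_m, y_m = members
    X_n, y_n = non_members
    tpr = float(np.mean(per_sample_loss(model, X_m, y_m) < threshold))
    fpr = float(np.mean(per_sample_loss(model, X_n, y_n) < threshold))
    return tpr - fpr

def verify_ownership(suspect_model, members, non_members,
                     threshold=0.5, decision_margin=0.1):
    # Flag the suspect model as likely derived from the owner's data when its
    # privacy-leakage fingerprint on the owner's training set exceeds a margin.
    # `threshold` and `decision_margin` are illustrative placeholders.
    fingerprint = mia_advantage(suspect_model, members, non_members, threshold)
    return fingerprint, fingerprint > decision_margin
```

In a full ownership test, the fingerprint measured on the suspect model would be compared against fingerprints of independently trained models so that the decision carries statistical meaning rather than relying on a fixed margin.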
Extensive experimental results confirm that VeriDIP is effective and
efficient in validating the ownership of deep learning models trained on both
image and tabular datasets. VeriDIP achieves comparable performance to
state-of-the-art methods on image datasets while significantly reducing
computation and communication costs. Enhanced VeriDIP demonstrates superior
verification performance on well-generalized deep learning models, particularly
those trained on tabular data. Additionally, VeriDIP remains similarly effective
on utility-preserving differentially private models as on non-differentially
private baselines.
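
As a rough sketch of the "less private samples" enhancement, the snippet below ranks the owner's training samples by how strongly their membership leaks under the owner's own model and keeps only the top-k for the ownership test; concentrating the test on these samples is the mechanism the abstract credits for the gains on well-generalized (e.g., tabular) models. The ranking rule, function names, and `predict_proba` interface are assumptions for illustration only.

```python
import numpy as np

def membership_leakage_scores(owner_model, members, non_members):
    # Score each of the owner's training samples by how strongly its membership
    # leaks: the gap between the average non-member loss and the sample's own
    # loss under the owner's model. A larger gap means a "less private" sample.
    X_m, y_m = members
    X_n, y_n = non_members
    loss = lambda X, y: -np.log(
        owner_model.predict_proba(X)[np.arange(len(y)), y] + 1e-12)
    return loss(X_n, y_n).mean() - loss(X_m, y_m)

def select_less_private_samples(owner_model, members, non_members, k=50):
    # Keep only the k least private samples (numpy arrays assumed); restricting
    # the ownership test to them amplifies the fingerprint of well-generalized
    # models, where the average leakage over all training data is small.
    X_m, y_m = members
    scores = membership_leakage_scores(owner_model, members, non_members)
    idx = np.argsort(-scores)[:k]
    return X_m[idx], y_m[idx]
```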
Related papers
- Towards Reliable Verification of Unauthorized Data Usage in Personalized Text-to-Image Diffusion Models [23.09033991200197]
New personalization techniques have been proposed to customize the pre-trained base models for crafting images with specific themes or styles.
Such a lightweight solution poses a new concern regarding whether the personalized models are trained from unauthorized data.
We introduce SIREN, a novel methodology to proactively trace unauthorized data usage in black-box personalized text-to-image diffusion models.
arXiv Detail & Related papers (2024-10-14T12:29:23Z)
- IDEA: An Inverse Domain Expert Adaptation Based Active DNN IP Protection Method [8.717704777664604]
Illegitimate reproduction, distribution and derivation of Deep Neural Network (DNN) models can inflict economic loss, reputation damage and even privacy infringement.
We propose IDEA, an Inverse Domain Expert Adaptation based proactive DNN IP protection method featuring active authorization and source traceability.
We extensively evaluate IDEA on five datasets and four DNN models to demonstrate its effectiveness in authorization control, culprit tracing, and robustness against various attacks.
arXiv Detail & Related papers (2024-09-29T09:34:33Z)
- EnTruth: Enhancing the Traceability of Unauthorized Dataset Usage in Text-to-image Diffusion Models with Minimal and Robust Alterations [73.94175015918059]
We introduce a novel approach, EnTruth, which Enhances Traceability of unauthorized dataset usage.
By strategically incorporating template memorization, EnTruth can trigger specific behavior in unauthorized models as evidence of infringement.
Our method is the first to investigate the positive application of memorization and use it for copyright protection, which turns a curse into a blessing.
arXiv Detail & Related papers (2024-06-20T02:02:44Z)
- DetDiffusion: Synergizing Generative and Perceptive Models for Enhanced Data Generation and Perception [78.26734070960886]
Current perceptive models heavily depend on resource-intensive datasets.
We introduce perception-aware loss (P.A. loss) through segmentation, improving both quality and controllability.
Our method customizes data augmentation by extracting and utilizing perception-aware attribute (P.A. Attr) during generation.
arXiv Detail & Related papers (2024-03-20T04:58:03Z)
- FPGAN-Control: A Controllable Fingerprint Generator for Training with Synthetic Data [7.203557048672379]
We present FPGAN-Control, an identity preserving image generation framework.
We introduce a novel appearance loss that encourages disentanglement between the fingerprint's identity and appearance properties.
We demonstrate the merits of FPGAN-Control, both quantitatively and qualitatively, in terms of identity level, degree of appearance control, and low synthetic-to-real domain gap.
arXiv Detail & Related papers (2023-10-29T14:30:01Z)
- Independent Distribution Regularization for Private Graph Embedding [55.24441467292359]
Graph embeddings are susceptible to attribute inference attacks, which allow attackers to infer private node attributes from the learned graph embeddings.
To address these concerns, privacy-preserving graph embedding methods have emerged.
We propose a novel approach called Private Variational Graph AutoEncoders (PVGAE), which uses an independent distribution penalty as a regularization term.
arXiv Detail & Related papers (2023-08-16T13:32:43Z)
- Leveraging Expert Models for Training Deep Neural Networks in Scarce Data Domains: Application to Offline Handwritten Signature Verification [15.88604823470663]
The presented scheme is applied to offline handwritten signature verification (OffSV).
The proposed Student-Teacher (S-T) configuration utilizes feature-based knowledge distillation (FKD).
Remarkably, the models trained using this technique exhibit comparable, if not superior, performance to the teacher model across three popular signature datasets.
arXiv Detail & Related papers (2023-08-02T13:28:12Z)
- Just Fine-tune Twice: Selective Differential Privacy for Large Language Models [69.66654761324702]
We propose a simple yet effective just-fine-tune-twice privacy mechanism to achieve SDP for large Transformer-based language models.
Experiments show that our models achieve strong performance while staying robust to the canary insertion attack.
arXiv Detail & Related papers (2022-04-15T22:36:55Z)
- Federated Test-Time Adaptive Face Presentation Attack Detection with Dual-Phase Privacy Preservation [100.69458267888962]
Face presentation attack detection (fPAD) plays a critical role in the modern face recognition pipeline.
Due to legal and privacy issues, training data (real face images and spoof images) are not allowed to be directly shared between different data sources.
We propose a Federated Test-Time Adaptive Face Presentation Attack Detection with Dual-Phase Privacy Preservation framework.
arXiv Detail & Related papers (2021-10-25T02:51:05Z)
- FedIPR: Ownership Verification for Federated Deep Neural Network Models [31.459374163080994]
Federated learning models must be protected against plagiarism since these models are built upon valuable training data owned by multiple institutions or people.
This paper presents a novel federated deep neural network (FedDNN) ownership verification scheme that allows ownership signatures to be embedded and verified to claim legitimate intellectual property rights (IPR) of FedDNN models.
arXiv Detail & Related papers (2021-09-27T12:51:24Z)
- A Multi-Level Attention Model for Evidence-Based Fact Checking [58.95413968110558]
We present a simple model that can be trained on sequence structures.
Results on a large-scale dataset for Fact Extraction and VERification show that our model outperforms the graph-based approaches.
arXiv Detail & Related papers (2021-06-02T05:40:12Z)