CHIP: Chameleon Hash-based Irreversible Passport for Robust Deep Model Ownership Verification and Active Usage Control
- URL: http://arxiv.org/abs/2505.24536v1
- Date: Fri, 30 May 2025 12:41:51 GMT
- Authors: Chaohui Xu, Qi Cui, Chip-Hong Chang
- Abstract summary: The pervasiveness of large-scale Deep Neural Networks (DNNs) and their enormous training costs make their intellectual property (IP) protection of paramount importance. Recently introduced passport-based methods attempt to steer watermarking towards strengthening ownership verification against ambiguity attacks. Unfortunately, neither watermarking nor passport-based methods provide holistic protection with robust ownership proof, high fidelity, active usage authorization, and user traceability. We propose a Chameleon Hash-based Irreversible Passport (CHIP) protection framework that utilizes the cryptographic chameleon hash function to achieve all of these goals.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The pervasiveness of large-scale Deep Neural Networks (DNNs) and their enormous training costs make their intellectual property (IP) protection of paramount importance. Recently introduced passport-based methods attempt to steer DNN watermarking towards strengthening ownership verification against ambiguity attacks by modulating the affine parameters of normalization layers. Unfortunately, neither watermarking nor passport-based methods provide holistic protection with robust ownership proof, high fidelity, active usage authorization, and user traceability for offline distributed models and multi-user Machine-Learning-as-a-Service (MLaaS) cloud models. In this paper, we propose a Chameleon Hash-based Irreversible Passport (CHIP) protection framework that utilizes the cryptographic chameleon hash function to achieve all of these goals. The collision-resistant property of the chameleon hash allows for a strong model ownership claim upon IP infringement and traceability of liable users, while the trapdoor-collision property enables hashing of multiple user passports and licensee certificates to the same immutable signature, realizing active usage control. Using the owner passport as an oracle, multiple user-specific triplets, each containing a passport-aware user model, a user passport, and a licensee certificate, can be created for secure offline distribution. The watermarked master model can also be deployed for MLaaS, with usage permission verifiable by the provision of any trapdoor-colliding user passport. CHIP is extensively evaluated on four datasets and two architectures to demonstrate its protection versatility and robustness. Our code is released at https://github.com/Dshm212/CHIP.
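The trapdoor-collision mechanism underpinning CHIP can be illustrated with a classic discrete-log chameleon hash in the style of Krawczyk and Rabin. The sketch below is not the authors' implementation (see the linked repository for that); the parameters are toy and insecure, and the names (`keygen`, `find_collision`, the example passports) are illustrative assumptions. It shows how a trapdoor holder can make many user passports hash to one immutable signature, while collisions are infeasible without the trapdoor.

```python
import hashlib

# Toy discrete-log parameters (INSECURE, for illustration only):
# p = 2q + 1 is a safe prime and g generates the order-q subgroup of Z_p*.
q = 1019
p = 2 * q + 1            # 2039, also prime
g = 4                    # 2^2 mod p, hence an element of order q

def digest(data: bytes) -> int:
    """Map arbitrary bytes into Z_q via SHA-256."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen(x: int) -> tuple[int, int]:
    """Trapdoor x in Z_q*; public key y = g^x mod p."""
    return x, pow(g, x, p)

def ch_hash(y: int, msg: bytes, r: int) -> int:
    """Chameleon hash CH(msg, r) = g^H(msg) * y^r mod p."""
    return (pow(g, digest(msg), p) * pow(y, r, p)) % p

def find_collision(x: int, msg: bytes, r: int, new_msg: bytes) -> int:
    """With trapdoor x, solve H(msg) + x*r = H(new_msg) + x*r' (mod q) for r'."""
    return ((digest(msg) - digest(new_msg)) * pow(x, -1, q) + r) % q

# The model owner keeps the trapdoor x; y ships with the watermarked model.
x, y = keygen(777)

# The owner passport anchors one immutable signature.
owner_passport = b"owner-passport"
r0 = 123
signature = ch_hash(y, owner_passport, r0)

# Any user passport / licensee certificate can be made to collide to it.
user_passport = b"user-passport||licensee-certificate"
r_user = find_collision(x, owner_passport, r0, user_passport)
assert ch_hash(y, user_passport, r_user) == signature

print("trapdoor collision verified:", hex(signature))
```

Verification needs only the public key y, which is how a provider could check any trapdoor-colliding user passport against the published signature; minting new colliding passports requires the owner's trapdoor x, and finding one without it would contradict the discrete-log assumption.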
Related papers
- PCDiff: Proactive Control for Ownership Protection in Diffusion Models with Watermark Compatibility [23.64920988914223]
PCDiff is a proactive access control framework that redefines model authorization by regulating generation quality. PCDiff integrates a trainable fuser module and hierarchical authentication layers into the decoder architecture.
arXiv Detail & Related papers (2025-04-16T05:28:50Z)
- Vision-Language Model IP Protection via Prompt-based Learning [52.783709712318405]
We introduce IP-CLIP, a lightweight IP protection strategy tailored to vision-language models (VLMs). By leveraging the frozen visual backbone of CLIP, we extract both image style and content information, incorporating them into the learning of IP prompts. This strategy acts as a robust barrier, effectively preventing the unauthorized transfer of features from authorized domains to unauthorized ones.
arXiv Detail & Related papers (2025-03-04T08:31:12Z)
- AuthNet: Neural Network with Integrated Authentication Logic [19.56843040375779]
We propose a native authentication mechanism, called AuthNet, which integrates authentication logic as part of the model.
AuthNet is compatible with any convolutional neural network, and our evaluations show that it successfully rejects unauthenticated users.
arXiv Detail & Related papers (2024-05-24T10:44:22Z)
- EmInspector: Combating Backdoor Attacks in Federated Self-Supervised Learning Through Embedding Inspection [53.25863925815954]
Federated self-supervised learning (FSSL) has emerged as a promising paradigm that enables the exploitation of clients' vast amounts of unlabeled data.
While FSSL offers advantages, its susceptibility to backdoor attacks has not been investigated.
We propose the Embedding Inspector (EmInspector) that detects malicious clients by inspecting the embedding space of local models.
arXiv Detail & Related papers (2024-05-21T06:14:49Z)
- Steganographic Passport: An Owner and User Verifiable Credential for Deep Model IP Protection Without Retraining [9.617679554145301]
Current passport-based methods that obfuscate model functionality for license-to-use and ownership verifications suffer from capacity and quality constraints.
We propose Steganographic Passport, which uses an invertible steganographic network to decouple license-to-use from ownership verification.
An irreversible and collision-resistant hash function is used to avoid exposing the owner-side passport from the derived user-side passports.
arXiv Detail & Related papers (2024-04-03T17:44:02Z)
- Who Leaked the Model? Tracking IP Infringers in Accountable Federated Learning [51.26221422507554]
Federated learning (FL) is an effective collaborative learning framework to coordinate data and computation resources from massive and distributed clients in training.
Such collaboration produces non-trivial intellectual property (IP), embodied in the model parameters, that should be protected and shared by all participating parties rather than any individual user.
To block such IP leakage, it is essential to make the IP identifiable in the shared model and locate the anonymous infringer who first leaks it.
We propose Decodable Unique Watermarking (DUW) to comply with the requirements of accountable FL.
arXiv Detail & Related papers (2023-12-06T00:47:55Z)
- FedSOV: Federated Model Secure Ownership Verification with Unforgeable Signature [60.99054146321459]
Federated learning allows multiple parties to collaborate in learning a global model without revealing private data.
We propose a cryptographic signature-based federated learning model ownership verification scheme named FedSOV.
arXiv Detail & Related papers (2023-05-10T12:10:02Z)
- PASS: Protected Attribute Suppression System for Mitigating Bias in Face Recognition [55.858374644761525]
Face recognition networks encode information about sensitive attributes while being trained for identity classification.
Existing bias mitigation approaches require end-to-end training and are unable to achieve high verification accuracy.
We present a descriptor-based adversarial de-biasing approach called the Protected Attribute Suppression System (PASS).
PASS can be trained on top of descriptors obtained from any previously trained high-performing network to classify identities while simultaneously reducing the encoding of sensitive attributes.
arXiv Detail & Related papers (2021-08-09T00:39:22Z)
- ActiveGuard: An Active DNN IP Protection Technique via Adversarial Examples [10.058070050660104]
ActiveGuard exploits adversarial examples as users' fingerprints to distinguish authorized users from unauthorized users.
For ownership verification, the embedded watermark can be successfully extracted, while the normal performance of the DNN model will not be affected.
arXiv Detail & Related papers (2021-03-02T07:16:20Z)
- Passport-aware Normalization for Deep Model Protection [122.61289882357022]
We propose a new passport-aware normalization formulation for deep learning models.
It only requires adding an extra passport-aware branch for IP protection.
It is demonstrated to be robust not only to common attack techniques like fine-tuning and model compression, but also to ambiguity attacks.
arXiv Detail & Related papers (2020-10-29T17:57:12Z)