Vision-Language Model IP Protection via Prompt-based Learning
- URL: http://arxiv.org/abs/2503.02393v1
- Date: Tue, 04 Mar 2025 08:31:12 GMT
- Title: Vision-Language Model IP Protection via Prompt-based Learning
- Authors: Lianyu Wang, Meng Wang, Huazhu Fu, Daoqiang Zhang
- Abstract summary: We introduce IP-CLIP, a lightweight IP protection strategy tailored to vision-language models (VLMs). By leveraging the frozen visual backbone of CLIP, we extract both image style and content information, incorporating them into the learning of IP prompts. This strategy acts as a robust barrier, effectively preventing the unauthorized transfer of features from authorized domains to unauthorized ones.
- Score: 52.783709712318405
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Vision-language models (VLMs) like CLIP (Contrastive Language-Image Pre-Training) have seen remarkable success in visual recognition, highlighting the increasing need to safeguard the intellectual property (IP) of well-trained models. Effective IP protection extends beyond ensuring authorized usage; it also necessitates restricting model deployment to authorized data domains, particularly when the model is fine-tuned for specific target domains. However, current IP protection methods often rely solely on the visual backbone, which may lack sufficient semantic richness. To bridge this gap, we introduce IP-CLIP, a lightweight IP protection strategy tailored to CLIP, employing a prompt-based learning approach. By leveraging the frozen visual backbone of CLIP, we extract both image style and content information, incorporating them into the learning of IP prompts. This strategy acts as a robust barrier, effectively preventing the unauthorized transfer of features from authorized domains to unauthorized ones. Additionally, we propose a style-enhancement branch that constructs feature banks for both authorized and unauthorized domains. This branch integrates self-enhanced and cross-domain features, further strengthening IP-CLIP's capability to block features from unauthorized domains. Finally, we present three new metrics designed to better balance the performance degradation of authorized and unauthorized domains. Comprehensive experiments in various scenarios demonstrate IP-CLIP's promising potential for application in IP protection tasks for VLMs.
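The abstract's central mechanism can be sketched in NumPy under strong assumptions: "style" is taken to be channel-wise feature statistics and "content" the normalized residual (an AdaIN-style split commonly used for this purpose), with the statistics projected into a token that conditions learnable prompt tokens. All shapes, names, and the projector below are illustrative, not the paper's implementation.

```python
import numpy as np

def style_content_split(features, eps=1e-5):
    """Split frozen-backbone patch features (num_patches, dim) into a
    style vector (channel-wise mean and std) and normalized content."""
    mu = features.mean(axis=0)           # per-channel mean  -> style
    sigma = features.std(axis=0) + eps   # per-channel std   -> style
    content = (features - mu) / sigma    # statistics removed -> content
    style = np.concatenate([mu, sigma])  # style vector of length 2*dim
    return style, content

rng = np.random.default_rng(0)
feats = rng.normal(size=(49, 512))       # e.g. a 7x7 patch grid, dim 512

style, content = style_content_split(feats)

# Hypothetical IP prompt: project the style vector to a token and stack it
# with learnable context tokens (both would be trained in practice).
prompt_tokens = rng.normal(size=(4, 512))      # learnable context tokens
style_proj = rng.normal(size=(2 * 512, 512))   # assumed linear projector
style_token = style @ style_proj               # (512,)
ip_prompt = np.vstack([style_token, prompt_tokens])  # (5, 512) prompt
```

In a full system the prompt would feed CLIP's text encoder so that features from unauthorized domains (whose style statistics differ) fail to align, which is the barrier effect the abstract describes.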
Related papers
- In the Era of Prompt Learning with Vision-Language Models [1.060608983034705]
We introduce StyLIP, a novel domain-agnostic prompt learning strategy for Domain Generalization (DG).
StyLIP disentangles visual style and content in CLIP's vision encoder by using style projectors to learn domain-specific prompt tokens.
We also propose AD-CLIP for unsupervised domain adaptation (DA), leveraging CLIP's frozen vision backbone.
arXiv Detail & Related papers (2024-11-07T17:31:21Z)
- Say No to Freeloader: Protecting Intellectual Property of Your Deep Model [52.783709712318405]
Compact Un-transferable Pyramid Isolation Domain (CUPI-Domain) serves as a barrier against illegal transfers from authorized to unauthorized domains.
We propose CUPI-Domain generators, which select features from both authorized and CUPI-Domain as anchors.
We provide two solutions for utilizing CUPI-Domain based on whether the unauthorized domain is known.
arXiv Detail & Related papers (2024-08-23T15:34:33Z)
- ModelShield: Adaptive and Robust Watermark against Model Extraction Attack [58.46326901858431]
Large language models (LLMs) demonstrate general intelligence across a variety of machine learning tasks. Adversaries can still utilize model extraction attacks to steal the model intelligence encoded in model generation. Watermarking technology offers a promising solution for defending against such attacks by embedding unique identifiers into the model-generated content.
arXiv Detail & Related papers (2024-05-03T06:41:48Z)
- EncryIP: A Practical Encryption-Based Framework for Model Intellectual Property Protection [17.655627250882805]
This paper introduces a practical encryption-based framework called EncryIP.
It seamlessly integrates a public-key encryption scheme into the model learning process.
It demonstrates superior effectiveness in both training protected models and efficiently detecting the unauthorized spread of ML models.
arXiv Detail & Related papers (2023-12-19T11:11:03Z)
- Adversarial Prompt Tuning for Vision-Language Models [86.5543597406173]
Adversarial Prompt Tuning (AdvPT) is a technique to enhance the adversarial robustness of image encoders in Vision-Language Models (VLMs).
We demonstrate that AdvPT improves resistance against white-box and black-box adversarial attacks and exhibits a synergistic effect when combined with existing image-processing-based defense techniques.
arXiv Detail & Related papers (2023-11-19T07:47:43Z)
- Model Barrier: A Compact Un-Transferable Isolation Domain for Model Intellectual Property Protection [52.08301776698373]
We propose a novel approach called Compact Un-Transferable Isolation Domain (CUTI-domain).
CUTI-domain acts as a barrier to block illegal transfers from authorized to unauthorized domains.
We show that CUTI-domain can be easily implemented as a plug-and-play module with different backbones.
arXiv Detail & Related papers (2023-03-20T13:07:11Z)
- Copyright Protection and Accountability of Generative AI: Attack, Watermarking and Attribution [7.0159295162418385]
We propose an evaluation framework to provide a comprehensive overview of the current state of the copyright protection measures for GANs.
Our findings indicate that the current intellectual property protection methods for input images, model watermarking, and attribution networks are largely satisfactory for a wide range of GANs.
arXiv Detail & Related papers (2023-03-15T06:40:57Z)
- Fingerprinting Image-to-Image Generative Adversarial Networks [53.02510603622128]
Generative Adversarial Networks (GANs) have been widely used in various application scenarios.
This paper presents a novel fingerprinting scheme for the Intellectual Property protection of image-to-image GANs based on a trusted third party.
arXiv Detail & Related papers (2021-06-19T06:25:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.