Authorize-on-Demand: Dynamic Authorization with Legality-Aware Intellectual Property Protection for VLMs
- URL: http://arxiv.org/abs/2603.04896v1
- Date: Thu, 05 Mar 2026 07:36:07 GMT
- Title: Authorize-on-Demand: Dynamic Authorization with Legality-Aware Intellectual Property Protection for VLMs
- Authors: Lianyu Wang, Meng Wang, Huazhu Fu, Daoqiang Zhang
- Abstract summary: AoD-IP is a framework that supports authorize-on-demand and legality-aware assessment. AoD-IP maintains strong authorized-domain performance and reliable unauthorized detection.
- Score: 70.09137776277994
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The rapid adoption of vision-language models (VLMs) has heightened the demand for robust intellectual property (IP) protection of these high-value pretrained models. Effective IP protection should proactively confine model deployment within authorized domains and prevent unauthorized transfers. However, existing methods rely on static training-time definitions, limiting flexibility in dynamic environments and often producing opaque responses to unauthorized inputs. To address these limitations, we propose AoD-IP, a novel dynamic authorization framework with legality-aware intellectual property protection for VLMs that supports authorize-on-demand and legality-aware assessment. AoD-IP introduces a lightweight dynamic authorization module that enables flexible, user-controlled authorization, allowing users to actively specify or switch authorized domains on demand at deployment time. This enables the model to adapt seamlessly as application scenarios evolve and provides substantially greater extensibility than existing static-domain approaches. In addition, AoD-IP incorporates a dual-path inference mechanism that jointly predicts input legality and task-specific outputs. Comprehensive experimental results on multiple cross-domain benchmarks demonstrate that AoD-IP maintains strong authorized-domain performance and reliable unauthorized detection, while supporting user-controlled authorization for adaptive deployment in dynamic environments.
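The dual-path inference idea in the abstract can be sketched roughly as follows. This is a minimal illustration only: the class name `DualPathHead`, the sigmoid legality score, and the 0.5 threshold are all hypothetical stand-ins, not the paper's actual implementation.

```python
import numpy as np

class DualPathHead:
    """Minimal sketch of a dual-path inference head: one branch
    scores input legality for the currently authorized domain,
    the other produces the task prediction. Weights are random
    here purely for illustration."""

    def __init__(self, feat_dim, num_classes, seed=0):
        rng = np.random.default_rng(seed)
        self.w_legal = rng.normal(size=(feat_dim, 1))        # legality branch
        self.w_task = rng.normal(size=(feat_dim, num_classes))  # task branch

    def forward(self, feats, threshold=0.5):
        # Legality score in [0, 1] via a sigmoid over a linear projection.
        legality = 1.0 / (1.0 + np.exp(-(feats @ self.w_legal))).ravel()
        logits = feats @ self.w_task
        # Inputs below the threshold are flagged as unauthorized.
        authorized = legality >= threshold
        return authorized, legality, logits
```

A deployed system would additionally let the user swap the authorized-domain definition at inference time; this sketch shows only the joint legality/task prediction.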
Related papers
- CREDIT: Certified Ownership Verification of Deep Neural Networks Against Model Extraction Attacks [54.04030169323115]
We introduce CREDIT, a certified ownership verification method against Model Extraction Attacks (MEAs). We quantify the similarity between DNN models, propose a practical verification threshold, and provide rigorous theoretical guarantees for ownership verification based on this threshold. We extensively evaluate our approach on several mainstream datasets across different domains and tasks, achieving state-of-the-art performance.
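The similarity-plus-threshold idea can be sketched in a few lines. This is an illustrative simplification in the spirit of the abstract, not CREDIT's certified procedure: the function names and the 0.9 threshold are assumptions.

```python
import numpy as np

def ownership_score(victim_preds, suspect_preds):
    """Sketch: measure label agreement between a victim model and a
    suspect model on a shared probe set."""
    victim = np.asarray(victim_preds)
    suspect = np.asarray(suspect_preds)
    return float(np.mean(victim == suspect))

def is_extracted(victim_preds, suspect_preds, threshold=0.9):
    """Flag suspected extraction when agreement exceeds a calibrated
    threshold (0.9 here is purely illustrative)."""
    return ownership_score(victim_preds, suspect_preds) >= threshold
```

The paper's contribution is precisely in making such a threshold principled, with theoretical guarantees rather than an ad hoc cutoff.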
arXiv Detail & Related papers (2026-02-23T23:36:25Z)
- Autonomous Action Runtime Management (AARM): A System Specification for Securing AI-Driven Actions at Runtime [0.0]
This paper introduces Autonomous Action Runtime Management (AARM), an open specification for securing AI-driven actions at runtime. AARM intercepts actions before execution, accumulates session context, evaluates against policy and intent alignment, enforces authorization decisions, and records tamper-evident receipts for forensic reconstruction. AARM is model-agnostic, framework-agnostic, and vendor-neutral, treating action execution as the stable security boundary.
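The intercept/evaluate/enforce/record loop described above can be sketched as a short runtime function. All names here are illustrative assumptions, not the AARM specification's API; the hash chain stands in for its tamper-evident receipts.

```python
import hashlib
import json

def run_action(action, context, policy):
    """Sketch of an AARM-style runtime loop: intercept an action,
    evaluate it against policy with accumulated session context,
    enforce the decision, and record a receipt chained to the
    previous one so tampering is detectable."""
    allowed = policy(action, context)  # policy + intent evaluation
    receipt = {
        "action": action,
        "decision": "allow" if allowed else "deny",
        "prev": context.get("last_receipt", ""),
    }
    digest = hashlib.sha256(
        json.dumps(receipt, sort_keys=True).encode()
    ).hexdigest()
    context.setdefault("log", []).append((receipt, digest))
    context["last_receipt"] = digest  # chain receipts for forensics
    return allowed
```

Because each receipt embeds the previous digest, replaying the chain lets an auditor reconstruct and verify the session, which is the forensic property the abstract highlights.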
arXiv Detail & Related papers (2026-02-10T05:57:30Z) - Steering Vision-Language Pre-trained Models for Incremental Face Presentation Attack Detection [62.89126207012712]
Face Presentation Attack Detection (PAD) demands incremental learning to combat evolving spoofing tactics and domains. Privacy regulations forbid retaining past data, necessitating rehearsal-free incremental learning (RF-IL).
arXiv Detail & Related papers (2025-12-22T04:30:11Z) - RobIA: Robust Instance-aware Continual Test-time Adaptation for Deep Stereo [18.836469118006594]
RobIA is a novel Robust, Instance-Aware framework for Continual Test-Time Adaptation in stereo depth estimation. RobIA integrates two key components: (1) Attend-and-Excite Mixture-of-Experts (AttEx-MoE), a parameter-efficient module that dynamically routes input to frozen experts via a lightweight self-attention mechanism tailored to epipolar geometry, and (2) Robust AdaptBN Teacher, a PEFT-based teacher model that provides dense pseudo-supervision by complementing sparse handcrafted labels.
arXiv Detail & Related papers (2025-11-13T09:13:12Z) - DRIFT: Dynamic Rule-Based Defense with Injection Isolation for Securing LLM Agents [52.92354372596197]
Large Language Models (LLMs) are increasingly central to agentic systems due to their strong reasoning and planning capabilities. Their interaction with external sources, however, introduces the risk of prompt injection attacks, where malicious inputs can mislead the agent's behavior. We propose DRIFT, a Dynamic Rule-based Isolation Framework for Trustworthy agentic systems, which enforces both control-level and data-level constraints.
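A rule-based isolation check of this flavor can be sketched as a guard on tool calls. This is only an illustration of the general idea; `guard_tool_call`, the rule format, and the "trusted plan" representation are all hypothetical, not DRIFT's actual mechanism.

```python
def guard_tool_call(call, rules, trusted_plan):
    """Sketch of dual-constraint isolation: execute a tool call only if
    (1) it matches an allow rule (control-level constraint), and
    (2) it appears in the plan derived from trusted instructions, so
    injected instructions in untrusted data cannot add new actions
    (data-level isolation)."""
    tool, arg = call
    control_ok = any(rule(tool, arg) for rule in rules)
    data_ok = call in trusted_plan
    return control_ok and data_ok
```

The point of the two checks is that a prompt-injected action fails even when it uses an allowed tool, because it never entered the trusted plan.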
arXiv Detail & Related papers (2025-06-13T05:01:09Z) - RADEP: A Resilient Adaptive Defense Framework Against Model Extraction Attacks [6.6680585862156105]
We introduce a Resilient Adaptive Defense Framework for Model Extraction Attack Protection (RADEP). RADEP employs progressive adversarial training to enhance model resilience against extraction attempts. Ownership verification is enforced through embedded watermarking and backdoor triggers.
arXiv Detail & Related papers (2025-05-25T23:28:05Z) - PCDiff: Proactive Control for Ownership Protection in Diffusion Models with Watermark Compatibility [23.64920988914223]
PCDiff is a proactive access control framework that redefines model authorization by regulating generation quality. PCDiff integrates a trainable fuser module and hierarchical authentication layers into the decoder architecture.
arXiv Detail & Related papers (2025-04-16T05:28:50Z) - Vision-Language Model IP Protection via Prompt-based Learning [52.783709712318405]
We introduce IP-CLIP, a lightweight IP protection strategy tailored to vision-language models (VLMs). By leveraging the frozen visual backbone of CLIP, we extract both image style and content information, incorporating them into the learning of IP prompts. This strategy acts as a robust barrier, effectively preventing the unauthorized transfer of features from authorized domains to unauthorized ones.
arXiv Detail & Related papers (2025-03-04T08:31:12Z)
- Model Barrier: A Compact Un-Transferable Isolation Domain for Model Intellectual Property Protection [52.08301776698373]
We propose a novel approach called Compact Un-Transferable Isolation Domain (CUTI-domain).
CUTI-domain acts as a barrier to block illegal transfers from authorized to unauthorized domains.
We show that CUTI-domain can be easily implemented as a plug-and-play module with different backbones.
arXiv Detail & Related papers (2023-03-20T13:07:11Z)