SLIP-SEC: Formalizing Secure Protocols for Model IP Protection
- URL: http://arxiv.org/abs/2510.24999v1
- Date: Tue, 28 Oct 2025 21:59:11 GMT
- Title: SLIP-SEC: Formalizing Secure Protocols for Model IP Protection
- Authors: Racchit Jain, Satya Lokam, Yehonathan Refael, Adam Hakim, Lev Greenberg, Jay Tenenbaum
- Abstract summary: Large Language Models (LLMs) represent valuable intellectual property (IP). Deploying these models on partially trusted or insecure devices introduces substantial risk of model theft. We present the formal framework and security foundations of SLIP, a hybrid inference protocol that splits computation between a trusted and an untrusted resource.
- Score: 3.519970512407396
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large Language Models (LLMs) represent valuable intellectual property (IP), reflecting significant investments in training data, compute, and expertise. Deploying these models on partially trusted or insecure devices introduces substantial risk of model theft, making it essential to design inference protocols with provable security guarantees. We present the formal framework and security foundations of SLIP, a hybrid inference protocol that splits model computation between a trusted and an untrusted resource. We define and analyze the key notions of model decomposition and hybrid inference protocols, and introduce formal properties including safety, correctness, efficiency, and t-soundness. We construct secure inference protocols based on additive decompositions of weight matrices, combined with masking and probabilistic verification techniques. We prove that these protocols achieve information-theoretic security against honest-but-curious adversaries, and provide robustness against malicious adversaries with negligible soundness error. This paper focuses on the theoretical underpinnings of SLIP: precise definitions, formal protocols, and proofs of security. Empirical validation and decomposition heuristics appear in the companion SLIP paper. Together, the two works provide a complete account of securing LLM IP via hybrid inference, bridging both practice and theory.
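The abstract's core mechanism, an additive decomposition of weight matrices combined with probabilistic verification, can be illustrated with a minimal sketch. This is not the SLIP protocol itself (the paper's actual constructions, cost split, and masking are more involved); all names, dimensions, and the Freivalds-style check below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; a real LLM layer would be far larger.
d_in, d_out = 8, 4
W = rng.normal(size=(d_in, d_out))  # the proprietary weight matrix

# Additive decomposition: W = W_trusted + W_untrusted.
# W_untrusted is sampled independently of W, so the share held by the
# untrusted device carries no information about W on its own.
W_untrusted = rng.normal(size=W.shape)
W_trusted = W - W_untrusted

def untrusted_forward(x):
    # Runs on the untrusted (edge) device.
    return x @ W_untrusted

def trusted_forward(x):
    # Runs on the trusted resource.
    return x @ W_trusted

def freivalds_check(x, y_claimed, W_share, trials=20):
    # Probabilistic verification: check y_claimed == x @ W_share without
    # recomputing the full product. Each trial with a random 0/1 vector
    # catches a tampered result with probability >= 1/2, so the soundness
    # error after `trials` rounds is at most 2**-trials.
    for _ in range(trials):
        r = rng.integers(0, 2, size=(W_share.shape[1], 1)).astype(float)
        if not np.allclose(y_claimed @ r, x @ (W_share @ r)):
            return False
    return True

x = rng.normal(size=(1, d_in))
y_untrusted = untrusted_forward(x)
assert freivalds_check(x, y_untrusted, W_untrusted)  # honest response passes
y_hybrid = y_untrusted + trusted_forward(x)
assert np.allclose(y_hybrid, x @ W)                  # hybrid output is exact
```

The sketch captures the two security claims in miniature: the untrusted share is a uniformly random matrix (information-theoretic hiding against an honest-but-curious holder), and the cheap randomized check bounds a malicious responder's success probability by a negligible soundness error.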
Related papers
- Quantum Key Distribution with Imperfections: Recent Advances in Security Proofs [0.0]
Quantum Key Distribution (QKD) can enable two spatially separated parties to establish information-theoretically secure encryption. Security proofs robust against a wide range of eavesdropping strategies have established the theoretical soundness of several QKD protocols. Most proofs, however, are based on idealized models of the physical systems involved and often include assumptions that are not satisfied in practical implementations.
arXiv Detail & Related papers (2026-02-04T21:16:33Z) - Formal Verification of Physical Layer Security Protocols for Next-Generation Communication Networks (extended version) [1.5997757408973357]
We re-model the Needham-Schroeder protocol using an Isabelle formalism that generates sound animation. Our findings reveal an uncommon but expected outcome: authenticity is preserved across all examined scenarios. We also propose a PLS-based Diffie-Hellman protocol that integrates watermarking and jamming.
arXiv Detail & Related papers (2025-08-26T20:59:16Z) - Safe Low Bandwidth SPV: A Formal Treatment of Simplified Payment Verification Protocols and Security Bounds [0.0]
We show that SPV is not only secure under bounded adversarial assumptions but also strictly optimal for digital cash systems requiring scalable and verifiable transaction inclusion. This document serves both as a blueprint for secure SPV implementation and as a rebuttal of common misconceptions surrounding non-validating clients.
arXiv Detail & Related papers (2025-07-01T13:44:48Z) - Formal Verification of Permission Voucher [1.4732811715354452]
The Permission Voucher Protocol is a system designed for secure and authenticated access control in distributed environments. The analysis employs the Tamarin Prover, a state-of-the-art tool for symbolic verification, to evaluate key security properties. Results confirm the protocol's robustness against common attacks such as message tampering, impersonation, and replay.
arXiv Detail & Related papers (2024-12-18T14:11:50Z) - SLIP: Securing LLMs IP Using Weights Decomposition [0.0]
Large language models (LLMs) have recently seen widespread adoption, in both academia and industry.
As these models grow, they become valuable intellectual property (IP), reflecting enormous investments by their owners.
Current methods to protect models' IP on the edge have limitations in terms of practicality, loss in accuracy, or suitability to requirements.
We introduce a novel hybrid inference algorithm, named SLIP, designed to protect edge-deployed models from theft.
arXiv Detail & Related papers (2024-07-15T16:37:55Z) - A Survey and Comparative Analysis of Security Properties of CAN Authentication Protocols [92.81385447582882]
The Controller Area Network (CAN) bus provides no built-in security, leaving in-vehicle communications inherently non-secure.
This paper reviews and compares the 15 most prominent authentication protocols for the CAN bus.
We evaluate protocols based on essential operational criteria that contribute to ease of implementation.
arXiv Detail & Related papers (2024-01-19T14:52:04Z) - Prototype-based Aleatoric Uncertainty Quantification for Cross-modal Retrieval [139.21955930418815]
Cross-modal Retrieval methods build similarity relations between vision and language modalities by jointly learning a common representation space.
However, the predictions are often unreliable due to the Aleatoric uncertainty, which is induced by low-quality data, e.g., corrupt images, fast-paced videos, and non-detailed texts.
We propose a novel Prototype-based Aleatoric Uncertainty Quantification (PAU) framework to provide trustworthy predictions by quantifying the uncertainty arising from the inherent data ambiguity.
arXiv Detail & Related papers (2023-09-29T09:41:19Z) - FedZKP: Federated Model Ownership Verification with Zero-knowledge Proof [60.990541463214605]
Federated learning (FL) allows multiple parties to cooperatively learn a federated model without sharing private data with each other.
We propose a provable secure model ownership verification scheme using zero-knowledge proof, named FedZKP.
arXiv Detail & Related papers (2023-05-08T07:03:33Z) - Finite-Size Security for Discrete-Modulated Continuous-Variable Quantum Key Distribution Protocols [4.58733012283457]
We present a composable finite-size security proof against independently and identically distributed collective attacks for a general DM CV-QKD protocol.
We extend and apply a numerical security proof technique to calculate tight lower bounds on the secure key rate.
Results show that our security proof method yields secure finite-size key rates under experimentally viable conditions up to a transmission distance of at least 72 km.
arXiv Detail & Related papers (2023-01-20T17:16:21Z) - Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z) - Byzantine-Robust Federated Learning with Optimal Statistical Rates and Privacy Guarantees [123.0401978870009]
We propose Byzantine-robust federated learning protocols with nearly optimal statistical rates.
We benchmark against competing protocols and show the empirical superiority of the proposed protocols.
Our protocols with bucketing can be naturally combined with privacy-guaranteeing procedures to introduce security against a semi-honest server.
arXiv Detail & Related papers (2022-05-24T04:03:07Z) - Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.