From Split to Share: Private Inference with Distributed Feature Sharing
- URL: http://arxiv.org/abs/2508.04346v1
- Date: Wed, 06 Aug 2025 11:41:10 GMT
- Title: From Split to Share: Private Inference with Distributed Feature Sharing
- Authors: Zihan Liu, Jiayi Wen, Shouhong Tan, Zhirun Zheng, Cheng Huang,
- Abstract summary: Cloud-based Machine Learning as a Service (MLaaS) raises serious privacy concerns when handling sensitive client data. We propose PrivDFS, a new paradigm for private inference that replaces a single exposed representation with distributed feature sharing. PrivDFS partitions input features on the client into multiple balanced shares, which are distributed to non-colluding, non-communicating servers for independent partial inference.
- Score: 8.103701420776487
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cloud-based Machine Learning as a Service (MLaaS) raises serious privacy concerns when handling sensitive client data. Existing Private Inference (PI) methods face a fundamental trade-off between privacy and efficiency: cryptographic approaches offer strong protection but incur high computational overhead, while efficient alternatives such as split inference expose intermediate features to inversion attacks. We propose PrivDFS, a new paradigm for private inference that replaces a single exposed representation with distributed feature sharing. PrivDFS partitions input features on the client into multiple balanced shares, which are distributed to non-colluding, non-communicating servers for independent partial inference. The client securely aggregates the servers' outputs to reconstruct the final prediction, ensuring that no single server observes sufficient information to compromise input privacy. To further strengthen privacy, we propose two key extensions: PrivDFS-AT, which uses adversarial training with a diffusion-based proxy attacker to enforce inversion-resistant feature partitioning, and PrivDFS-KD, which leverages user-specific keys to diversify partitioning policies and prevent query-based inversion generalization. Experiments on CIFAR-10 and CelebA demonstrate that PrivDFS achieves privacy comparable to deep split inference while cutting client computation by up to 100 times with no accuracy loss, and that the extensions remain robust against both diffusion-based in-distribution and adaptive attacks.
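As a concrete illustration of the split-and-aggregate flow the abstract describes, here is a minimal Python sketch. The additive splitting scheme, the linear server head, and the share count are illustrative assumptions for this sketch, not the paper's actual learned partitioning.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_into_shares(x: np.ndarray, k: int) -> list[np.ndarray]:
    """Additively split features x into k balanced shares that sum to x.
    (Illustrative; PrivDFS learns its partitioning rather than using
    plain additive sharing.)"""
    shares = [rng.standard_normal(x.shape) for _ in range(k - 1)]
    shares.append(x - sum(shares))  # last share makes the sum exact
    return shares

class Server:
    """A non-colluding server holding the same linear inference head W.
    Each server sees only its own share, never the full feature vector."""
    def __init__(self, weights: np.ndarray):
        self.weights = weights
    def partial_inference(self, share: np.ndarray) -> np.ndarray:
        return share @ self.weights  # independent partial result

# Toy setup: 16-dim client features, 10 classes, 3 servers.
features = rng.standard_normal(16)
W = rng.standard_normal((16, 10))
servers = [Server(W) for _ in range(3)]

# Client side: split, distribute, then securely aggregate the outputs.
shares = split_into_shares(features, k=len(servers))
partials = [srv.partial_inference(s) for srv, s in zip(servers, shares)]
logits = sum(partials)  # linearity makes aggregation exact

assert np.allclose(logits, features @ W)  # matches non-private inference
print("predicted class:", int(np.argmax(logits)))
```

With a purely linear head the aggregation is exact by construction; handling nonlinear backbones is where the paper's learned partitioning and the adversarially trained PrivDFS-AT variant come in.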
Related papers
- Privacy-preserving Prompt Personalization in Federated Learning for Multimodal Large Language Models [12.406403248205285]
Federated prompt personalization (FPP) is developed to address data heterogeneity and local overfitting. We propose SecFPP, a secure FPP protocol harmonizing personalization and privacy guarantees. We show SecFPP significantly outperforms both non-private and privacy-preserving baselines.
arXiv Detail & Related papers (2025-05-28T15:09:56Z)
- Federated Learning With Individualized Privacy Through Client Sampling [2.0432201743624456]
We propose an adapted method for enabling Individualized Differential Privacy (IDP) in Federated Learning (FL). We calculate client-specific sampling rates based on their heterogeneous privacy budgets and integrate them into a modified IDP-FedAvg algorithm. The experimental results demonstrate that our approach achieves clear improvements over uniform DP baselines, reducing the trade-off between privacy and utility.
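A minimal sketch of the idea this summary describes: mapping heterogeneous per-client privacy budgets to client-specific sampling probabilities in a FedAvg-style loop. The proportional mapping below is an illustrative assumption; the paper derives its rates from the IDP analysis.

```python
import random

# Hypothetical per-client privacy budgets (epsilon); smaller = stricter.
budgets = {"alice": 1.0, "bob": 4.0, "carol": 8.0}

def sampling_rates(budgets: dict[str, float],
                   base_rate: float = 0.5) -> dict[str, float]:
    """Map heterogeneous budgets to per-client sampling probabilities.
    Clients with tighter budgets participate less often, so they spend
    privacy more slowly. Proportional scaling is an illustrative choice."""
    max_eps = max(budgets.values())
    return {c: base_rate * eps / max_eps for c, eps in budgets.items()}

def sample_round(rates: dict[str, float]) -> list[str]:
    """One FedAvg round: each client is included independently with its
    own probability (Poisson sampling)."""
    return [c for c, p in rates.items() if random.random() < p]

rates = sampling_rates(budgets)
for rnd in range(3):
    print(f"round {rnd}: participants = {sample_round(rates)}")
```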
arXiv Detail & Related papers (2025-01-29T13:11:21Z)
- Collaborative Inference over Wireless Channels with Feature Differential Privacy [57.68286389879283]
Collaborative inference among multiple wireless edge devices has the potential to significantly enhance Artificial Intelligence (AI) applications.
However, transmitting extracted features poses a significant privacy risk, as sensitive personal data can be exposed during the process.
We propose a novel privacy-preserving collaborative inference mechanism, wherein each edge device in the network secures the privacy of extracted features before transmitting them to a central server for inference.
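The mechanism itself is only named here, so the following is a hedged sketch of the generic pattern: each edge device clips its extracted feature vector and adds calibrated Gaussian noise before transmission, so the central server only ever sees privatized features. The clipping bound and noise scale are placeholders, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(1)

def privatize_features(feat: np.ndarray, clip: float = 1.0,
                       sigma: float = 0.5) -> np.ndarray:
    """Clip the feature vector to bound its sensitivity, then add
    Gaussian noise. sigma would be calibrated to the target
    (epsilon, delta) feature-DP guarantee; 0.5 is a placeholder."""
    norm = np.linalg.norm(feat)
    clipped = feat * min(1.0, clip / max(norm, 1e-12))
    return clipped + rng.normal(0.0, sigma * clip, size=feat.shape)

# Each device privatizes locally; the server aggregates noisy features.
device_features = [rng.standard_normal(8) for _ in range(4)]
noisy = [privatize_features(f) for f in device_features]
server_view = np.mean(noisy, axis=0)  # inference input at the server
print(server_view.round(3))
```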
arXiv Detail & Related papers (2024-10-25T18:11:02Z)
- Enhancing Feature-Specific Data Protection via Bayesian Coordinate Differential Privacy [55.357715095623554]
Local Differential Privacy (LDP) offers strong privacy guarantees without requiring users to trust external parties.
We propose a Bayesian framework, Bayesian Coordinate Differential Privacy (BCDP), that enables feature-specific privacy quantification.
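The paper's BCDP framework is Bayesian; the sketch below only illustrates the underlying idea of feature-specific budgets, using per-coordinate Laplace noise in a local randomizer. All budgets and values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def per_coordinate_laplace(x: np.ndarray, eps: np.ndarray,
                           sensitivity: float = 1.0) -> np.ndarray:
    """Feature-specific local privatization: coordinate j gets Laplace
    noise scaled to its own budget eps[j], so sensitive features
    (small eps) are perturbed more than less sensitive ones."""
    scales = sensitivity / eps  # Laplace scale b_j = s / eps_j
    return x + rng.laplace(0.0, scales)

record = np.array([0.9, 0.1, 0.4])  # e.g. scaled [income, zip digit, age]
eps = np.array([0.5, 2.0, 1.0])     # stricter budget for income
print(per_coordinate_laplace(record, eps).round(3))
```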
arXiv Detail & Related papers (2024-10-24T03:39:55Z)
- Bayes-Nash Generative Privacy Against Membership Inference Attacks [24.330984323956173]
We propose a game-theoretic framework modeling privacy protection as a Bayesian game between defender and attacker. To address strategic complexity, we represent the defender's mixed strategy as a neural network generator mapping private datasets to public representations. Our approach significantly outperforms state-of-the-art methods by generating stronger attacks and achieving better privacy-utility tradeoffs.
arXiv Detail & Related papers (2024-10-09T20:29:04Z)
- Federated Instruction Tuning of LLMs with Domain Coverage Augmentation [87.49293964617128]
Federated Domain-specific Instruction Tuning (FedDIT) utilizes limited cross-client private data together with various strategies of instruction augmentation. We propose FedDCA, which optimizes domain coverage through greedy client center selection and retrieval-based augmentation. For client-side computational efficiency and system scalability, FedDCA$*$, a variant of FedDCA, utilizes heterogeneous encoders with server-side feature alignment.
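The greedy client center selection reads like a classic coverage step; the sketch below shows one hypothetical version, a farthest-point heuristic over client embeddings, as a stand-in for the paper's domain-coverage objective. The data and distance choice are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def greedy_center_selection(client_embs: np.ndarray, k: int) -> list[int]:
    """Pick k client 'centers' that spread over the embedding space:
    each step greedily adds the client farthest from all chosen centers
    (a farthest-point heuristic standing in for domain coverage)."""
    chosen = [0]  # seed with an arbitrary client
    for _ in range(k - 1):
        dists = np.min(
            np.linalg.norm(client_embs[:, None] - client_embs[chosen], axis=2),
            axis=1,
        )
        chosen.append(int(np.argmax(dists)))
    return chosen

embs = rng.standard_normal((20, 16))  # 20 clients, 16-dim domain embeddings
print("selected client centers:", greedy_center_selection(embs, k=4))
```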
arXiv Detail & Related papers (2024-09-30T09:34:31Z)
- Segmented Private Data Aggregation in the Multi-message Shuffle Model [9.298982907061099]
We pioneer the study of segmented private data aggregation within the multi-message shuffle model of differential privacy. Our framework introduces flexible privacy protection for users and enhanced utility for the aggregation server. Our framework achieves a reduction of about 50% in estimation error compared to existing approaches.
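For context, a minimal sketch of the multi-message shuffle pattern this work builds on: each user splits its value into several additive messages modulo q, a shuffler mixes all messages from all users, and the analyzer learns only the sum. The paper's segmentation logic is not reproduced here.

```python
import random

Q = 2**16  # message modulus (illustrative)

def to_messages(value: int, m: int) -> list[int]:
    """Split one user's value into m additive shares mod Q; any m-1 of
    them look uniformly random on their own."""
    shares = [random.randrange(Q) for _ in range(m - 1)]
    shares.append((value - sum(shares)) % Q)
    return shares

users = [13, 7, 42, 5]
pool = [msg for v in users for msg in to_messages(v, m=3)]
random.shuffle(pool)      # the shuffler breaks message-user links
estimate = sum(pool) % Q  # analyzer recovers only the total
assert estimate == sum(users) % Q
print("aggregate:", estimate)
```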
arXiv Detail & Related papers (2024-07-29T01:46:44Z)
- Unified Mechanism-Specific Amplification by Subsampling and Group Privacy Amplification [54.1447806347273]
Amplification by subsampling is one of the main primitives in machine learning with differential privacy.
We propose the first general framework for deriving mechanism-specific guarantees.
We analyze how subsampling affects the privacy of groups of multiple users.
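For context, the classical bound this paper generalizes says that running an ε-DP mechanism on a Poisson subsample with rate q yields ε' = log(1 + q(e^ε − 1)). A quick numeric check:

```python
import math

def amplified_epsilon(eps: float, q: float) -> float:
    """Standard privacy amplification by Poisson subsampling for an
    eps-DP mechanism run on a rate-q subsample of the data."""
    return math.log(1.0 + q * (math.exp(eps) - 1.0))

for q in (1.0, 0.1, 0.01):
    print(f"q={q:>4}: eps' = {amplified_epsilon(1.0, q):.4f}")
# q=1 recovers eps; for small q, eps' ~ q * (e^eps - 1) << eps
```

The paper's contribution is mechanism-specific guarantees that can be tighter than this generic bound, plus an analysis for groups of users.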
arXiv Detail & Related papers (2024-03-07T19:36:05Z)
- Declarative Privacy-Preserving Inference Queries [21.890318255305026]
We propose an end-to-end workflow for automating privacy-preserving inference queries. Our proposed novel declarative privacy-preserving workflow allows users to specify "what private information to protect" rather than "how to protect" it.
arXiv Detail & Related papers (2024-01-22T22:50:59Z)
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy ($f$-DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
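As background for the $f$-DP lens used here: any (ε, 0)-DP mechanism has trade-off curve at least f_ε(α) = max(0, 1 − e^ε·α, e^{−ε}(1 − α)), and binary randomized response (a discrete-valued mechanism) attains it. The paper derives tighter, mechanism-specific curves; the generic bound below is only the baseline.

```python
import math

def tradeoff_pure_dp(alpha: float, eps: float) -> float:
    """Generic trade-off lower bound for any (eps, 0)-DP mechanism:
    the smallest type-II error achievable at type-I error alpha.
    Tight f-DP curves for specific mechanisms lie at or above this."""
    return max(0.0, 1.0 - math.exp(eps) * alpha,
               math.exp(-eps) * (1.0 - alpha))

for alpha in (0.0, 0.05, 0.25, 0.5):
    print(f"alpha={alpha:.2f}: beta >= {tradeoff_pure_dp(alpha, 1.0):.4f}")
```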
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
- Collusion Resistant Federated Learning with Oblivious Distributed Differential Privacy [4.951247283741297]
Privacy-preserving federated learning enables a population of distributed clients to jointly learn a shared model.
We present an efficient mechanism based on oblivious distributed differential privacy that is the first to protect against client collusion.
We conclude with empirical analysis of the protocol's execution speed, learning accuracy, and privacy performance on two data sets.
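The protocol details are not in this summary; the sketch below only illustrates the core distributed-DP idea it builds on: each client contributes a share of the noise, so no single party holds the full noise term, yet the aggregate is properly calibrated. Gaussian noise splitting is an illustrative stand-in for the paper's oblivious mechanism.

```python
import numpy as np

rng = np.random.default_rng(4)

def client_update(grad: np.ndarray, sigma_total: float,
                  n_clients: int) -> np.ndarray:
    """Each client adds Gaussian noise with variance sigma_total^2 / n,
    so the *sum* over n clients carries N(0, sigma_total^2) noise per
    coordinate while each individual update is only partially noised."""
    return grad + rng.normal(0.0, sigma_total / np.sqrt(n_clients),
                             grad.shape)

n, dim, sigma = 10, 5, 8.0
grads = [rng.standard_normal(dim) * 0.1 for _ in range(n)]
noisy_sum = sum(client_update(g, sigma, n) for g in grads)
print(noisy_sum.round(2))  # aggregate noise ~ N(0, sigma^2) per coordinate
```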
arXiv Detail & Related papers (2022-02-20T19:52:53Z)
- PRICURE: Privacy-Preserving Collaborative Inference in a Multi-Party Setting [3.822543555265593]
This paper presents PRICURE, a system that combines complementary strengths of secure multi-party computation and differential privacy.
PRICURE enables privacy-preserving collaborative prediction among multiple model owners.
We evaluate PRICURE on neural networks across four datasets including benchmark medical image classification datasets.
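PRICURE's exact protocol is not given in this summary; the sketch below combines the two named ingredients in the simplest hypothetical way: model owners additively secret-share their prediction vectors, and the result is perturbed with Laplace noise before a label is released. This is not the actual PRICURE construction.

```python
import numpy as np

rng = np.random.default_rng(5)

def secret_share(vec: np.ndarray, n_parties: int) -> list[np.ndarray]:
    """Additive secret sharing over the reals: any n-1 shares reveal
    nothing about vec; only the sum reconstructs it."""
    shares = [rng.standard_normal(vec.shape) for _ in range(n_parties - 1)]
    shares.append(vec - sum(shares))
    return shares

# Three model owners, each with a softmax prediction over 4 classes.
predictions = [rng.dirichlet(np.ones(4)) for _ in range(3)]

# Each owner shares its prediction among the computing parties.
per_party = list(zip(*[secret_share(p, n_parties=3) for p in predictions]))
party_sums = [sum(shares) for shares in per_party]  # local share of total

# Reconstruction plus differential privacy at release time.
total = sum(party_sums)
noisy = total + rng.laplace(0.0, 0.5, size=total.shape)  # illustrative scale
print("released label:", int(np.argmax(noisy)))
```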
arXiv Detail & Related papers (2021-02-19T05:55:53Z)
- Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
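The abstract names the ingredients (JDP, optimism) without the mechanism. A common pattern in this line of work, shown below as a hedged sketch, is to privatize state-action visit counts with noise and derive optimistic exploration bonuses from the noisy counts; all parameters here are illustrative.

```python
import math
import random

def noisy_count(true_count: int, eps: float) -> float:
    """Privatize a visitation count with Laplace noise (a simple
    stand-in for the paper's private counters), floored to stay usable
    in the bonus formula."""
    noise = random.expovariate(eps) - random.expovariate(eps)  # Laplace(1/eps)
    return max(true_count + noise, 1.0)

def optimism_bonus(count: float, horizon: int = 10) -> float:
    """UCB-style exploration bonus computed from the *noisy* count, so
    action selection only touches privatized statistics."""
    return horizon * math.sqrt(math.log(100.0) / count)

counts = {("s0", "left"): 3, ("s0", "right"): 40}
for sa, n in counts.items():
    priv = noisy_count(n, eps=1.0)
    print(sa, "bonus =", round(optimism_bonus(priv), 3))
```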
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.