Collaborative Inference over Wireless Channels with Feature Differential Privacy
- URL: http://arxiv.org/abs/2410.19917v1
- Date: Fri, 25 Oct 2024 18:11:02 GMT
- Title: Collaborative Inference over Wireless Channels with Feature Differential Privacy
- Authors: Mohamed Seif, Yuqi Nie, Andrea J. Goldsmith, H. Vincent Poor
- Abstract summary: Collaborative inference among multiple wireless edge devices has the potential to significantly enhance Artificial Intelligence (AI) applications.
However, transmitting the extracted features poses a significant privacy risk, as sensitive personal data can be exposed during the process.
We propose a novel privacy-preserving collaborative inference mechanism, wherein each edge device in the network secures the privacy of extracted features before transmitting them to a central server for inference.
- Score: 57.68286389879283
- Abstract: Collaborative inference among multiple wireless edge devices has the potential to significantly enhance Artificial Intelligence (AI) applications, particularly for sensing and computer vision. This approach typically involves a three-stage process: a) data acquisition through sensing, b) feature extraction, and c) feature encoding for transmission. However, transmitting the extracted features poses a significant privacy risk, as sensitive personal data can be exposed during the process. To address this challenge, we propose a novel privacy-preserving collaborative inference mechanism, wherein each edge device in the network secures the privacy of extracted features before transmitting them to a central server for inference. Our approach is designed to achieve two primary objectives: 1) reducing communication overhead and 2) ensuring strict privacy guarantees during feature transmission, while maintaining effective inference performance. Additionally, we introduce an over-the-air pooling scheme specifically designed for classification tasks, which provides formal guarantees on the privacy of transmitted features and establishes a lower bound on classification accuracy.
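The abstract states the mechanism at a high level; the sketch below illustrates the general pattern in Python under stated assumptions, not the paper's exact scheme. Each device clips its extracted feature vector, adds Gaussian noise sized by a privacy budget, and the noisy features superpose over the multiple-access channel before pooling at the server. The function names, the sensitivity bound, and all parameters here are illustrative assumptions.

```python
import numpy as np

def privatize_features(features, clip_norm=1.0, epsilon=2.0, delta=1e-5):
    """Clip the feature vector and add Gaussian noise. The noise scale uses
    the textbook Gaussian mechanism with L2 sensitivity 2*clip_norm; the
    paper's feature-DP calibration is more refined than this."""
    norm = np.linalg.norm(features)
    clipped = features * min(1.0, clip_norm / max(norm, 1e-12))
    sigma = 2 * clip_norm * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return clipped + np.random.normal(0.0, sigma, size=clipped.shape)

def over_the_air_pool(device_features, channel_noise_std=0.1):
    """Analog over-the-air aggregation: simultaneous transmissions add up in
    the multiple-access channel; the server sees a noisy sum and averages."""
    superposed = np.sum(device_features, axis=0)
    superposed += np.random.normal(0.0, channel_noise_std, size=superposed.shape)
    return superposed / len(device_features)

# Four edge devices, each with a 64-dimensional extracted feature vector.
rng = np.random.default_rng(0)
raw_features = [rng.standard_normal(64) for _ in range(4)]
noisy = [privatize_features(f) for f in raw_features]
pooled = over_the_air_pool(noisy)  # input to the server-side classifier
```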
Related papers
- Privacy-Preserving Hybrid Ensemble Model for Network Anomaly Detection: Balancing Security and Data Protection [6.5920909061458355]
We propose a hybrid ensemble model that incorporates privacy-preserving techniques to address both detection accuracy and data protection.
Our model combines the strengths of several machine learning algorithms, including K-Nearest Neighbors (KNN), Support Vector Machines (SVM), XGBoost, and Artificial Neural Networks (ANN).
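A minimal, runnable sketch of that ensemble pattern using scikit-learn (the paper's actual model, data, and privacy-preserving layer are not reproduced here; GradientBoostingClassifier stands in for XGBoost to avoid an extra dependency):

```python
from sklearn.ensemble import VotingClassifier, GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier()),
        ("svm", SVC(probability=True)),   # probability=True enables soft voting
        ("gb", GradientBoostingClassifier()),
        ("ann", MLPClassifier(max_iter=500)),
    ],
    voting="soft",  # average predicted probabilities across models
)
ensemble.fit(X_tr, y_tr)
print("accuracy:", ensemble.score(X_te, y_te))
```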
arXiv Detail & Related papers (2025-02-13T06:33:16Z)
- Facial Expression Recognition with Controlled Privacy Preservation and Feature Compensation [24.619279669211842]
Facial expression recognition (FER) systems raise significant privacy concerns due to the potential exposure of sensitive identity information.
This paper presents a study on removing identity information while preserving FER capabilities.
We introduce a controlled privacy enhancement mechanism to optimize performance and a feature compensator to enhance task-relevant features without compromising privacy.
arXiv Detail & Related papers (2024-11-29T23:12:38Z)
- Inference Privacy: Properties and Mechanisms [8.471466670802817]
Inference Privacy (IP) allows a user to interact with a model while providing a rigorous privacy guarantee for the user's data at inference time.
We present two types of mechanisms for achieving IP: input perturbations and output perturbations, both customizable by the user.
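A toy illustration of the two mechanism families (the model, noise scales, and names here are assumptions for illustration; real mechanisms calibrate the noise to a formal privacy budget):

```python
import numpy as np

def model(x):
    return float(np.tanh(x).mean())  # stand-in for the deployed model

def input_perturbation(x, sigma_in=0.5):
    """Randomize the query before the model sees it (local-DP flavor)."""
    return model(x + np.random.normal(0.0, sigma_in, size=x.shape))

def output_perturbation(x, sigma_out=0.1):
    """Run the model on the raw input, then randomize the released answer."""
    return model(x) + np.random.normal(0.0, sigma_out)

x = np.random.default_rng(1).standard_normal(16)
print(input_perturbation(x), output_perturbation(x))
```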
arXiv Detail & Related papers (2024-11-27T20:47:28Z)
- Over-the-Air Collaborative Inference with Feature Differential Privacy [8.099700053397278]
Collaborative inference can enhance Artificial Intelligence (AI) applications, including autonomous driving, personal identification, and activity classification.
The transmission of extracted features entails the potential risk of exposing sensitive personal data.
A new privacy-preserving collaborative inference mechanism is developed.
arXiv Detail & Related papers (2024-06-01T01:39:44Z)
- Unified Mechanism-Specific Amplification by Subsampling and Group Privacy Amplification [54.1447806347273]
Amplification by subsampling is one of the main primitives in machine learning with differential privacy.
We propose the first general framework for deriving mechanism-specific guarantees.
We analyze how subsampling affects the privacy of groups of multiple users.
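As background, the classical mechanism-agnostic bound that such frameworks tighten (a standard result, not taken from this paper): Poisson subsampling with sampling rate $q$ amplifies an $(\varepsilon, \delta)$-DP mechanism to

$$\Bigl(\log\bigl(1 + q\,(e^{\varepsilon} - 1)\bigr),\; q\,\delta\Bigr)\text{-DP},$$

so for small $\varepsilon$ the effective privacy loss shrinks roughly by the factor $q$.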
arXiv Detail & Related papers (2024-03-07T19:36:05Z)
- Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z)
- Flexible Differentially Private Vertical Federated Learning with Adaptive Feature Embeddings [24.36847069007795]
Vertical federated learning (VFL) has raised concerns about imperfections in its privacy protection.
This paper studies the delicate equilibrium between the data privacy and task utility goals of VFL under differential privacy (DP).
We propose a flexible and generic approach that decouples the two goals and addresses them successively.
arXiv Detail & Related papers (2023-07-26T04:40:51Z)
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
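For reference, the standard $f$-DP definition from the Gaussian DP literature (background, not this paper's contribution): a mechanism $M$ is $f$-DP if testing between its outputs on any neighboring datasets $S, S'$ is at least as hard as the trade-off function $f$,

$$T(P, Q)(\alpha) = \inf\{\,\beta_\phi : \alpha_\phi \le \alpha\,\}, \qquad M \text{ is } f\text{-DP} \iff T\bigl(M(S), M(S')\bigr) \ge f,$$

where $\alpha_\phi$ and $\beta_\phi$ are the type I and type II errors of a rejection rule $\phi$.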
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural network training, such as gradient clipping and noise addition, affect the robustness of the model.
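Those two ingredients are the core of DP-SGD (Abadi et al.); a minimal sketch of one step, with illustrative hyperparameters not taken from the paper:

```python
import numpy as np

def dp_sgd_step(weights, per_example_grads, clip_norm=1.0, noise_mult=1.1, lr=0.1):
    """Clip each example's gradient to clip_norm, average, and add Gaussian
    noise with std noise_mult * clip_norm / batch_size (noise on the mean)."""
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    avg = np.mean(clipped, axis=0)
    noise = np.random.normal(0.0, noise_mult * clip_norm / len(clipped),
                             size=avg.shape)
    return weights - lr * (avg + noise)

w = np.zeros(10)
grads = np.random.default_rng(4).standard_normal((32, 10))  # 32 examples
w = dp_sgd_step(w, grads)
```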
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
- Graph-Homomorphic Perturbations for Private Decentralized Learning [64.26238893241322]
Local exchange of estimates allows inference of the underlying private data.
Perturbations chosen independently at every agent result in a significant performance loss.
We propose an alternative scheme, which constructs perturbations according to a particular nullspace condition, allowing them to be invisible.
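A minimal sketch of the canceling-noise idea (the paper's construction depends on the graph topology and is more general; this zero-sum variant is an illustrative assumption): perturbations are drawn to lie in the nullspace of the network-wide averaging operator, masking individual estimates while leaving the aggregate untouched.

```python
import numpy as np

def zero_sum_perturbations(num_agents, dim, scale=1.0, rng=None):
    """Draw i.i.d. Gaussian noise and subtract the across-agent mean, so the
    perturbations sum to zero (nullspace of the averaging operator)."""
    rng = rng or np.random.default_rng()
    z = rng.normal(0.0, scale, size=(num_agents, dim))
    return z - z.mean(axis=0, keepdims=True)

estimates = np.random.default_rng(2).standard_normal((5, 3))  # 5 agents, 3-dim
masked = estimates + zero_sum_perturbations(5, 3, rng=np.random.default_rng(3))
# Individual estimates are masked, but the network average is exact:
assert np.allclose(masked.mean(axis=0), estimates.mean(axis=0))
```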
arXiv Detail & Related papers (2020-10-23T10:35:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.