Flexible Differentially Private Vertical Federated Learning with
Adaptive Feature Embeddings
- URL: http://arxiv.org/abs/2308.02362v1
- Date: Wed, 26 Jul 2023 04:40:51 GMT
- Title: Flexible Differentially Private Vertical Federated Learning with
Adaptive Feature Embeddings
- Authors: Yuxi Mi, Hongquan Liu, Yewei Xia, Yiheng Sun, Jihong Guan, Shuigeng
Zhou
- Abstract summary: Vertical federated learning (VFL) has raised concerns about imperfect privacy protection, as shared feature embeddings may reveal sensitive information.
This paper studies the delicate equilibrium between the data-privacy and task-utility goals of VFL under differential privacy (DP).
We propose a flexible and generic approach that decouples the two goals and addresses them successively.
- Score: 24.36847069007795
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The emergence of vertical federated learning (VFL) has stimulated concerns
about imperfect privacy protection, as shared feature embeddings may
reveal sensitive information under privacy attacks. This paper studies the
delicate equilibrium between the data-privacy and task-utility goals of VFL under
differential privacy (DP). To address the limited generality of prior art, this
paper advocates a flexible and generic approach that decouples the two goals
and addresses them successively. Specifically, we initially derive a rigorous
privacy guarantee by applying norm clipping on shared feature embeddings, which
is applicable across various datasets and models. Subsequently, we demonstrate
that task utility can be optimized via adaptive adjustments on the scale and
distribution of feature embeddings in an accuracy-appreciative way, without
compromising established DP mechanisms. We concretize our observation into the
proposed VFL-AFE framework, which exhibits effectiveness against privacy
attacks and the capacity to retain favorable task utility, as substantiated by
extensive experiments.
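To make the clip-then-perturb idea concrete, below is a minimal sketch of a DP release of feature embeddings via per-sample norm clipping and Gaussian noise. The function name, the L2 clipping rule, and the noise calibration are illustrative assumptions on our part, not the exact VFL-AFE mechanism.

```python
import torch

def clip_and_perturb(h: torch.Tensor, clip_c: float, sigma: float) -> torch.Tensor:
    """Hypothetical stand-in for the clip-then-noise step: clip each
    sample's embedding to L2 norm <= clip_c, then add Gaussian noise
    calibrated to that sensitivity bound."""
    # Per-sample norm clipping bounds the L2 sensitivity of each shared
    # embedding by clip_c, independent of the dataset and model.
    norms = h.norm(dim=1, keepdim=True).clamp(min=1e-12)
    h = h * (clip_c / norms).clamp(max=1.0)
    # Gaussian mechanism: noise scale proportional to the clipping bound.
    return h + torch.randn_like(h) * sigma * clip_c

# Illustrative use: a party perturbs its embeddings before sharing them.
h_shared = clip_and_perturb(torch.randn(32, 64), clip_c=1.0, sigma=1.0)
```

Because the guarantee depends only on the post-clipping sensitivity, adaptively rescaling or reshaping the embedding distribution upstream of the clipping step (the paper's utility lever) leaves the DP analysis untouched.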
Related papers
- Collaborative Inference over Wireless Channels with Feature Differential Privacy [57.68286389879283]
Collaborative inference among multiple wireless edge devices has the potential to significantly enhance Artificial Intelligence (AI) applications.
However, transmitting the extracted features poses a significant privacy risk, as sensitive personal data can be exposed during the process.
We propose a novel privacy-preserving collaborative inference mechanism, wherein each edge device in the network secures the privacy of extracted features before transmitting them to a central server for inference.
arXiv Detail & Related papers (2024-10-25T18:11:02Z)
- Enhancing Feature-Specific Data Protection via Bayesian Coordinate Differential Privacy [55.357715095623554]
Local Differential Privacy (LDP) offers strong privacy guarantees without requiring users to trust external parties.
We propose a Bayesian framework, Bayesian Coordinate Differential Privacy (BCDP), that enables feature-specific privacy quantification.
arXiv Detail & Related papers (2024-10-24T03:39:55Z)
- Universally Harmonizing Differential Privacy Mechanisms for Federated Learning: Boosting Accuracy and Convergence [22.946928984205588]
Differentially private federated learning (DP-FL) is a promising technique for collaborative model training.
We propose the first DP-FL framework (namely UDP-FL) which universally harmonizes any randomization mechanism.
We show that UDP-FL exhibits substantial resilience against different inference attacks.
arXiv Detail & Related papers (2024-07-20T00:11:59Z)
- FedAdOb: Privacy-Preserving Federated Deep Learning with Adaptive Obfuscation [26.617708498454743]
Federated learning (FL) has emerged as a collaborative approach that allows multiple clients to jointly learn a machine learning model without sharing their private data.
We propose a novel adaptive obfuscation mechanism, coined FedAdOb, to protect private data without sacrificing model performance.
arXiv Detail & Related papers (2024-06-03T08:12:09Z)
- Unified Mechanism-Specific Amplification by Subsampling and Group Privacy Amplification [54.1447806347273]
Amplification by subsampling is one of the main primitives in machine learning with differential privacy.
We propose the first general framework for deriving mechanism-specific guarantees.
We analyze how subsampling affects the privacy of groups of multiple users.
arXiv Detail & Related papers (2024-03-07T19:36:05Z)
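As background for the amplification-by-subsampling primitive above, the classical Poisson-subsampling bound (a standard result, not the paper's tighter mechanism-specific guarantee) is:

```latex
% If a mechanism M is (eps, delta)-DP, then running M on a Poisson
% subsample with sampling rate q satisfies (eps', delta')-DP with
\varepsilon' = \log\!\bigl(1 + q\,(e^{\varepsilon} - 1)\bigr),
\qquad \delta' = q\,\delta .
% For small eps this gives eps' ~ q * eps: subsampling linearly
% shrinks the privacy loss.
```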
- Independent Distribution Regularization for Private Graph Embedding [55.24441467292359]
Graph embeddings are susceptible to attribute inference attacks, which allow attackers to infer private node attributes from the learned graph embeddings.
To address these concerns, privacy-preserving graph embedding methods have emerged.
We propose a novel approach called Private Variational Graph AutoEncoders (PVGAE) with the aid of independent distribution penalty as a regularization term.
arXiv Detail & Related papers (2023-08-16T13:32:43Z)
- Theoretically Principled Federated Learning for Balancing Privacy and Utility [61.03993520243198]
We propose a general learning framework for protection mechanisms that preserve privacy by distorting model parameters.
It can achieve personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
arXiv Detail & Related papers (2023-05-24T13:44:02Z)
- FedPass: Privacy-Preserving Vertical Federated Deep Learning with Adaptive Obfuscation [14.008415333848802]
Vertical federated learning (VFL) allows an active party with labeled features to leverage auxiliary features from passive parties to improve model performance.
Concerns about the private feature and label leakage in both the training and inference phases of VFL have drawn wide research attention.
We propose a general privacy-preserving vertical federated deep learning framework called FedPass, which leverages adaptive obfuscation to protect the feature and label simultaneously.
arXiv Detail & Related papers (2023-01-30T02:36:23Z)
- Privacy-Preserving Distributed Expectation Maximization for Gaussian Mixture Model using Subspace Perturbation [4.2698418800007865]
Federated learning is motivated by privacy concerns: it transmits only intermediate updates rather than private data.
We propose a fully decentralized privacy-preserving solution, which is able to securely compute the updates in each step.
Numerical validation shows that the proposed approach outperforms the existing approach in terms of both accuracy and privacy level.
arXiv Detail & Related papers (2022-09-16T09:58:03Z)
- Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z)
- Federated Learning with Sparsification-Amplified Privacy and Adaptive Optimization [27.243322019117144]
Federated learning (FL) enables distributed agents to collaboratively learn a centralized model without sharing their raw data with each other.
We propose a new FL framework with sparsification-amplified privacy.
Our approach integrates random sparsification with gradient perturbation on each agent to amplify privacy guarantee.
arXiv Detail & Related papers (2020-08-01T20:22:57Z)
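As a rough illustration of the sparsification-plus-perturbation idea in the last entry, the sketch below applies a Bernoulli coordinate mask followed by clipped Gaussian noise to an agent's local gradient. The mask design, clipping rule, and noise calibration are our assumptions, not the paper's exact scheme.

```python
import torch

def sparsify_and_perturb(grad: torch.Tensor, keep_prob: float,
                         clip_c: float, sigma: float) -> torch.Tensor:
    """Hypothetical per-agent update: random sparsification followed
    by clipped Gaussian perturbation of the local gradient."""
    # Keep each coordinate independently with probability keep_prob;
    # dropped coordinates carry no information, which amplifies privacy.
    mask = (torch.rand_like(grad) < keep_prob).to(grad.dtype)
    sparse = grad * mask
    # Clip the surviving gradient to bound its L2 sensitivity ...
    scale = min(1.0, clip_c / max(sparse.norm().item(), 1e-12))
    # ... then perturb it with Gaussian noise calibrated to clip_c.
    return sparse * scale + torch.randn_like(grad) * sigma * clip_c
```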
This list is automatically generated from the titles and abstracts of the papers on this site.