DPZV: Resource Efficient ZO Optimization For Differentially Private VFL
- URL: http://arxiv.org/abs/2502.20565v1
- Date: Thu, 27 Feb 2025 22:07:16 GMT
- Title: DPZV: Resource Efficient ZO Optimization For Differentially Private VFL
- Authors: Jianing Zhang, Evan Chen, Chaoyue Liu, Christopher G. Brinton,
- Abstract summary: Vertical Federated Learning (VFL) enables collaborative model training across feature-partitioned data, yet faces privacy risks when scaling to large models. We propose DPZV, a memory-efficient Zeroth-Order (ZO) framework that integrates differential privacy (DP). Our framework eliminates backpropagation through two-point gradient estimation, reducing client memory usage by 90% compared to first-order counterparts.
- Score: 9.302691218735406
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vertical Federated Learning (VFL) enables collaborative model training across feature-partitioned data, yet faces significant privacy risks and inefficiencies when scaling to large models. We propose DPZV, a memory-efficient Zeroth-Order (ZO) optimization framework that integrates differential privacy (DP) with vertical federated learning, addressing three critical challenges: (1) privacy vulnerabilities from gradient leakage, (2) high computation/communication costs of first-order methods, and (3) excessive memory footprint in conventional zeroth-order approaches. Our framework eliminates backpropagation through two-point gradient estimation, reducing client memory usage by 90\% compared to first-order counterparts while enabling asynchronous communication. By strategically injecting Gaussian noise on the server, DPZV achieves rigorous $(\epsilon, \delta)$-DP guarantees without third-party trust assumptions. Theoretical analysis establishes a convergence rate matching the centralized case under non-convex objectives. Extensive experiments on image and NLP benchmarks demonstrate that DPZV outperforms all baselines in accuracy while providing strong privacy assurances ($\epsilon \leq 10$) and requiring far fewer computational resources, establishing new state-of-the-art privacy-utility tradeoffs for resource-constrained VFL deployments.
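To make the core mechanism concrete, here is a minimal Python sketch (not the authors' code) of the two ingredients the abstract combines: a two-point zeroth-order gradient estimate that needs only forward passes, and server-side clipping plus Gaussian noise for DP. All function names, the perturbation scale `mu`, and the noise parameters are illustrative assumptions.

```python
import numpy as np

def zo_two_point_grad(loss_fn, w, mu=1e-3, rng=None):
    """Estimate the gradient from two forward evaluations along a random direction."""
    rng = rng or np.random.default_rng()
    u = rng.standard_normal(w.shape)                      # random perturbation direction
    d = (loss_fn(w + mu * u) - loss_fn(w - mu * u)) / (2.0 * mu)
    return d * u                                          # directional finite difference

def server_dp_noise(update, clip_norm=1.0, sigma=0.1, rng=None):
    """Clip the update to bound sensitivity, then add Gaussian noise on the server."""
    rng = rng or np.random.default_rng()
    update = update * min(1.0, clip_norm / (np.linalg.norm(update) + 1e-12))
    return update + rng.normal(0.0, sigma * clip_norm, size=update.shape)

# Toy usage: a DP-ZO training loop on a quadratic objective.
loss = lambda w: float(np.sum((w - 3.0) ** 2))
w = np.zeros(5)
for _ in range(500):
    w -= 0.05 * server_dp_noise(zo_two_point_grad(loss, w), clip_norm=5.0)
```

Because the estimator touches the model only through `loss_fn`, no activation buffers for backpropagation are needed, which is where the memory savings come from.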
Related papers
- Linear-Time User-Level DP-SCO via Robust Statistics [55.350093142673316]
User-level differentially private convex optimization (DP-SCO) has garnered significant attention due to the importance of safeguarding user privacy in machine learning applications. Current methods, such as those based on differentially private stochastic gradient descent (DP-SGD), often struggle with high noise accumulation and suboptimal utility. We introduce a novel linear-time algorithm that leverages robust statistics, specifically the median and trimmed mean, to overcome these challenges.
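As a rough illustration of the robust statistics named in the abstract (not the paper's full linear-time algorithm), the sketch below aggregates per-user gradient estimates with a coordinate-wise trimmed mean and median; `trim_frac` is an illustrative parameter.

```python
import numpy as np

def trimmed_mean(per_user_grads, trim_frac=0.1):
    """Drop the smallest/largest trim_frac of values per coordinate, then average."""
    x = np.sort(np.asarray(per_user_grads), axis=0)   # shape: (n_users, dim)
    k = int(trim_frac * x.shape[0])
    return x[k: x.shape[0] - k].mean(axis=0)

def coordinate_median(per_user_grads):
    return np.median(np.asarray(per_user_grads), axis=0)

# A couple of wildly corrupted users barely move the robust aggregates.
grads = np.vstack([np.random.normal(0, 1, (98, 4)), 100.0 * np.ones((2, 4))])
print(trimmed_mean(grads, 0.05), coordinate_median(grads))
```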
arXiv Detail & Related papers (2025-02-13T02:05:45Z) - Providing Differential Privacy for Federated Learning Over Wireless: A Cross-layer Framework [19.381425127772054]
Federated Learning (FL) is a distributed machine learning framework that inherently allows edge devices to maintain their local training data. We propose a wireless physical layer (PHY) design for over-the-air FL (OTA-FL) which improves differential privacy (DP) through decentralized, dynamic power control. This adaptation showcases the flexibility and effectiveness of our design across different learning algorithms while maintaining a strong emphasis on privacy.
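The cross-layer idea can be caricatured in a few lines: devices invert their channels and deliberately lower a common power scale `alpha` so that the receiver's own noise, amplified by `1/alpha` after unbiasing, meets a target DP noise level. This is a hedged toy, not the paper's PHY design; `sigma_ch` and `sigma_dp` are assumed symbols.

```python
import numpy as np

def ota_aggregate(updates, channel_gains, sigma_ch=0.1, sigma_dp=0.5):
    """Over-the-air sum with power control that repurposes channel noise for DP."""
    alpha = min(1.0, sigma_ch / sigma_dp)                         # lower power => more effective noise
    tx = [alpha / h * g for g, h in zip(updates, channel_gains)]  # channel inversion
    rx = sum(h * x for x, h in zip(tx, channel_gains))            # signals align at the server
    rx = rx + np.random.normal(0.0, sigma_ch, size=rx.shape)      # receiver noise
    return rx / alpha                  # unbiased sum; effective noise std >= sigma_dp

print(ota_aggregate([np.ones(3), 2 * np.ones(3)], channel_gains=[0.8, 1.3]))
```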
arXiv Detail & Related papers (2024-12-05T18:27:09Z) - Share Your Secrets for Privacy! Confidential Forecasting with Vertical Federated Learning [5.584904689846748]
Key challenges to address in manufacturing include data privacy and over-fitting on small and noisy datasets.
We propose 'Secret-shared Time Series Forecasting with VFL' (STV), a novel secret-sharing-based forecasting framework.
Our results demonstrate that STV's forecasting accuracy is comparable to those of centralized approaches.
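The primitive behind 'secret-shared' forecasting is additive secret sharing: each private value is split into random shares that individually reveal nothing but sum back to the secret. A minimal sketch follows (real MPC systems share over a finite ring rather than the reals; this version is purely illustrative).

```python
import numpy as np

def share(secret, n_parties=3, rng=None):
    """Split `secret` into n_parties additive shares."""
    rng = rng or np.random.default_rng()
    shares = [rng.standard_normal(np.shape(secret)) for _ in range(n_parties - 1)]
    shares.append(np.asarray(secret, dtype=float) - sum(shares))
    return shares

def reconstruct(shares):
    return sum(shares)

x = np.array([1.5, -2.0, 0.25])        # one party's private time-series slice
assert np.allclose(reconstruct(share(x)), x)
```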
arXiv Detail & Related papers (2024-05-31T12:27:38Z) - TernaryVote: Differentially Private, Communication Efficient, and Byzantine Resilient Distributed Optimization on Heterogeneous Data [50.797729676285876]
We propose TernaryVote, which combines a ternary compressor and the majority vote mechanism to realize differential privacy, gradient compression, and Byzantine resilience simultaneously.
We theoretically quantify the privacy guarantee through the lens of the emerging f-differential privacy (DP) and the Byzantine resilience of the proposed algorithm.
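A hedged sketch of the two pieces named in the abstract: a stochastic ternary compressor that maps every coordinate to {-1, 0, +1}, and coordinate-wise majority voting at the server. The scaling rule and parameter names are assumptions, not the paper's exact scheme.

```python
import numpy as np

def ternary_compress(g, rng=None):
    """Send -1/0/+1 per coordinate, unbiased up to a known scale."""
    rng = rng or np.random.default_rng()
    scale = np.max(np.abs(g)) + 1e-12
    p = np.abs(g) / scale                          # probability of firing +1 or -1
    return np.sign(g) * (rng.random(g.shape) < p)

def majority_vote(ternary_msgs):
    """Aggregate by the sign of the per-coordinate vote tally."""
    return np.sign(np.sum(ternary_msgs, axis=0))

votes = [ternary_compress(np.array([0.9, -0.7, 0.05])) for _ in range(31)]
print(majority_vote(votes))                        # robust descent direction
```

Because each worker contributes at most one vote per coordinate, a minority of Byzantine workers cannot flip coordinates where honest workers agree, which is the intuition behind the resilience claim.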
arXiv Detail & Related papers (2024-02-16T16:41:14Z) - Binary Federated Learning with Client-Level Differential Privacy [7.854806519515342]
Federated learning (FL) is a privacy-preserving collaborative learning framework.
Existing FL systems typically adopt Federated Averaging (FedAvg) as the training algorithm.
We propose a communication-efficient FL training algorithm with differential privacy guarantee.
arXiv Detail & Related papers (2023-08-07T06:07:04Z) - sqSGD: Locally Private and Communication Efficient Federated Learning [14.60645909629309]
Federated learning (FL) is a technique that trains machine learning models from decentralized data sources.
We develop a gradient-based learning algorithm called sqSGD that addresses communication efficiency and high-dimensional compatibility.
Experiment results show sqSGD successfully learns large models like LeNet and ResNet with local privacy constraints.
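For intuition about the quantization side (the local-privacy calibration is the paper's contribution and is omitted here), below is a generic unbiased b-bit stochastic quantizer of the kind such schemes build on; all names and defaults are illustrative.

```python
import numpy as np

def stochastic_quantize(g, bits=2, rng=None):
    """Round each coordinate to one of 2**bits levels with unbiased randomized rounding."""
    rng = rng or np.random.default_rng()
    lo, hi = g.min(), g.max()
    levels = 2 ** bits - 1
    t = (g - lo) / (hi - lo + 1e-12) * levels            # position in [0, levels]
    q = np.floor(t) + (rng.random(g.shape) < (t - np.floor(t)))
    return lo + q / levels * (hi - lo)

g = np.random.randn(10000)
print(abs(stochastic_quantize(g).mean() - g.mean()))     # small: rounding is unbiased
```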
arXiv Detail & Related papers (2022-06-21T17:45:35Z) - Decentralized Stochastic Optimization with Inherent Privacy Protection [103.62463469366557]
Decentralized optimization is the basic building block of modern collaborative machine learning, distributed estimation and control, and large-scale sensing.
Since the data involved are often sensitive, privacy protection has become an increasingly pressing need in the implementation of decentralized optimization algorithms.
arXiv Detail & Related papers (2022-05-08T14:38:23Z) - Desirable Companion for Vertical Federated Learning: New Zeroth-Order Gradient Based Algorithm [140.25480610981504]
A complete list of metrics to evaluate VFL algorithms should include model applicability, privacy, communication, and computation efficiency.
We propose a novel VFL framework with black-box scalability.
arXiv Detail & Related papers (2022-03-19T13:55:47Z) - Secure Bilevel Asynchronous Vertical Federated Learning with Backward Updating [159.48259714642447]
Vertical federated learning (VFL) attracts increasing attention due to the demands of multi-party collaborative modeling and concerns of privacy leakage.
We propose a novel bilevel parallel architecture (VF$\textbf{B}^2$), under which three new algorithms, including VF$\textbf{B}^2$, are proposed.
arXiv Detail & Related papers (2021-03-01T12:34:53Z) - Federated Learning with Sparsification-Amplified Privacy and Adaptive Optimization [27.243322019117144]
Federated learning (FL) enables distributed agents to collaboratively learn a centralized model without sharing their raw data with each other.
We propose a new FL framework with sparsification-amplified privacy.
Our approach integrates random sparsification with gradient perturbation on each agent to amplify privacy guarantee.
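The recipe in the abstract can be sketched in a few lines: clip to bound sensitivity, add Gaussian noise, and transmit only a random subset of coordinates, whose subsampling amplifies the privacy guarantee. The rates and scales below are illustrative, not the paper's calibrated values.

```python
import numpy as np

def sparsify_and_perturb(g, keep_frac=0.1, clip_norm=1.0, sigma=1.0, rng=None):
    rng = rng or np.random.default_rng()
    g = g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))    # bound sensitivity
    noisy = g + rng.normal(0.0, sigma * clip_norm, size=g.shape) # gradient perturbation
    mask = rng.random(g.shape) < keep_frac                       # random sparsification
    return np.where(mask, noisy, 0.0)          # only surviving coordinates are sent
```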
arXiv Detail & Related papers (2020-08-01T20:22:57Z) - User-Level Privacy-Preserving Federated Learning: Analysis and Performance Optimization [77.43075255745389]
Federated learning (FL) is capable of preserving the private data of mobile terminals (MTs) while training useful models from that data. From an information-theoretic viewpoint, it is still possible for a curious server to infer private information from the shared models uploaded by MTs.
We propose a user-level differential privacy (UDP) algorithm by adding artificial noise to the shared models before uploading them to servers.
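A minimal sketch of the UDP upload step as the abstract describes it: clip each MT's model update and add artificial Gaussian noise before it leaves the device. Calibrating `sigma` to a user-level $(\epsilon, \delta)$ budget is the paper's analysis; the value here is a placeholder.

```python
import numpy as np

def udp_upload(local_model, global_model, clip_norm=1.0, sigma=0.8, rng=None):
    rng = rng or np.random.default_rng()
    delta = local_model - global_model                            # this user's update
    delta = delta * min(1.0, clip_norm / (np.linalg.norm(delta) + 1e-12))
    return global_model + delta + rng.normal(0.0, sigma * clip_norm, size=delta.shape)
```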
arXiv Detail & Related papers (2020-02-29T10:13:39Z)