Towards Communication-efficient Vertical Federated Learning Training via
Cache-enabled Local Updates
- URL: http://arxiv.org/abs/2207.14628v1
- Date: Fri, 29 Jul 2022 12:10:36 GMT
- Title: Towards Communication-efficient Vertical Federated Learning Training via
Cache-enabled Local Updates
- Authors: Fangcheng Fu, Xupeng Miao, Jiawei Jiang, Huanran Xue, Bin Cui
- Abstract summary: We introduce CELU-VFL, a novel and efficient Vertical Federated Learning (VFL) framework.
CELU-VFL exploits the local update technique to reduce the cross-party communication rounds.
We show that CELU-VFL can be up to six times faster than the existing works.
- Score: 25.85564668511386
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vertical federated learning (VFL) is an emerging paradigm that allows
different parties (e.g., organizations or enterprises) to collaboratively build
machine learning models with privacy protection. In the training phase, VFL
only exchanges the intermediate statistics, i.e., forward activations and
backward derivatives, across parties to compute model gradients. Nevertheless,
due to its geo-distributed nature, VFL training usually suffers from low
WAN bandwidth.
In this paper, we introduce CELU-VFL, a novel and efficient VFL training
framework that exploits the local update technique to reduce the cross-party
communication rounds. CELU-VFL caches the stale statistics and reuses them to
estimate model gradients without exchanging fresh statistics on demand. Two
techniques are proposed to improve the convergence performance. First, to
handle the stochastic variance problem, we propose a uniform sampling strategy
to fairly choose the stale statistics for local updates. Second, to harness the
errors brought by the staleness, we devise an instance weighting mechanism that
measures the reliability of the estimated gradients. Theoretical analysis
proves that CELU-VFL achieves a sub-linear convergence rate similar to that of
vanilla VFL training while requiring far fewer communication rounds. Empirical results on
both public and real-world workloads validate that CELU-VFL can be up to six
times faster than the existing works.
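The abstract's three core ideas, caching stale cross-party statistics, sampling them uniformly for local updates, and down-weighting gradients estimated from very stale entries, can be sketched roughly as follows. All names (`StatCache`, `staleness_weight`, `local_updates`) and the specific weighting rule are illustrative assumptions, not the paper's exact algorithm:

```python
import random


class StatCache:
    """Cache of stale cross-party statistics keyed by mini-batch id."""

    def __init__(self):
        self.entries = {}  # batch_id -> (activation, step_cached)

    def put(self, batch_id, activation, step):
        self.entries[batch_id] = (activation, step)

    def sample_uniform(self):
        # Uniform sampling: every cached batch is equally likely to be
        # picked, which avoids biasing local updates toward the most
        # recently exchanged statistics.
        batch_id = random.choice(list(self.entries))
        activation, step_cached = self.entries[batch_id]
        return batch_id, activation, step_cached


def staleness_weight(step_now, step_cached, decay=0.1):
    # Illustrative instance weight: the staler the cached statistics,
    # the less the estimated gradient is trusted.
    return 1.0 / (1.0 + decay * (step_now - step_cached))


def local_updates(cache, w, grad_fn, step_now, k=5, lr=0.01):
    """Run k local steps from cached statistics before the next
    cross-party communication round (toy scalar model)."""
    for _ in range(k):
        _, act, step_cached = cache.sample_uniform()
        g = grad_fn(w, act)  # gradient estimated from a stale activation
        w = w - lr * staleness_weight(step_now, step_cached) * g
    return w
```

The point of the sketch is the communication pattern: k local steps consume only cached statistics, so parties exchange activations once per k updates instead of once per update.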
Related papers
- AdaptSFL: Adaptive Split Federated Learning in Resource-constrained Edge Networks [15.195798715517315]
Split federated learning (SFL) is a promising solution that offloads the primary training workload to a server via model partitioning.
We propose AdaptSFL, a novel resource-adaptive SFL framework, to expedite SFL under resource-constrained edge computing systems.
arXiv Detail & Related papers (2024-03-19T19:05:24Z)
- Communication Efficient ConFederated Learning: An Event-Triggered SAGA Approach [67.27031215756121]
Federated learning (FL) is a machine learning paradigm that targets model training without gathering the local data over various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as Confederated Learning (CFL), in order to accommodate a larger number of users.
arXiv Detail & Related papers (2024-02-28T03:27:10Z)
- Secure and Fast Asynchronous Vertical Federated Learning via Cascaded Hybrid Optimization [18.619236705579713]
We propose a cascaded hybrid optimization method for Vertical Federated Learning (VFL).
In this method, the downstream models (clients) are trained with zeroth-order optimization (ZOO) to protect privacy.
We show that our method achieves faster convergence than the ZOO-based VFL framework, while maintaining an equivalent level of privacy protection.
arXiv Detail & Related papers (2023-06-28T10:18:08Z)
- Hierarchical Personalized Federated Learning Over Massive Mobile Edge Computing Networks [95.39148209543175]
We propose hierarchical PFL (HPFL), an algorithm for deploying PFL over massive MEC networks.
HPFL combines the objectives of training loss minimization and round latency minimization while jointly determining the optimal bandwidth allocation.
arXiv Detail & Related papers (2023-03-19T06:00:05Z)
- Time-sensitive Learning for Heterogeneous Federated Edge Intelligence [52.83633954857744]
We investigate real-time machine learning in a federated edge intelligence (FEI) system.
FEI systems exhibit heterogeneous communication and computational resource distributions.
We propose a time-sensitive federated learning (TS-FL) framework to minimize the overall run-time for collaboratively training a shared ML model.
arXiv Detail & Related papers (2023-01-26T08:13:22Z)
- Low-Latency Cooperative Spectrum Sensing via Truncated Vertical Federated Learning [51.51440623636274]
We propose a vertical federated learning (VFL) framework to exploit the distributed features across multiple secondary users (SUs) without compromising data privacy.
To accelerate the training process, we propose a truncated vertical federated learning (T-VFL) algorithm.
The convergence performance of T-VFL is provided via mathematical analysis and justified by simulation results.
arXiv Detail & Related papers (2022-08-07T10:39:27Z)
- DVFL: A Vertical Federated Learning Method for Dynamic Data [2.406222636382325]
This paper studies vertical federated learning (VFL), which tackles the scenarios where collaborating organizations share the same set of users but disjoint features.
We propose a new vertical federated learning method, DVFL, which adapts to dynamic data distribution changes through knowledge distillation.
Our extensive experimental results show that DVFL can not only obtain results close to existing VFL methods in static scenes, but also adapt to changes in data distribution in dynamic scenarios.
arXiv Detail & Related papers (2021-11-05T09:26:09Z)
- Achieving Model Fairness in Vertical Federated Learning [47.8598060954355]
Vertical federated learning (VFL) enables multiple enterprises possessing non-overlapped features to strengthen their machine learning models without disclosing their private data and model parameters.
VFL suffers from fairness issues, i.e., the learned model may be unfairly discriminatory over the group with sensitive attributes.
We propose a fair VFL framework to tackle this problem.
arXiv Detail & Related papers (2021-09-17T04:40:11Z)
- Over-the-Air Federated Learning from Heterogeneous Data [107.05618009955094]
Federated learning (FL) is a framework for distributed learning of centralized models.
We develop a Convergent Over-the-Air FL (COTAF) algorithm which enhances the common local stochastic gradient descent (SGD) FL algorithm.
We numerically show that the precoding induced by COTAF notably improves the convergence rate and the accuracy of models trained via OTA FL.
arXiv Detail & Related papers (2020-09-27T08:28:25Z)
- Gradient Statistics Aware Power Control for Over-the-Air Federated Learning [59.40860710441232]
Federated learning (FL) is a promising technique that enables many edge devices to train a machine learning model collaboratively in wireless networks.
This paper studies the power control problem for over-the-air FL by taking gradient statistics into account.
arXiv Detail & Related papers (2020-03-04T14:06:51Z)
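Among the entries above, the cascaded hybrid optimization paper trains its downstream models with zeroth-order optimization (ZOO), which estimates gradients from loss values alone rather than from backpropagated derivatives. A minimal two-point estimator in the standard textbook form, not that paper's exact scheme (`zoo_grad` and `mu` are illustrative names), looks like this:

```python
import random


def zoo_grad(f, w, mu=1e-4):
    """Two-point zeroth-order gradient estimate of f at point w
    (a list of floats), using a random Gaussian direction u."""
    u = [random.gauss(0.0, 1.0) for _ in w]          # random direction
    w_plus = [wi + mu * ui for wi, ui in zip(w, u)]  # perturbed point
    scale = (f(w_plus) - f(w)) / mu                  # directional derivative
    return [scale * ui for ui in u]                  # estimate along u
```

A single estimate is noisy, but its expectation approaches the true gradient as `mu` shrinks, which is why averaging over mini-batches or repeated probes makes ZOO usable for training without exposing model internals.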
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.