Fault Tolerant Serverless VFL Over Dynamic Device Environment
- URL: http://arxiv.org/abs/2312.16638v2
- Date: Tue, 30 Jul 2024 00:07:01 GMT
- Title: Fault Tolerant Serverless VFL Over Dynamic Device Environment
- Authors: Surojit Ganguli, Zeyu Zhou, Christopher G. Brinton, David I. Inouye
- Abstract summary: We study the test-time performance of Vertical Federated Learning (VFL) under dynamic network conditions, which we call DN-VFL.
We develop a novel DN-VFL approach called Multiple Aggregation with Gossip Rounds and Simulated Faults (MAGS) that synthesizes replication, gossiping, and selective feature omission to significantly improve performance over baselines.
- Score: 15.757660512833006
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vertical Federated Learning (VFL) is a class of FL in which every client shares the same set of samples but owns only a subset of the features. VFL usually assumes perfect hardware and communication capabilities. However, this assumption hinders the broad deployment of VFL, particularly on a network of edge devices, which are heterogeneous in their in-situ capabilities and may connect to or disconnect from the network over time. To address this gap, we study the test-time performance of VFL under dynamic network conditions, which we call DN-VFL. We first formalize DN-VFL, including a message-passing distributed inference algorithm, the corresponding risk, and a serverless setup. We then develop a novel DN-VFL approach called Multiple Aggregation with Gossip Rounds and Simulated Faults (MAGS) that synthesizes replication, gossiping, and selective feature omission to improve performance significantly over baselines. Furthermore, we propose metrics and extensively analyze MAGS using a simulated sensor network. The results show that naively using VFL for DN-VFL is not the best approach; rather, MAGS presents a better alternative for handling changes in the network during inference.
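The interplay of MAGS's three ingredients is easiest to see in miniature. The sketch below is an illustrative reading of the abstract, not the authors' implementation: devices replicate their embeddings to several aggregators, faulted devices are simply omitted from the aggregate, and a few gossip rounds spread the partial aggregates so any surviving device can serve the fused representation. All names and constants (`n_replicas`, the fault rate, the gossip schedule) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy serverless VFL inference: 8 devices each hold a 4-dim feature embedding.
n_devices, dim = 8, 4
embeddings = rng.normal(size=(n_devices, dim))

# Simulated faults: each device independently drops out of this round.
alive = rng.random(n_devices) > 0.3          # ~30% fault rate (illustrative)

# Replication: every live device pushes its embedding to n_replicas peers.
n_replicas = 3
partial = {a: [] for a in range(n_devices)}
for d in np.flatnonzero(alive):
    for a in rng.choice(n_devices, size=n_replicas, replace=False):
        if alive[a]:
            partial[a].append(embeddings[d])

# Selective feature omission: each aggregator averages whatever arrived,
# silently omitting features from faulted or unreachable devices.
agg = {a: np.mean(v, axis=0) for a, v in partial.items() if alive[a] and v}

# Gossip rounds: aggregators repeatedly pairwise-average with a random live
# peer, so any single survivor approximates the network-wide aggregate.
for _ in range(3):
    for a in list(agg):
        peers = [p for p in agg if p != a]
        if peers:
            p = rng.choice(peers)
            agg[a] = agg[p] = (agg[a] + agg[p]) / 2

print("survivors:", sorted(agg), "\nfused:", next(iter(agg.values())).round(3))
```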
Related papers
- Vertical Federated Learning in Practice: The Good, the Bad, and the Ugly [42.31182713177944]
This survey analyzes the real-world data distributions in potential Vertical Federated Learning (VFL) applications.
We propose a novel data-oriented taxonomy of VFL algorithms based on real VFL data distributions.
Based on these observations, we outline key research directions aimed at bridging the gap between current VFL research and real-world applications.
arXiv Detail & Related papers (2025-02-12T07:03:32Z)
- Decoupled Vertical Federated Learning for Practical Training on Vertically Partitioned Data [8.759583928626702]
We propose Decoupled VFL (DVFL) to handle training with faults.
DVFL decouples training between communication rounds using local unsupervised objectives.
As secondary benefits, DVFL can enhance data efficiency and provide immunity against gradient-based attacks.
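A minimal sketch of the decoupling idea, under the assumption (taken from the summary) that each party optimizes a local unsupervised objective between communication rounds. Here a linear autoencoder stands in for that objective; nothing below is the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# One party's local view: between communication rounds it improves its
# encoder with a purely local reconstruction loss, never waiting on
# (possibly faulted) peers.
X = rng.normal(size=(256, 10))               # this party's feature slice
W = rng.normal(scale=0.1, size=(10, 4))      # encoder: 10 features -> 4 dims
V = rng.normal(scale=0.1, size=(4, 10))      # decoder, used only locally

lr = 0.05
for step in range(200):
    Z = X @ W                                # local embedding (shared only
                                             # at communication rounds)
    E = Z @ V - X                            # reconstruction error
    dV = Z.T @ E / len(X)                    # descent direction for decoder
    dW = X.T @ (E @ V.T) / len(X)            # descent direction for encoder
    W -= lr * dW
    V -= lr * dV

print("local reconstruction MSE:", float((E**2).mean()))
```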
arXiv Detail & Related papers (2024-03-06T17:23:28Z)
- Adaptive Federated Pruning in Hierarchical Wireless Networks [69.6417645730093]
Federated Learning (FL) is a privacy-preserving distributed learning framework where a server aggregates models updated by multiple devices without accessing their private datasets.
In this paper, we introduce model pruning for hierarchical federated learning (HFL) in wireless networks to reduce the neural network scale.
We show that our proposed HFL with model pruning achieves learning accuracy similar to HFL without pruning while reducing communication cost by about 50 percent.
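The summary does not give the paper's adaptive pruning rule, so the sketch below shows only the generic magnitude-pruning mechanism that makes the communication saving plausible: zeroing small weights means each device uploads a sparse update.

```python
import numpy as np

rng = np.random.default_rng(2)

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    thresh = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.where(np.abs(weights) <= thresh, 0.0, weights)

w = rng.normal(size=(64, 64))                # one layer of a local model
w_pruned = magnitude_prune(w, sparsity=0.5)  # prune half the weights

# Only nonzero weights (plus their indices) need to be uploaded, which is
# where a roughly 50% traffic reduction can come from.
nnz = np.count_nonzero(w_pruned)
print(f"kept {nnz}/{w.size} weights ({nnz / w.size:.0%})")
```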
arXiv Detail & Related papers (2023-05-15T22:04:49Z)
- Low-Latency Cooperative Spectrum Sensing via Truncated Vertical Federated Learning [51.51440623636274]
We propose a vertical federated learning (VFL) framework to exploit the distributed features across multiple secondary users (SUs) without compromising data privacy.
To accelerate the training process, we propose a truncated vertical federated learning (T-VFL) algorithm.
The convergence performance of T-VFL is provided via mathematical analysis and justified by simulation results.
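The summary leaves the truncation rule unspecified. One plausible (hypothetical) reading, sketched below, is deadline-based truncation: fusion proceeds with whichever secondary users report in time rather than stalling on stragglers. The deadline, delay model, and decision threshold are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical deadline-based truncation for cooperative spectrum sensing.
n_sus = 10
latencies = rng.exponential(scale=1.0, size=n_sus)  # per-SU upload delay
features = rng.normal(loc=1.0, size=n_sus)          # per-SU sensing statistic

deadline = 1.5
arrived = latencies <= deadline
fused = features[arrived].mean() if arrived.any() else 0.0

# Simple energy-detection style decision on the truncated fusion.
occupied = fused > 0.5
print(f"{arrived.sum()}/{n_sus} SUs made the deadline; occupied: {occupied}")
```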
arXiv Detail & Related papers (2022-08-07T10:39:27Z)
- Towards Communication-efficient Vertical Federated Learning Training via Cache-enabled Local Updates [25.85564668511386]
We introduce CELU-VFL, a novel and efficient Vertical Federated Learning (VFL) framework.
CELU-VFL exploits the local update technique to reduce the cross-party communication rounds.
We show that CELU-VFL can be up to six times faster than existing works.
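A hedged sketch of the cache-enabled local-update idea as described in the summary: one real cross-party exchange per communication round is cached and reused for several cheap local steps. `party_b_logits` and every constant here are hypothetical stand-ins, not CELU-VFL's actual protocol.

```python
import numpy as np

rng = np.random.default_rng(4)

# Party A's local features; Party B's contribution is fetched once per
# communication round and cached in between.
X_a = rng.normal(size=(128, 5))
true_w = rng.normal(size=5)
y = (X_a @ true_w > 0).astype(float)         # labels held by party A
w_a = np.zeros(5)

def party_b_logits():
    """Stand-in for an expensive cross-party exchange."""
    return rng.normal(scale=0.1, size=128)

lr, local_steps = 0.1, 5
for comm_round in range(10):
    cached_b = party_b_logits()              # one real exchange per round...
    for _ in range(local_steps):             # ...amortized over local steps
        logits = X_a @ w_a + cached_b        # reuse the stale cached term
        p = 1 / (1 + np.exp(-logits))
        w_a -= lr * X_a.T @ (p - y) / len(y) # logistic-regression gradient

print("learned weights:", w_a.round(3))
```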
arXiv Detail & Related papers (2022-07-29T12:10:36Z)
- SlimFL: Federated Learning with Superposition Coding over Slimmable Neural Networks [56.68149211499535]
Federated learning (FL) is a key enabler of efficient communication and computing, leveraging devices' distributed computing capabilities.
This paper proposes a novel learning framework that integrates FL and width-adjustable slimmable neural networks (SNNs).
We propose a communication and energy-efficient SNN-based FL (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models.
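Width-adjustable slimmable networks are the ingredient this builds on: a single parameter tensor whose left slice is itself a working smaller model, so a device can run or transmit a 0.5x-width network at no extra parameter cost. A minimal sketch of that nesting property, with all sizes chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(5)

# A slimmable linear layer: the 0.5x-width model is literally the left half
# of the full weight matrix, so one set of parameters serves both widths.
W_full = rng.normal(scale=0.1, size=(16, 32))   # 16 inputs -> 32 hidden units

def forward(x: np.ndarray, width: float) -> np.ndarray:
    k = int(width * W_full.shape[1])
    return np.maximum(x @ W_full[:, :k], 0.0)   # ReLU over the active slice

x = rng.normal(size=(1, 16))
h_half = forward(x, width=0.5)                  # weak-channel / cheap mode
h_full = forward(x, width=1.0)                  # full-capacity mode

# The half-width activations coincide with the first half of the full ones,
# which is what lets "nested" models be shipped over a noisy channel.
assert np.allclose(h_half, h_full[:, :16])
print("half:", h_half.shape, "full:", h_full.shape)
```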
arXiv Detail & Related papers (2022-03-26T15:06:13Z)
- Desirable Companion for Vertical Federated Learning: New Zeroth-Order Gradient Based Algorithm [140.25480610981504]
A complete list of metrics to evaluate VFL algorithms should include model applicability, privacy, communication, and computation efficiency.
We propose a novel VFL framework with black-box scalability.
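Given the title, the black-box property presumably comes from zeroth-order optimization: gradients are estimated purely from loss queries, so no party needs to backpropagate through another's model. Below is the standard two-point zeroth-order estimator as a sketch, not the paper's specific algorithm; the quadratic loss is a stand-in for a cross-party VFL objective.

```python
import numpy as np

rng = np.random.default_rng(6)

def zo_gradient(f, w, mu=1e-4, n_samples=20):
    """Two-point zeroth-order gradient estimate:
    g ~ E_u[(f(w + mu*u) - f(w)) / mu * u],  u ~ N(0, I).
    Needs only black-box loss queries, no backpropagation."""
    g = np.zeros_like(w)
    fw = f(w)
    for _ in range(n_samples):
        u = rng.normal(size=w.shape)
        g += (f(w + mu * u) - fw) / mu * u
    return g / n_samples

A = np.diag([1.0, 3.0, 5.0])
loss = lambda w: float(w @ A @ w)            # black-box objective

w = np.array([1.0, 1.0, 1.0])
for _ in range(300):
    w -= 0.02 * zo_gradient(loss, w)
print("loss after ZO descent:", loss(w))     # should be near 0
```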
arXiv Detail & Related papers (2022-03-19T13:55:47Z)
- Joint Superposition Coding and Training for Federated Learning over Multi-Width Neural Networks [52.93232352968347]
This paper aims to integrate two synergetic technologies: federated learning (FL) and width-adjustable slimmable neural networks (SNNs).
FL preserves data privacy by exchanging the locally trained models of mobile devices. Training SNNs is, however, non-trivial, particularly under wireless connections with time-varying channel conditions.
We propose a communication and energy-efficient SNN-based FL (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models.
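Superposition coding itself is a classic technique: two messages are transmitted as a power-weighted sum, and the receiver decodes the strong layer first, subtracts it, then decodes the weak one (successive interference cancellation). The toy below shows only that generic SC/SIC mechanism over BPSK with Gaussian noise; how SlimFL maps model segments onto the two layers is not reproduced here, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Base layer (high priority) gets most of the power; the enhancement layer
# rides on top of it.
n = 10_000
base = rng.choice([-1.0, 1.0], size=n)       # high-priority bits
enh = rng.choice([-1.0, 1.0], size=n)        # low-priority bits
p_base, p_enh = 0.8, 0.2                     # power split

tx = np.sqrt(p_base) * base + np.sqrt(p_enh) * enh
rx = tx + rng.normal(scale=0.2, size=n)      # AWGN channel

# Successive interference cancellation: decode the strong layer, subtract
# it, then decode the weak layer from the residual.
base_hat = np.sign(rx)
residual = rx - np.sqrt(p_base) * base_hat
enh_hat = np.sign(residual)

print("base BER:", np.mean(base_hat != base))
print("enh  BER:", np.mean(enh_hat != enh))
```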
arXiv Detail & Related papers (2021-12-05T11:17:17Z)
- Bayesian Federated Learning over Wireless Networks [87.37301441859925]
Federated learning is a privacy-preserving and distributed training method using heterogeneous data sets stored at local devices.
This paper presents an efficient modified Bayesian federated learning (BFL) algorithm called scalable BFL (SBFL).
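The summary does not describe SBFL's internals, so the sketch below shows only the generic Bayesian-FL fusion idea: each client summarizes its heterogeneous local data as a Gaussian posterior over a shared parameter, and the server combines them by precision-weighted averaging. All quantities are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)

true_theta = 2.0
client_sizes = [10, 50, 200]                  # heterogeneous data volumes

mus, precisions = [], []
for n in client_sizes:
    data = rng.normal(loc=true_theta, scale=1.0, size=n)
    mus.append(data.mean())                   # local posterior mean
    precisions.append(n / 1.0**2)             # local posterior precision
                                              # (known noise variance 1.0)

# Fusing Gaussians = precision-weighted averaging of the client posteriors.
precision = sum(precisions)
mu = sum(p * m for p, m in zip(precisions, mus)) / precision

print(f"fused posterior: mean={mu:.3f}, std={precision**-0.5:.3f}")
```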
arXiv Detail & Related papers (2020-12-31T07:32:44Z)