Decoupled Vertical Federated Learning for Practical Training on Vertically Partitioned Data
- URL: http://arxiv.org/abs/2403.03871v2
- Date: Sun, 01 Dec 2024 18:13:30 GMT
- Title: Decoupled Vertical Federated Learning for Practical Training on Vertically Partitioned Data
- Authors: Avi Amalanshu, Yash Sirvi, David I. Inouye
- Abstract summary: We propose Decoupled VFL (DVFL) to handle training with faults.
DVFL decouples training between communication rounds using local unsupervised objectives.
As secondary benefits, DVFL can enhance data efficiency and provides immunity against gradient-based attacks.
- Score: 8.759583928626702
- Abstract: Vertical Federated Learning (VFL) is an emergent distributed machine learning paradigm for collaborative learning between clients who have disjoint features of common entities. However, standard VFL lacks fault tolerance, with each participant and connection being a single point of failure. Prior attempts to induce fault tolerance in VFL focus on the scenario of "straggling clients", usually entailing that all messages eventually arrive or that there is an upper bound on the number of late messages. To handle the more general problem of arbitrary crashes, we propose Decoupled VFL (DVFL). To handle training with faults, DVFL decouples training between communication rounds using local unsupervised objectives. By further decoupling label supervision from aggregation, DVFL also enables redundant aggregators. As secondary benefits, DVFL can enhance data efficiency and provides immunity against gradient-based attacks. In this work, we implement DVFL for split neural networks with a self-supervised autoencoder loss. When there are faults, DVFL outperforms the best VFL-based alternative (97.58% vs 96.95% on an MNIST task). Even under perfect conditions, performance is comparable.
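To make the decoupling concrete, below is a minimal PyTorch sketch in the spirit of the abstract (class names, dimensions, and the zero-fill remark are illustrative assumptions, not the paper's implementation): each guest optimizes a purely local autoencoder loss, so it never blocks on the aggregator, and only detached embeddings cross the network, which also means no label gradients ever reach the guests.

```python
import torch
import torch.nn as nn

class Guest(nn.Module):
    """One party holding a disjoint slice of the features."""
    def __init__(self, in_dim, emb_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, emb_dim))
        self.dec = nn.Sequential(nn.Linear(emb_dim, 64), nn.ReLU(),
                                 nn.Linear(64, in_dim))

    def local_step(self, x, opt):
        # Self-supervised autoencoder objective: needs no labels and no
        # server gradients, so a crashed aggregator cannot stall it.
        opt.zero_grad()
        z = self.enc(x)
        nn.functional.mse_loss(self.dec(z), x).backward()
        opt.step()
        return z.detach()  # only detached embeddings leave the guest

guests = [Guest(5), Guest(7)]                      # 12 features, split 5/7
opts = [torch.optim.Adam(g.parameters(), 1e-3) for g in guests]
head = nn.Linear(2 * 16, 3)                        # host's supervised head
head_opt = torch.optim.Adam(head.parameters(), 1e-3)

x = torch.randn(32, 12)                            # toy aligned batch
y = torch.randint(0, 3, (32,))
zs = [g.local_step(xi, o) for g, xi, o in
      zip(guests, x.split([5, 7], dim=1), opts)]   # guests train locally
# If a guest's message were lost, its slot could be zero-filled here,
# so no single party is a point of failure for the host's update.
head_opt.zero_grad()
loss = nn.functional.cross_entropy(head(torch.cat(zs, dim=1)), y)
loss.backward()                                    # gradients stop at zs
head_opt.step()
```

Because the host's backward pass stops at the detached embeddings, gradient-based label-leakage attacks on the guests have nothing to exploit, consistent with the abstract's claim of immunity to gradient-based attacks.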
Related papers
- Cooperative Decentralized Backdoor Attacks on Vertical Federated Learning [22.076364118223324]
We propose a novel backdoor attack on Vertical Federated Learning (VFL).
Our label inference model augments variational autoencoders with metric learning, which adversaries can train locally.
Our convergence analysis reveals the impact of backdoor perturbations on VFL indicated by a stationarity gap for the trained model.
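The summary names the ingredients but not the architecture; the following is a hedged sketch of how a metric-learning term can augment a standard VAE objective for local label inference (layer sizes and the triplet formulation are my assumptions, not the paper's model):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, d_in=20, d_z=8):
        super().__init__()
        self.enc = nn.Linear(d_in, 2 * d_z)  # outputs mu and logvar
        self.dec = nn.Linear(d_z, d_in)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar

def loss_fn(model, anchor, positive, negative):
    recon, mu, logvar = model(anchor)
    # Standard VAE terms: reconstruction + KL divergence to N(0, I).
    rec = F.mse_loss(recon, anchor)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # Metric-learning term: pull same-label latents together and push
    # different-label latents apart (triplet margin on the means).
    mu_p, mu_n = model(positive)[1], model(negative)[1]
    return rec + kl + F.triplet_margin_loss(mu, mu_p, mu_n, margin=1.0)

model = VAE()
a, p, n = (torch.randn(8, 20) for _ in range(3))
print(loss_fn(model, a, p, n).item())
```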
arXiv Detail & Related papers (2025-01-16T06:22:35Z)
- FuseFL: One-Shot Federated Learning through the Lens of Causality with Progressive Model Fusion [48.90879664138855]
One-shot Federated Learning (OFL) significantly reduces communication costs in FL by aggregating trained models only once.
However, the performance of advanced OFL methods is far behind the normal FL.
We propose a novel learning approach, termed FuseFL, to endow OFL with superb performance and low communication and storage costs.
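For orientation, one-shot FL in its plainest form is a single post-training aggregation, as in the toy NumPy sketch below; FuseFL's progressive, causality-motivated fusion is considerably more sophisticated than this naive average:

```python
import numpy as np

def train_locally(seed, steps=200):
    # Stand-in for a full local training run: least squares on local data.
    r = np.random.default_rng(seed)
    X, w_true = r.normal(size=(64, 10)), r.normal(size=10)
    y = X @ w_true
    w, lr = np.zeros(10), 0.01
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

# Each client trains to completion in isolation...
client_models = [train_locally(seed) for seed in range(5)]
# ...and models are aggregated exactly once: one communication round.
global_model = np.mean(client_models, axis=0)
print(global_model.shape)  # (10,)
```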
arXiv Detail & Related papers (2024-10-27T09:07:10Z)
- SpaFL: Communication-Efficient Federated Learning with Sparse Models and Low Computational Overhead [75.87007729801304]
SpaFL, a communication-efficient FL framework, is proposed to optimize sparse model structures with low computational overhead.
To optimize the pruning process itself, only thresholds are communicated between a server and clients instead of parameters.
Global thresholds are used to update model parameters by extracting aggregated parameter importance.
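A minimal NumPy sketch of the threshold-only communication idea (the quantile rule and mean aggregation here are placeholders for SpaFL's actual importance-based update):

```python
import numpy as np

rng = np.random.default_rng(0)
clients = [rng.normal(size=100) for _ in range(4)]  # toy local weights

# Each client sends only a scalar pruning threshold (e.g., a magnitude
# quantile), not its parameters -- the communication-saving idea.
local_thresholds = [np.quantile(np.abs(w), 0.5) for w in clients]

# The server aggregates thresholds into a global one and broadcasts it.
global_threshold = float(np.mean(local_thresholds))

# Clients apply the global threshold to sparsify their own models.
sparse = [np.where(np.abs(w) >= global_threshold, w, 0.0) for w in clients]
print("kept fraction:", [float(np.mean(s != 0)) for s in sparse])
```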
arXiv Detail & Related papers (2024-06-01T13:10:35Z)
- Communication Efficient ConFederated Learning: An Event-Triggered SAGA Approach [67.27031215756121]
Federated learning (FL) is a machine learning paradigm that targets model training without gathering the local data over various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as Confederated Learning (CFL), in order to accommodate a larger number of users.
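The event-triggered ingredient from the title can be illustrated with a toy rule (my thresholding criterion, not the paper's SAGA-based trigger): a user transmits to its server only when its update has drifted enough since the last upload.

```python
import numpy as np

def maybe_upload(update, last_sent, threshold=0.1):
    # Transmit only when the local update has drifted enough since the
    # last upload; otherwise stay silent and save bandwidth.
    if np.linalg.norm(update - last_sent) > threshold:
        return update.copy(), update.copy()
    return None, last_sent

rng = np.random.default_rng(1)
last, state = np.zeros(4), np.zeros(4)
for step in range(5):
    state = state + rng.normal(scale=0.1, size=4)  # toy drifting update
    msg, last = maybe_upload(state, last)
    print(step, "sent" if msg is not None else "skipped")
```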
arXiv Detail & Related papers (2024-02-28T03:27:10Z)
- Fault Tolerant Serverless VFL Over Dynamic Device Environment [15.757660512833006]
We study the test-time performance of Vertical Federated Learning (VFL) under dynamic network conditions, which we call DN-VFL.
We develop a novel DN-VFL approach called Multiple Aggregation with Gossip Rounds and Simulated Faults (MAGS) that synthesizes replication, gossiping, and selective feature omission to improve performance significantly over baselines.
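A toy sketch of the replication-plus-gossip ingredient (the aggregator states and mixing schedule are placeholders, not MAGS itself): replicated aggregators repeatedly average with neighbors, so any reachable replica can still serve a reasonable aggregate.

```python
import numpy as np

rng = np.random.default_rng(2)
# Three replicated aggregators, each holding a (possibly stale) aggregate.
replicas = [rng.normal(size=8) for _ in range(3)]

def gossip_round(states, pairs):
    # Each listed pair averages its states -- one gossip exchange.
    for i, j in pairs:
        avg = 0.5 * (states[i] + states[j])
        states[i], states[j] = avg.copy(), avg.copy()
    return states

for _ in range(10):
    replicas = gossip_round(replicas, [(0, 1), (1, 2)])
print(np.std([r.mean() for r in replicas]))  # replica states converge
```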
arXiv Detail & Related papers (2023-12-27T17:00:09Z)
- BadVFL: Backdoor Attacks in Vertical Federated Learning [22.71527711053385]
Federated learning (FL) enables multiple parties to collaboratively train a machine learning model without sharing their data.
In this paper, we focus on robustness in VFL, in particular, on backdoor attacks.
We present a first-of-its-kind clean-label backdoor attack in VFL, which consists of two phases: a label inference phase and a backdoor phase.
arXiv Detail & Related papers (2023-04-18T09:22:32Z)
- A Fast Blockchain-based Federated Learning Framework with Compressed Communications [14.344080339573278]
Recently, blockchain-based federated learning (BFL) has attracted intensive research attention.
In this paper, we propose a fast BFL framework, called BCFL, to improve the training efficiency of BFL in practice.
arXiv Detail & Related papers (2022-08-12T03:04:55Z)
- Low-Latency Cooperative Spectrum Sensing via Truncated Vertical Federated Learning [51.51440623636274]
We propose a vertical federated learning (VFL) framework to exploit the distributed features across multiple secondary users (SUs) without compromising data privacy.
To accelerate the training process, we propose a truncated vertical federated learning (T-VFL) algorithm.
The convergence of T-VFL is characterized via mathematical analysis and validated by simulation results.
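The summary does not spell out the truncation rule, so the sketch below shows a generic deadline-style truncation (my simplification): the server aggregates only the user messages that arrive in time, trading a little information for lower round latency.

```python
import numpy as np

rng = np.random.default_rng(3)
deadline = 1.0  # seconds the server waits before aggregating

# Toy per-user latencies and feature embeddings for one round.
latencies = rng.exponential(scale=0.5, size=6)
embeddings = rng.normal(size=(6, 4))

# Truncated aggregation: late users are simply excluded this round.
on_time = latencies <= deadline
aggregate = embeddings[on_time].mean(axis=0)
print(f"{on_time.sum()}/6 users included", aggregate.round(2))
```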
arXiv Detail & Related papers (2022-08-07T10:39:27Z)
- Towards Communication-efficient Vertical Federated Learning Training via Cache-enabled Local Updates [25.85564668511386]
We introduce CELU-VFL, a novel and efficient Vertical Federated Learning framework.
CELU-VFL exploits the local update technique to reduce the cross-party communication rounds.
We show that CELU-VFL can be up to six times faster than existing works.
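A minimal sketch of the cache-enabled local-update idea on a toy linear model (label placement is simplified and CELU-VFL's staleness-correction machinery is omitted): each party reuses the partner's cached message for several cheap local steps between exchanges.

```python
import numpy as np

rng = np.random.default_rng(4)
Xa, Xb = rng.normal(size=(128, 3)), rng.normal(size=(128, 2))
y = Xa @ np.array([1., -2., 0.5]) + Xb @ np.array([0.3, 1.2])
wa, wb = np.zeros(3), np.zeros(2)
lr, local_steps = 0.05, 5

for comm_round in range(20):
    cached_b = Xb @ wb            # one cross-party exchange per round
    for _ in range(local_steps):  # several local updates reuse the cache
        residual = Xa @ wa + cached_b - y
        wa -= lr * Xa.T @ residual / len(y)
    cached_a = Xa @ wa            # send back once, then B updates locally
    for _ in range(local_steps):
        residual = cached_a + Xb @ wb - y
        wb -= lr * Xb.T @ residual / len(y)
print(wa.round(2), wb.round(2))   # recovers the true coefficients
```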
arXiv Detail & Related papers (2022-07-29T12:10:36Z)
- Desirable Companion for Vertical Federated Learning: New Zeroth-Order Gradient Based Algorithm [140.25480610981504]
A complete list of metrics to evaluate VFL algorithms should include model applicability, privacy, communication, and computation efficiency.
We propose a novel VFL framework with black-box scalability.
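The zeroth-order ingredient from the title can be shown in one function: a generic two-point gradient estimator that needs only loss evaluations, which is what makes gradient-free (black-box) parties workable. This is the textbook estimator, not necessarily the paper's exact algorithm.

```python
import numpy as np

def zo_gradient(f, w, mu=1e-4, n_dirs=20, rng=None):
    """Two-point zeroth-order estimate of grad f(w) from loss queries only."""
    if rng is None:
        rng = np.random.default_rng(0)
    g = np.zeros_like(w)
    for _ in range(n_dirs):
        u = rng.normal(size=w.shape)            # random probe direction
        g += (f(w + mu * u) - f(w - mu * u)) / (2 * mu) * u
    return g / n_dirs

f = lambda w: np.sum((w - 1.0) ** 2)            # toy black-box loss
w = np.zeros(5)
for _ in range(100):
    w -= 0.05 * zo_gradient(f, w)
print(w.round(2))                               # approaches the minimizer 1.0
```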
arXiv Detail & Related papers (2022-03-19T13:55:47Z)
- Achieving Model Fairness in Vertical Federated Learning [47.8598060954355]
Vertical federated learning (VFL) enables multiple enterprises possessing non-overlapped features to strengthen their machine learning models without disclosing their private data and model parameters.
VFL suffers from fairness issues, i.e., the learned model may discriminate unfairly against groups with sensitive attributes.
We propose a fair VFL framework to tackle this problem.
arXiv Detail & Related papers (2021-09-17T04:40:11Z)