FedGBF: An efficient vertical federated learning framework via gradient
boosting and bagging
- URL: http://arxiv.org/abs/2204.00976v1
- Date: Sun, 3 Apr 2022 03:03:34 GMT
- Title: FedGBF: An efficient vertical federated learning framework via gradient
boosting and bagging
- Authors: Yujin Han, Pan Du, Kai Yang
- Abstract summary: We propose a novel model in a vertically federated setting termed Federated Gradient Boosting Forest (FedGBF)
FedGBF simultaneously integrates the strengths of boosting and bagging by building decision trees in parallel as the base learner for boosting.
We also propose the Dynamic FedGBF, which dynamically changes each forest's parameters and thus reduces the complexity.
- Score: 14.241194034190304
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning, which helps address data privacy and security
problems, has attracted increasing attention recently. However, existing
federated boosting models sequentially build decision trees as weak base
learners, resulting in redundant boosting steps and high interactive
communication costs. In contrast, federated bagging models save time by
building multiple decision trees in parallel, but suffer from a loss in
performance. Aiming for strong performance at a lower time cost, we propose a
novel model in the vertically federated setting, termed Federated Gradient
Boosting Forest (FedGBF). FedGBF integrates the respective strengths of
boosting and bagging by building decision trees in parallel as the base
learner for boosting. FedGBF in turn raises the problem of hyperparameter
tuning, so we further propose Dynamic FedGBF, which dynamically changes each
forest's parameters and thus reduces complexity. Finally, experiments on
benchmark datasets demonstrate the superiority of our method.
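To make the core idea concrete, below is a minimal single-machine sketch of boosting over bagged forests in the spirit of FedGBF, assuming squared loss and scikit-learn's DecisionTreeRegressor. It omits the vertical partitioning of features across parties and the privacy-preserving split computation that the actual FedGBF protocol requires; the decaying forest size used to mimic Dynamic FedGBF is an assumed schedule, and all function names and defaults are illustrative.

```python
# Minimal single-machine sketch of the FedGBF idea: gradient boosting whose
# base learner at each round is a small bagged forest (trees fit in parallel
# in the real protocol) rather than a single tree. Illustrative only: it
# omits the vertical feature partitioning across parties and the privacy-
# preserving split finding that FedGBF requires; all names and defaults
# (fedgbf_fit, n_rounds, ...) are hypothetical.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_bagged_forest(X, residuals, n_trees, max_depth, rng):
    """Bagging step: fit several shallow trees on bootstrap samples."""
    forest = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(X), size=len(X))  # bootstrap sample
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X[idx], residuals[idx])
        forest.append(tree)
    return forest

def forest_predict(forest, X):
    """A forest's output is the average of its trees' predictions."""
    return np.mean([t.predict(X) for t in forest], axis=0)

def fedgbf_fit(X, y, n_rounds=20, learning_rate=0.1, n_trees=5, max_depth=3):
    """Boosting step: each round adds a whole forest fit to the residuals."""
    rng = np.random.default_rng(42)
    base = float(np.mean(y))
    pred = np.full(len(y), base)
    forests = []
    for r in range(n_rounds):
        residuals = y - pred  # negative gradient under squared loss
        # Dynamic-FedGBF flavour (assumed schedule): use smaller forests in
        # later rounds to reduce complexity.
        trees_r = max(1, n_trees - r // 5)
        forest = fit_bagged_forest(X, residuals, trees_r, max_depth, rng)
        pred = pred + learning_rate * forest_predict(forest, X)
        forests.append(forest)
    return base, forests

def fedgbf_predict(base, forests, X, learning_rate=0.1):
    pred = np.full(len(X), base)
    for forest in forests:
        pred = pred + learning_rate * forest_predict(forest, X)
    return pred
```

Each forest plays the role that a single tree plays in standard gradient boosting, so fewer boosting rounds are needed while the trees within a round can be built in parallel, which is where the claimed communication saving in the federated setting comes from.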
Related papers
- Killing Two Birds with One Stone: Unifying Retrieval and Ranking with a Single Generative Recommendation Model [71.45491434257106]
Unified Generative Recommendation Framework (UniGRF) is a novel approach that integrates retrieval and ranking into a single generative model.
To enhance inter-stage collaboration, UniGRF introduces a ranking-driven enhancer module.
UniGRF significantly outperforms existing models on benchmark datasets.
arXiv Detail & Related papers (2025-04-23T06:43:54Z) - Decentralized Nonconvex Composite Federated Learning with Gradient Tracking and Momentum [78.27945336558987]
Decentralized federated learning (DFL) eliminates reliance on a central server.
Non-smooth regularization is often incorporated into machine learning tasks.
We propose a novel DNCFL algorithm to solve these problems.
arXiv Detail & Related papers (2025-04-17T08:32:25Z) - ALoRE: Efficient Visual Adaptation via Aggregating Low Rank Experts [71.91042186338163]
ALoRE is a novel PETL method that reuses the hypercomplex parameterized space constructed by Kronecker product to Aggregate Low Rank Experts.
Thanks to the artful design, ALoRE maintains negligible extra parameters and can be effortlessly merged into the frozen backbone.
arXiv Detail & Related papers (2024-12-11T12:31:30Z) - SHERL: Synthesizing High Accuracy and Efficient Memory for Resource-Limited Transfer Learning [63.93193829913252]
We propose an innovative METL strategy called SHERL for resource-limited scenarios.
In the early route, intermediate outputs are consolidated via an anti-redundancy operation.
In the late route, utilizing minimal late pre-trained layers could alleviate the peak demand on memory overhead.
arXiv Detail & Related papers (2024-07-10T10:22:35Z) - When Foresight Pruning Meets Zeroth-Order Optimization: Efficient Federated Learning for Low-Memory Devices [36.23767349592602]
Federated Learning (FL) enables collaborative learning in Artificial Intelligence of Things (AIoT) design.
FL fails to work on low-memory AIoT devices due to its heavy memory usage.
We propose a federated foresight pruning method based on Neural Tangent Kernel (NTK), which can seamlessly integrate with federated BP-Free training frameworks.
arXiv Detail & Related papers (2024-05-08T02:24:09Z) - Dynamic Adapter Meets Prompt Tuning: Parameter-Efficient Transfer Learning for Point Cloud Analysis [51.14136878142034]
Point cloud analysis has achieved outstanding performance by transferring point cloud pre-trained models.
Existing methods for model adaptation usually update all model parameters, which is inefficient because it incurs high computational costs.
In this paper, we aim to study parameter-efficient transfer learning for point cloud analysis with an ideal trade-off between task performance and parameter efficiency.
arXiv Detail & Related papers (2024-03-03T08:25:04Z) - Take History as a Mirror in Heterogeneous Federated Learning [9.187993085263209]
Federated Learning (FL) allows several clients to cooperatively train machine learning models without disclosing the raw data.
In this work, we propose a novel asynchronous FL framework called Federated Historical Learning (FedHist)
FedHist effectively addresses the challenges posed by both Non-IID data and gradient staleness.
arXiv Detail & Related papers (2023-12-16T11:40:49Z) - Lightweight Diffusion Models with Distillation-Based Block Neural
Architecture Search [55.41583104734349]
We propose to automatically remove structural redundancy in diffusion models with our proposed Diffusion Distillation-based Block-wise Neural Architecture Search (NAS)
Given a larger pretrained teacher, we leverage DiffNAS to search for the smallest architecture which can achieve on-par or even better performance than the teacher.
Different from previous block-wise NAS methods, DiffNAS contains a block-wise local search strategy and a retraining strategy with a joint dynamic loss.
arXiv Detail & Related papers (2023-11-08T12:56:59Z) - Federated Learning over Hierarchical Wireless Networks: Training Latency Minimization via Submodel Partitioning [15.311309249848739]
Hierarchical independent submodel training (HIST) is a new FL methodology that aims to address these issues in hierarchical cloud-edge-client networks.
We demonstrate how HIST can be augmented with over-the-air computation (AirComp) to further enhance the efficiency of the model aggregation over the edge cells.
arXiv Detail & Related papers (2023-10-27T04:42:59Z) - Tackling the Non-IID Issue in Heterogeneous Federated Learning by
Gradient Harmonization [11.484136481586381]
Federated learning (FL) is a privacy-preserving paradigm for collaboratively training a global model from decentralized clients.
In this work, we revisit this key challenge through the lens of gradient conflicts on the server side.
We propose FedGH, a simple yet effective method that mitigates local drifts through Gradient Harmonization.
arXiv Detail & Related papers (2023-09-13T03:27:21Z) - Gradient-less Federated Gradient Boosting Trees with Learnable Learning Rates [17.68344542462656]
We develop an innovative framework for horizontal federated XGBoost.
It simultaneously boosts privacy and communication efficiency by making the learning rates of the aggregated tree ensembles learnable.
Our approach achieves performance comparable to the state-of-the-art method and effectively improves communication efficiency by lowering both communication rounds and communication overhead by factors ranging from 25x to 700x.
arXiv Detail & Related papers (2023-04-15T11:48:18Z) - Federated Hyperparameter Tuning: Challenges, Baselines, and Connections
to Weight-Sharing [37.056834089598105]
We show how standard approaches may be adapted to form baselines for the federated setting.
By making a novel connection to the neural architecture search technique of weight-sharing, we introduce a new method, FedEx.
Theoretically, we show that a FedEx variant correctly tunes the on-device learning rate in the setting of online convex optimization.
arXiv Detail & Related papers (2021-06-08T16:42:37Z) - Secure Bilevel Asynchronous Vertical Federated Learning with Backward
Updating [159.48259714642447]
Vertical federated learning (VFL) attracts increasing attention due to the demands of multi-party collaborative modeling and concerns over privacy leakage.
We propose a novel bilevel parallel architecture (VFB$^2$), under which three new algorithms are proposed.
arXiv Detail & Related papers (2021-03-01T12:34:53Z) - FederBoost: Private Federated Learning for GBDT [45.903895659670674]
Federated Learning (FL) has been an emerging trend in machine learning and artificial intelligence.
We propose a framework named FederBoost for private federated learning of gradient boosting decision trees (GBDT)
arXiv Detail & Related papers (2020-11-05T13:05:12Z) - Soft Gradient Boosting Machine [72.54062017726154]
We propose the soft Gradient Boosting Machine (sGBM) by wiring multiple differentiable base learners together.
Experimental results showed that sGBM enjoys much higher time efficiency with better accuracy, given the same base learner, in both online and offline settings.
arXiv Detail & Related papers (2020-06-07T06:43:23Z) - Learnable Bernoulli Dropout for Bayesian Deep Learning [53.79615543862426]
Learnable Bernoulli dropout (LBD) is a new model-agnostic dropout scheme that considers the dropout rates as parameters jointly optimized with other model parameters.
LBD leads to improved accuracy and uncertainty estimates in image classification and semantic segmentation.
arXiv Detail & Related papers (2020-02-12T18:57:14Z)