FedAUX: Leveraging Unlabeled Auxiliary Data in Federated Learning
- URL: http://arxiv.org/abs/2102.02514v1
- Date: Thu, 4 Feb 2021 09:53:53 GMT
- Title: FedAUX: Leveraging Unlabeled Auxiliary Data in Federated Learning
- Authors: Felix Sattler and Tim Korjakow and Roman Rischke and Wojciech Samek
- Abstract summary: Federated Distillation (FD) is a popular novel algorithmic paradigm for Federated Learning.
We propose FedAUX, which drastically improves performance by deriving maximum utility from the unlabeled auxiliary data.
Experiments on large-scale convolutional neural networks and transformer models demonstrate that the training performance of FedAUX exceeds SOTA FL baseline methods.
- Score: 14.10627556244287
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Distillation (FD) is a popular novel algorithmic paradigm for
Federated Learning, which achieves training performance competitive to prior
parameter averaging based methods, while additionally allowing the clients to
train different model architectures, by distilling the client predictions on an
unlabeled auxiliary set of data into a student model. In this work we propose
FedAUX, an extension to FD, which, under the same set of assumptions,
drastically improves performance by deriving maximum utility from the unlabeled
auxiliary data. FedAUX modifies the FD training procedure in two ways: First,
unsupervised pre-training on the auxiliary data is performed to find a model
initialization for the distributed training. Second, $(\varepsilon,
\delta)$-differentially private certainty scoring is used to weight the
ensemble predictions on the auxiliary data according to the certainty of each
client model. Experiments on large-scale convolutional neural networks and
transformer models demonstrate that the training performance of FedAUX exceeds
SOTA FL baseline methods by a substantial margin in both the iid and non-iid
regime, further closing the gap to centralized training performance. Code is
available at github.com/fedl-repo/fedaux.
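The second modification is easy to picture in code. Below is a minimal NumPy sketch of certainty-weighted ensemble distillation in the spirit of FedAUX; it is an illustration under our own assumptions, not the authors' implementation. In particular, the negative-entropy certainty proxy and the added Gaussian noise merely stand in for the paper's $(\varepsilon, \delta)$-differentially private certainty scoring, and all names (num_clients, client_logits, soft_labels) are ours.

```python
# Minimal sketch of certainty-weighted ensemble distillation (illustrative).
import numpy as np

rng = np.random.default_rng(0)
num_clients, num_aux, num_classes = 5, 100, 10  # hypothetical sizes

# In the full method, client models would start from an encoder pre-trained
# on the auxiliary data (modification 1); here we fake their predictions.
client_logits = rng.normal(size=(num_clients, num_aux, num_classes))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

client_probs = softmax(client_logits)  # (clients, aux, classes)

# Per-client, per-sample certainty (modification 2). Negative entropy is a
# simple stand-in; the noise is only a placeholder for the DP mechanism.
entropy = -(client_probs * np.log(client_probs + 1e-12)).sum(axis=-1)
certainty = np.exp(-entropy) + rng.normal(scale=0.05, size=entropy.shape)
certainty = np.clip(certainty, 1e-3, None)

# Weight each client's prediction by its normalized certainty instead of
# plain averaging as in vanilla Federated Distillation.
weights = certainty / certainty.sum(axis=0, keepdims=True)     # (clients, aux)
soft_labels = (weights[..., None] * client_probs).sum(axis=0)  # (aux, classes)

print(soft_labels.shape)  # (100, 10); each row sums to ~1
```

The student model would then be trained to match soft_labels on the auxiliary data, exactly as in standard FD, but against the certainty-weighted rather than uniform ensemble.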
Related papers
- FedMS: Federated Learning with Mixture of Sparsely Activated Foundations
Models [11.362085734837217]
We propose a novel two-stage federated learning algorithm called FedMS.
A global expert is trained in the first stage and a local expert is trained in the second stage to provide better personalization.
We employ extensive experiments to verify the effectiveness of FedMS; the results show that FedMS outperforms other SOTA baselines by up to 55.25% in the default settings.
arXiv Detail & Related papers (2023-12-26T07:40:26Z)
- FedFNN: Faster Training Convergence Through Update Predictions in Federated Recommender Systems [4.4273123155989715]
Federated Learning (FL) has emerged as a key approach for distributed machine learning.
This paper introduces FedFNN, an algorithm that accelerates decentralized model training.
arXiv Detail & Related papers (2023-09-14T13:18:43Z)
- FedDAT: An Approach for Foundation Model Finetuning in Multi-Modal Heterogeneous Federated Learning [37.96957782129352]
We propose a finetuning framework tailored to heterogeneous multi-modal foundation models, called Federated Dual-Adapter Teacher (FedDAT).
FedDAT addresses data heterogeneity by regularizing the client local updates and applying Mutual Knowledge Distillation (MKD) for efficient knowledge transfer.
To demonstrate its effectiveness, we conduct extensive experiments on four multi-modality FL benchmarks with different types of data heterogeneity.
arXiv Detail & Related papers (2023-08-21T21:57:01Z)
- Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose FedGMM, a novel approach to Personalized Federated Learning (PFL) that utilizes Gaussian mixture models (GMMs) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
arXiv Detail & Related papers (2023-05-01T20:04:46Z)
- Beyond ADMM: A Unified Client-variance-reduced Adaptive Federated Learning Framework [82.36466358313025]
We propose a primal-dual FL algorithm, termed FedVRA, that allows one to adaptively control the variance-reduction level and the bias of the global model.
Experiments on (semi-supervised) image classification tasks demonstrate the superiority of FedVRA over existing schemes.
arXiv Detail & Related papers (2022-12-03T03:27:51Z)
- Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (FAFED) based on a momentum-based variance-reduction technique in the cross-silo FL setting.
arXiv Detail & Related papers (2022-12-02T05:07:50Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- An Expectation-Maximization Perspective on Federated Learning [75.67515842938299]
Federated learning describes the distributed training of models across multiple clients while keeping the data private on-device.
In this work, we view the server-orchestrated federated learning process as a hierarchical latent variable model where the server provides the parameters of a prior distribution over the client-specific model parameters.
We show that with simple Gaussian priors and a hard version of the well-known Expectation-Maximization (EM) algorithm, learning in such a model corresponds to FedAvg, the most popular algorithm for the federated learning setting; a minimal sketch of this correspondence is given after the list below.
arXiv Detail & Related papers (2021-11-19T12:58:59Z)
- Weight Divergence Driven Divide-and-Conquer Approach for Optimal Federated Learning from non-IID Data [0.0]
Federated Learning allows models to be trained on data stored across distributed devices without centralizing the training data.
We propose a novel Divide-and-Conquer training methodology that enables the use of the popular FedAvg aggregation algorithm.
arXiv Detail & Related papers (2021-06-28T09:34:20Z)
- Fairness and Accuracy in Federated Learning [17.218814060589956]
This paper proposes FedFa, an algorithm to achieve more fairness and accuracy in federated learning.
It introduces an optimization scheme that employs a double momentum gradient, thereby accelerating the convergence rate of the model.
It also proposes a weight selection algorithm that combines information from training accuracy and training frequency to measure the client weights.
arXiv Detail & Related papers (2020-12-18T06:28:37Z)
- Over-the-Air Federated Learning from Heterogeneous Data [107.05618009955094]
Federated learning (FL) is a framework for distributed learning of centralized models.
We develop a Convergent OTA FL (COTAF) algorithm which enhances the common local stochastic gradient descent (SGD) FL algorithm.
We numerically show that the precoding induced by COTAF notably improves the convergence rate and the accuracy of models trained via OTA FL.
arXiv Detail & Related papers (2020-09-27T08:28:25Z)
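As referenced in the Expectation-Maximization entry above, here is a minimal worked sketch of why, under an isotropic Gaussian prior over the client parameters, the server-side M-step of hard EM reduces to FedAvg-style parameter averaging. The notation ($\theta_k^t$ for client $k$'s parameters after local training, equal client weighting) is ours, not the paper's:

```latex
% M-step of hard EM with prior N(theta, sigma^2 I) over client parameters:
\theta^{t+1}
  = \arg\max_{\theta} \sum_{k=1}^{K} \log \mathcal{N}\!\bigl(\theta_k^{t};\, \theta,\, \sigma^2 I\bigr)
  = \arg\min_{\theta} \sum_{k=1}^{K} \frac{\lVert \theta_k^{t} - \theta \rVert_2^2}{2\sigma^2}
  = \frac{1}{K} \sum_{k=1}^{K} \theta_k^{t}
```

This is precisely the (equally weighted) FedAvg aggregation rule; the hard E-step correspondingly has each client compute an (approximate) MAP estimate of its local parameters under the prior centered at the current server model.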
This list is automatically generated from the titles and abstracts of the papers in this site.