FedMM: Saddle Point Optimization for Federated Adversarial Domain
Adaptation
- URL: http://arxiv.org/abs/2110.08477v1
- Date: Sat, 16 Oct 2021 05:32:03 GMT
- Title: FedMM: Saddle Point Optimization for Federated Adversarial Domain
Adaptation
- Authors: Yan Shen and Jian Du and Hao Zhang and Benyu Zhang and Zhanghexuan Ji
and Mingchen Gao
- Abstract summary: Federated domain adaptation is a unique minimax training task due to the prevalence of label imbalance among clients.
We propose a distributed minimax optimizer, referred to as FedMM, designed specifically for the federated adversarial domain adaptation problem.
We prove that FedMM ensures convergence to a stationary point with domain-shifted unsupervised data.
- Score: 6.3434032890855345
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated adversarial domain adaptation is a unique distributed minimax
training task due to the prevalence of label imbalance among clients, with each
client only seeing a subset of the classes of labels required to train a global
model. To tackle this problem, we propose a distributed minimax optimizer
referred to as FedMM, designed specifically for the federated adversarial domain
adaptation problem. It works well even in the extreme case where each client
has different label classes and some clients only have unsupervised tasks. We
prove that FedMM ensures convergence to a stationary point with domain-shifted
unsupervised data. On a variety of benchmark datasets, extensive experiments
show that FedMM consistently achieves either significant communication savings
or significant accuracy improvements over federated optimizers based on the
gradient descent ascent (GDA) algorithm. When training from scratch, for
example, it outperforms other GDA-based federated averaging methods by around
$20\%$ in accuracy over the same communication rounds; and it consistently
outperforms them when training from pre-trained models, with accuracy
improvements ranging from $5.4\%$ to $9\%$ across different networks.
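The abstract does not spell out the FedMM update itself, so the sketch below only illustrates the federated minimax setup it targets: each client takes a simultaneous gradient-descent-ascent (GDA) step, descending on a shared feature extractor and classifier while ascending on a domain discriminator, and the server averages weights as in FedAvg. The module names, single-batch clients, and plain SGD steps are illustrative assumptions; FedMM replaces this local GDA step with its own saddle-point update.

```python
# Minimal sketch (not the authors' code) of the federated GDA baseline that
# FedMM improves on: clients descend on the feature extractor/classifier and
# ascend on a domain discriminator; the server averages weights (FedAvg).
import torch
import torch.nn.functional as F

def local_gda_step(extractor, classifier, disc, x, y, is_source, lr=1e-2):
    """One simultaneous gradient-descent-ascent step on a client batch.

    `y` may be None: clients with only unsupervised data skip the task loss,
    mirroring the label-imbalanced setting described in the abstract.
    """
    feat = extractor(x)
    d_logit = disc(feat)                                  # domain prediction
    d_target = torch.full_like(d_logit, float(is_source))
    dom_loss = F.binary_cross_entropy_with_logits(d_logit, d_target)
    task_loss = F.cross_entropy(classifier(feat), y) if y is not None else 0.0

    # Saddle objective: min over theta = (extractor, classifier), max over w = disc.
    theta = list(extractor.parameters()) + list(classifier.parameters())
    w = list(disc.parameters())
    obj = task_loss - dom_loss
    g_theta = torch.autograd.grad(obj, theta, retain_graph=True, allow_unused=True)
    g_w = torch.autograd.grad(obj, w)
    with torch.no_grad():
        for p, g in zip(theta, g_theta):                  # descent step
            if g is not None:
                p.sub_(lr * g)
        for p, g in zip(w, g_w):                          # ascent step
            p.add_(lr * g)

def fedavg(global_model, client_models):
    """Overwrite global weights with the mean of the client copies (FedAvg)."""
    with torch.no_grad():
        for name, p in global_model.named_parameters():
            p.copy_(torch.stack(
                [dict(m.named_parameters())[name] for m in client_models]).mean(0))
```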
Related papers
- Federated Learning under Partially Class-Disjoint Data via Manifold Reshaping [64.58402571292723]
We propose a manifold reshaping approach called FedMR to calibrate the feature space of local training.
We conduct extensive experiments on a range of datasets to demonstrate that our FedMR achieves much higher accuracy and better communication efficiency.
arXiv Detail & Related papers (2024-05-29T10:56:13Z)
- Locally Adaptive Federated Learning [30.19411641685853]
Federated learning is a paradigm of distributed machine learning in which multiple clients coordinate with a central server to learn a model.
Standard federated optimization methods such as Federated Averaging (FedAvg) ensure generalization among the clients.
We propose locally adaptive federated learning algorithms that leverage the local geometric information of each client's function.
arXiv Detail & Related papers (2023-07-12T17:02:32Z)
- Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
arXiv Detail & Related papers (2023-05-01T20:04:46Z)
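As a rough, hedged illustration of the GMM idea above (not the authors' code; the use of scikit-learn and the novelty score are assumptions), each client could fit a Gaussian mixture to its local inputs and use log-likelihoods both as an uncertainty signal and to flag novel samples:

```python
# Hedged sketch of per-client Gaussian mixtures in the spirit of FedGMM;
# scikit-learn usage and the novelty score are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_client_gmm(x_local: np.ndarray, n_components: int = 3) -> GaussianMixture:
    """Fit a GMM to one client's local input features."""
    return GaussianMixture(n_components=n_components, covariance_type="full").fit(x_local)

def novelty_scores(gmm: GaussianMixture, x: np.ndarray) -> np.ndarray:
    """Lower log-likelihood => more novel; usable for uncertainty quantification."""
    return -gmm.score_samples(x)

rng = np.random.default_rng(0)
client_x = rng.normal(size=(500, 8))            # one client's local inputs
gmm = fit_client_gmm(client_x)
far_away = rng.normal(loc=5.0, size=(3, 8))     # clearly out-of-distribution
print(novelty_scores(gmm, far_away))            # large scores flag novel samples
```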
- Beyond ADMM: A Unified Client-variance-reduced Adaptive Federated Learning Framework [82.36466358313025]
We propose a primal-dual FL algorithm, termed FedVRA, that allows one to adaptively control the variance-reduction level and bias of the global model.
Experiments based on (semi-supervised) image classification tasks demonstrate the superiority of FedVRA over existing schemes.
arXiv Detail & Related papers (2022-12-03T03:27:51Z)
- FedFM: Anchor-based Feature Matching for Data Heterogeneity in Federated Learning [91.74206675452888]
We propose a novel method FedFM, which guides each client's features to match shared category-wise anchors.
To achieve higher efficiency and flexibility, we also propose a FedFM variant, called FedFM-Lite, in which clients communicate with the server less frequently and at lower communication bandwidth cost.
arXiv Detail & Related papers (2022-10-14T08:11:34Z)
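One plausible reading of the anchor-matching idea, sketched below under the assumption of a simple L2 penalty (the exact FedFM objective and anchor-update rule are in the paper), is to pull each client-side feature toward the shared anchor of its class:

```python
# Illustrative anchor-matching penalty in the spirit of FedFM: pull each
# local feature toward the shared, server-broadcast anchor of its class.
# The exact FedFM objective and anchor-update rule are in the paper.
import torch

def anchor_matching_loss(features: torch.Tensor, labels: torch.Tensor,
                         anchors: torch.Tensor) -> torch.Tensor:
    """features: (B, D); labels: (B,); anchors: (num_classes, D)."""
    return ((features - anchors[labels]) ** 2).sum(dim=1).mean()

# Hypothetical usage inside one client's local step:
feats = torch.randn(16, 32, requires_grad=True)   # client-side features
labels = torch.randint(0, 10, (16,))
anchors = torch.randn(10, 32)                     # shared category-wise anchors
anchor_matching_loss(feats, labels, anchors).backward()
```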
- Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on our key observation, we design an efficient client sampling mechanism, i.e., Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
arXiv Detail & Related papers (2022-09-30T05:42:56Z)
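A plaintext caricature of the sampling idea (the real mechanism derives the imbalance measure under homomorphic encryption, which this sketch deliberately ignores) scores a candidate group by how far its pooled label histogram is from uniform and greedily adds the client that most reduces that score:

```python
# Plaintext sketch of class-balanced client sampling in the spirit of Fed-CBS;
# the real measure is derived under homomorphic encryption, omitted here.
import numpy as np

def imbalance(counts: np.ndarray) -> float:
    """Squared distance of the pooled class histogram from uniform (0 = balanced)."""
    p = counts / counts.sum()
    return float(((p - 1.0 / len(p)) ** 2).sum())

def greedy_sample(client_counts: list, k: int) -> list:
    """Greedily pick k clients whose pooled data is closest to class-balanced."""
    chosen, pooled = [], np.zeros_like(client_counts[0], dtype=float)
    for _ in range(k):
        best = min((i for i in range(len(client_counts)) if i not in chosen),
                   key=lambda i: imbalance(pooled + client_counts[i]))
        chosen.append(best)
        pooled += client_counts[best]
    return chosen

# Example: three classes, four clients with skewed local label counts.
counts = [np.array([90, 5, 5]), np.array([5, 90, 5]),
          np.array([5, 5, 90]), np.array([40, 30, 30])]
print(greedy_sample(counts, k=2))  # picks a near-balanced pair: [3, 1]
```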
- Federated Multi-Target Domain Adaptation [99.93375364579484]
Federated learning methods enable us to train machine learning models on distributed user data while preserving their privacy.
We consider a more practical scenario where the distributed client data is unlabeled, and a centralized labeled dataset is available on the server.
We propose an effective DualAdapt method to address the new challenges.
arXiv Detail & Related papers (2021-08-17T17:53:05Z)
- Fairness and Accuracy in Federated Learning [17.218814060589956]
This paper proposes FedFa, an algorithm to achieve greater fairness and accuracy in federated learning.
It introduces an optimization scheme that employs a double momentum gradient, thereby accelerating the convergence rate of the model.
It also proposes a weight selection algorithm that combines the information content of training accuracy and training frequency to measure client weights.
arXiv Detail & Related papers (2020-12-18T06:28:37Z)
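The summary mentions a double momentum gradient and accuracy/frequency-based weights; the sketch below is an assumed reading, not FedFa's exact scheme, keeping one momentum buffer on each client and a second on the server-side aggregate:

```python
# Assumed "double momentum" reading of FedFa: one momentum buffer on each
# client's local gradient, a second on the server-side aggregated update.
# The exact FedFa update and its accuracy/frequency weighting are in the paper.
import numpy as np

def client_step(w, grad, m_local, lr=0.1, beta=0.9):
    """Local heavy-ball step; returns updated weights and local momentum."""
    m_local = beta * m_local + grad
    return w - lr * m_local, m_local

def server_aggregate(w_global, client_ws, weights, m_server, beta=0.9):
    """Momentum over the weighted average of client updates.

    `weights` stands in for FedFa's combination of training-accuracy and
    training-frequency information; any normalized weighting works here.
    """
    avg_update = sum(a * (w - w_global) for a, w in zip(weights, client_ws))
    m_server = beta * m_server + avg_update
    return w_global + m_server, m_server
```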
- CatFedAvg: Optimising Communication-efficiency and Classification Accuracy in Federated Learning [2.2172881631608456]
We introduce a new family of Federated Learning algorithms called CatFedAvg.
It improves communication efficiency while also improving the quality of learning via a category coverage maximisation strategy.
Our experiments show an increase of roughly 10 absolute percentage points in accuracy on the MNIST dataset, with about 70% lower network transfer than FedAvg.
arXiv Detail & Related papers (2020-11-14T06:52:02Z)
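One way to picture a category-coverage strategy (a greedy max-coverage guess, not the paper's actual algorithm) is for the server to pick, each round, a client subset whose combined label sets cover as many categories as possible:

```python
# Illustrative category-coverage selection in the spirit of CatFedAvg: pick a
# client subset whose combined labels cover the most categories. The actual
# CatFedAvg strategy is described in the paper; this greedy cover is a guess.
def greedy_category_cover(client_labels: list, k: int) -> list:
    """Greedy max-coverage: choose up to k clients maximizing label coverage."""
    covered, chosen = set(), []
    for _ in range(k):
        best = max((i for i in range(len(client_labels)) if i not in chosen),
                   key=lambda i: len(client_labels[i] - covered), default=None)
        if best is None:
            break
        chosen.append(best)
        covered |= client_labels[best]
    return chosen

# Example: clients holding different label subsets (label-imbalanced setting).
print(greedy_category_cover([{0, 1}, {1, 2, 3}, {3, 4}, {0, 4}], k=2))  # -> [1, 3]
```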
This list is automatically generated from the titles and abstracts of the papers on this site.