FedCostWAvg: A new averaging for better Federated Learning
- URL: http://arxiv.org/abs/2111.08649v1
- Date: Tue, 16 Nov 2021 17:31:58 GMT
- Title: FedCostWAvg: A new averaging for better Federated Learning
- Authors: Leon Mächler, Ivan Ezhov, Florian Kofler, Suprosanna Shit, Johannes C. Paetzold, Timo Loehr, Benedikt Wiestler, Bjoern Menze
- Abstract summary: We propose a new aggregation strategy for federated learning that won the MICCAI Federated Tumor Challenge 2021.
Our method addresses the problem of how to aggregate multiple models that were trained on different data sets.
- Score: 1.1245087602142634
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a simple new aggregation strategy for federated learning that won
the MICCAI Federated Tumor Segmentation Challenge 2021 (FETS), the first ever
challenge on Federated Learning in the Machine Learning community. Our method
addresses the problem of how to aggregate multiple models that were trained on
different data sets. Conceptually, we propose a new way to choose the weights
when averaging the different models, thereby extending the current state of the
art (FedAvg). Empirical validation demonstrates that our approach reaches a
notable improvement in segmentation performance compared to FedAvg.
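The abstract does not spell out the weighting rule, but the idea of cost-weighted averaging can be sketched as follows. This is a hedged NumPy illustration, not the paper's exact formula: the weights mix the FedAvg dataset-size term with each client's relative cost decrease (assumed here to be the ratio of previous to current loss), controlled by a mixing factor `alpha` (all names are hypothetical).

```python
import numpy as np

def fedcostwavg(models, sizes, prev_costs, curr_costs, alpha=0.5):
    """Aggregate client parameter vectors with weights that mix dataset
    size (as in FedAvg) with each client's relative cost decrease."""
    sizes = np.asarray(sizes, dtype=float)
    # Clients whose loss dropped more get a larger cost-ratio term.
    k = np.asarray(prev_costs, dtype=float) / np.asarray(curr_costs, dtype=float)
    weights = alpha * sizes / sizes.sum() + (1 - alpha) * k / k.sum()
    stacked = np.stack([np.asarray(m, dtype=float) for m in models])
    # Weighted average of the stacked client models.
    return np.tensordot(weights, stacked, axes=1)
```

With `alpha=1.0` the rule reduces to plain FedAvg; smaller values shift weight toward clients whose loss dropped the most in the last round.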
Related papers
- Enhanced Few-Shot Class-Incremental Learning via Ensemble Models [34.84881941101568]
Few-shot class-incremental learning aims to continually fit new classes with limited training data.
The main challenges are overfitting the rare new training samples and forgetting old classes.
We propose a new ensemble model framework cooperating with data augmentation to boost generalization.
arXiv Detail & Related papers (2024-01-14T06:07:07Z)
- Tackling Computational Heterogeneity in FL: A Few Theoretical Insights [68.8204255655161]
We introduce and analyse a novel aggregation framework that allows for formalizing and tackling computational heterogeneous data.
The proposed aggregation algorithms are extensively analyzed from both theoretical and experimental perspectives.
arXiv Detail & Related papers (2023-07-12T16:28:21Z)
- Multi-View Class Incremental Learning [57.14644913531313]
Multi-view learning (MVL) has gained great success in integrating information from multiple perspectives of a dataset to improve downstream task performance.
This paper investigates a novel paradigm called multi-view class incremental learning (MVCIL), where a single model incrementally classifies new classes from a continual stream of views.
arXiv Detail & Related papers (2023-06-16T08:13:41Z)
- Guiding The Last Layer in Federated Learning with Pre-Trained Models [18.382057374270143]
Federated Learning (FL) is an emerging paradigm that allows a model to be trained across a number of participants without sharing data.
We show that fitting a classification head using the Nearest Class Means (NCM) can be done exactly and orders of magnitude more efficiently than existing proposals.
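As a rough illustration of the NCM idea (a sketch, not the paper's implementation): the classification head is simply the set of per-class mean feature vectors, computed in closed form without gradient training, and prediction assigns each sample to the nearest mean.

```python
import numpy as np

def ncm_fit(features, labels, num_classes):
    """Compute per-class mean feature vectors: the NCM 'classification head'.
    No iterative optimization is needed, which is why it is cheap and exact."""
    return np.stack([features[labels == c].mean(axis=0)
                     for c in range(num_classes)])

def ncm_predict(features, means):
    """Assign each sample to the class with the nearest mean (Euclidean)."""
    dists = np.linalg.norm(features[:, None, :] - means[None, :, :], axis=-1)
    return dists.argmin(axis=1)
```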
arXiv Detail & Related papers (2023-06-06T18:02:02Z)
- Federated Learning Aggregation: New Robust Algorithms with Guarantees [63.96013144017572]
Federated learning has been recently proposed for distributed model training at the edge.
This paper presents a complete general mathematical convergence analysis to evaluate aggregation strategies in a federated learning framework.
We derive novel aggregation algorithms that adapt the aggregation by weighting client contributions according to the value of their losses.
arXiv Detail & Related papers (2022-05-22T16:37:53Z)
- Team Cogitat at NeurIPS 2021: Benchmarks for EEG Transfer Learning Competition [55.34407717373643]
Building subject-independent deep learning models for EEG decoding faces the challenge of strong covariate shift.
Our approach is to explicitly align feature distributions at various layers of the deep learning model.
The methodology won first place in the 2021 Benchmarks in EEG Transfer Learning competition, hosted at the NeurIPS conference.
arXiv Detail & Related papers (2022-02-01T11:11:08Z)
- Minimax Demographic Group Fairness in Federated Learning [23.1988909029387]
Federated learning is an increasingly popular paradigm that enables a large number of entities to collaboratively learn better models.
We study minimax group fairness in federated learning scenarios where different participating entities may only have access to a subset of the population groups during the training phase.
We experimentally compare the proposed approach against other state-of-the-art methods in terms of group fairness in various federated learning setups.
arXiv Detail & Related papers (2022-01-20T17:13:54Z)
- WAFFLE: Weighted Averaging for Personalized Federated Learning [38.241216472571786]
We introduce WAFFLE, a personalized collaborative machine learning algorithm based on SCAFFOLD.
WAFFLE uses the Euclidean distance between clients' updates to weigh their individual contributions.
Our experiments demonstrate the effectiveness of WAFFLE compared with other methods.
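The exact WAFFLE weighting rule is defined in the paper; the following is only a minimal sketch of the idea, assuming inverse-distance weighting of client updates against a reference client's update (the choice of reference and the inverse-distance formula are illustrative assumptions, not the published algorithm).

```python
import numpy as np

def waffle_like_weights(client_updates, ref_update, eps=1e-8):
    """Illustrative personalization weights: clients whose updates lie
    closer (in Euclidean distance) to the reference client's update
    receive larger weights. Weights are normalized to sum to one."""
    d = np.array([np.linalg.norm(np.asarray(u) - np.asarray(ref_update))
                  for u in client_updates])
    w = 1.0 / (d + eps)  # eps avoids division by zero for identical updates
    return w / w.sum()
```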
arXiv Detail & Related papers (2021-10-13T18:40:54Z)
- Exploring Complementary Strengths of Invariant and Equivariant Representations for Few-Shot Learning [96.75889543560497]
In many real-world problems, collecting a large number of labeled samples is infeasible.
Few-shot learning is the dominant approach to address this issue, where the objective is to quickly adapt to novel categories in presence of a limited number of samples.
We propose a novel training mechanism that simultaneously enforces equivariance and invariance to a general set of geometric transformations.
arXiv Detail & Related papers (2021-03-01T21:14:33Z)
- Federated Residual Learning [53.77128418049985]
We study a new form of federated learning where the clients train personalized local models and make predictions jointly with the server-side shared model.
Using this new federated learning framework, the complexity of the central shared model can be minimized while still gaining all the performance benefits that joint training provides.
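A minimal linear least-squares sketch of the residual idea (the paper's actual model class may differ): each client fits a personalized model on what the shared server-side model leaves unexplained, and the final prediction is the sum of the two.

```python
import numpy as np

def fit_local_residual(X, y, shared_w):
    """Fit a client-specific residual model on the part of the targets
    that the shared linear model does not explain (least-squares sketch)."""
    residual = y - X @ shared_w
    local_w, *_ = np.linalg.lstsq(X, residual, rcond=None)
    return local_w

def joint_predict(X, shared_w, local_w):
    """Joint prediction: shared model output plus the local correction."""
    return X @ shared_w + X @ local_w
```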
arXiv Detail & Related papers (2020-03-28T19:55:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.