Decentralized adaptive clustering of deep nets is beneficial for client
collaboration
- URL: http://arxiv.org/abs/2206.08839v1
- Date: Fri, 17 Jun 2022 15:38:31 GMT
- Title: Decentralized adaptive clustering of deep nets is beneficial for client
collaboration
- Authors: Edvin Listo Zec, Ebba Ekblom, Martin Willbo, Olof Mogren and Sarunas
Girdzijauskas
- Abstract summary: We study the problem of training personalized deep learning models in a decentralized peer-to-peer setting.
Our contribution is an algorithm which for each client finds beneficial collaborations based on a similarity estimate for the local task.
- Score: 0.7012240324005975
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the problem of training personalized deep learning models in a
decentralized peer-to-peer setting, focusing on the setting where data
distributions differ between the clients and where different clients have
different local learning tasks. We study both covariate and label shift, and
our contribution is an algorithm which for each client finds beneficial
collaborations based on a similarity estimate for the local task. Our method
does not rely on hyperparameters which are hard to estimate, such as the number
of client clusters, but rather continuously adapts to the network topology
using soft cluster assignment based on a novel adaptive gossip algorithm. We
test the proposed method in various settings where data is not independent and
identically distributed among the clients. The experimental evaluation shows
that the proposed method performs better than previous state-of-the-art
algorithms for this problem setting, and handles situations well where previous
methods fail.
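To make the abstract's mechanism concrete, below is a minimal, hypothetical Python sketch of similarity-weighted soft gossip aggregation. It is an illustration under assumptions, not the authors' implementation: cosine similarity between parameter vectors stands in for the paper's task-similarity estimate, and `gossip_round`, `temperature`, and `mix` are invented names.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two flattened parameter vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def gossip_round(own: np.ndarray, peers: list[np.ndarray],
                 temperature: float = 0.1, mix: float = 0.5) -> np.ndarray:
    # One gossip round: each peer model is weighted by a softmax over
    # similarity scores, giving a soft (clusterless) assignment instead
    # of a hard, fixed number of client clusters.
    sims = np.array([cosine_similarity(own, p) for p in peers])
    logits = sims / temperature
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    peer_avg = np.sum([w * p for w, p in zip(weights, peers)], axis=0)
    # Convex combination keeps some of the client's own personalization.
    return (1.0 - mix) * own + mix * peer_avg
```

The softmax temperature controls how sharply a client concentrates collaboration on its most similar peers; as it shrinks, the soft assignment approaches hard clustering.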
Related papers
- Learn What You Need in Personalized Federated Learning [53.83081622573734]
Learn2pFed is a novel algorithm-unrolling-based personalized federated learning framework.
We show that Learn2pFed significantly outperforms previous personalized federated learning methods.
arXiv Detail & Related papers (2024-01-16T12:45:15Z)
- Federated Two Stage Decoupling With Adaptive Personalization Layers [5.69361786082969]
Federated learning has gained significant attention due to its ability to enable distributed learning while maintaining privacy constraints.
However, with heterogeneous client data it inherently suffers from significant learning degradation and slow convergence.
It is therefore natural to cluster homogeneous clients into the same group and aggregate model weights only within each group.
arXiv Detail & Related papers (2023-08-30T07:46:32Z)
- Locally Adaptive Federated Learning [30.19411641685853]
Federated learning is a paradigm of distributed machine learning in which multiple clients coordinate with a central server to learn a model.
Standard federated optimization methods such as Federated Averaging (FedAvg) ensure generalization among the clients.
We propose locally adaptive federated learning algorithms that leverage the local geometric information of each client's objective function.
arXiv Detail & Related papers (2023-07-12T17:02:32Z)
- Provably Personalized and Robust Federated Learning [47.50663360022456]
We propose simple algorithms which identify clusters of similar clients and train a personalized model per cluster.
The convergence rates of our algorithms asymptotically match those obtained if we knew the true underlying clustering of the clients, and are provably robust in the Byzantine setting.
arXiv Detail & Related papers (2023-06-14T09:37:39Z)
- Personalized Decentralized Federated Learning with Knowledge Distillation [5.469841541565307]
Personalization in federated learning functions as a coordinator for clients with high variance in data or behavior.
It is generally challenging to quantify similarity between clients in a decentralized network, where each user has only limited knowledge of other users' models.
We propose a personalized and fully decentralized FL algorithm, leveraging knowledge distillation techniques to empower each device so as to discern statistical distances between local models.
arXiv Detail & Related papers (2023-02-23T16:41:07Z)
- A One-shot Framework for Distributed Clustered Learning in Heterogeneous Environments [54.172993875654015]
The paper proposes a family of communication efficient methods for distributed learning in heterogeneous environments.
The one-shot approach, based on local computations at the users and a clustering-based aggregation step at the server, is shown to provide strong learning guarantees.
For strongly convex problems it is shown that, as long as the number of data points per user is above a threshold, the proposed approach achieves order-optimal mean-squared error rates in terms of the sample size.
arXiv Detail & Related papers (2022-09-22T09:04:10Z)
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two key hurdles in this setting: statistical heterogeneity across clients and straggling devices.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
- On the Convergence of Clustered Federated Learning [57.934295064030636]
In a federated learning system, the clients, e.g. mobile devices and organization participants, usually have different personal preferences or behavior patterns.
This paper proposes a novel weighted client-based clustered FL algorithm that leverages both each client's group and the individual client within a unified optimization framework.
arXiv Detail & Related papers (2022-02-13T02:39:19Z)
- Federated Multi-Target Domain Adaptation [99.93375364579484]
Federated learning methods enable us to train machine learning models on distributed user data while preserving its privacy.
We consider a more practical scenario where the distributed client data is unlabeled, and a centralized labeled dataset is available on the server.
We propose an effective DualAdapt method to address the new challenges.
arXiv Detail & Related papers (2021-08-17T17:53:05Z)
- Decentralized federated learning of deep neural networks on non-iid data [0.6335848702857039]
We tackle the non-IID problem of learning a personalized deep learning model in a decentralized setting.
We propose a method named Performance-Based Neighbor Selection (PENS) where clients with similar data detect each other and cooperate; a minimal sketch of this selection idea appears after the list.
PENS achieves higher accuracy than strong baselines.
arXiv Detail & Related papers (2021-07-18T19:05:44Z)
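For the PENS entry above, here is a minimal, hypothetical sketch of performance-based neighbor selection, assuming linear models stored as NumPy weight vectors; `local_loss`, `select_neighbors`, and `top_k` are illustrative names rather than the paper's API.

```python
import numpy as np

def local_loss(w: np.ndarray, X: np.ndarray, y: np.ndarray) -> float:
    # Mean squared error of a peer's linear model on this client's own data.
    return float(np.mean((X @ w - y) ** 2))

def select_neighbors(X: np.ndarray, y: np.ndarray,
                     peer_weights: dict[int, np.ndarray],
                     top_k: int = 3) -> list[int]:
    # PENS-style idea: peers whose models score the lowest loss on the
    # local data are kept as collaborators, on the assumption that low
    # local loss signals a similar underlying data distribution.
    losses = {pid: local_loss(w, X, y) for pid, w in peer_weights.items()}
    return sorted(losses, key=losses.__getitem__)[:top_k]
```

Selected neighbors would then exchange and average model weights in subsequent communication rounds, consistent with the decentralized setting the entry describes.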