Reinforcement Federated Learning Method Based on Adaptive OPTICS
Clustering
- URL: http://arxiv.org/abs/2306.12859v2
- Date: Fri, 23 Jun 2023 03:25:56 GMT
- Title: Reinforcement Federated Learning Method Based on Adaptive OPTICS
Clustering
- Authors: Tianyu Zhao, Junping Du, Yingxia Shao, and Zeli Guan
- Abstract summary: This paper proposes an adaptive OPTICS clustering algorithm for federated learning.
The clustering environment is perceived as a Markov decision process, with the goal of finding the best parameters for the OPTICS clustering.
The reliability and practicability of this method have been verified on experimental data, and its effectiveness and superiority have been demonstrated.
- Score: 19.73560248813166
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning is a distributed machine learning technology that
balances data privacy protection with shared computation on data. To protect
data privacy, federated learning trains shared models by executing distributed
training locally on participating devices and aggregating the local models into
a global model. A central problem in federated learning is the negative impact
of non-independent and identically distributed (non-IID) data across different
user terminals. To alleviate this problem, this paper proposes a reinforced
federated aggregation method based on adaptive OPTICS clustering. Specifically,
the method perceives the clustering environment as a Markov decision process
and models the adjustment of the parameter search direction, so as to find the
best clustering parameters and thereby the best federated aggregation strategy.
The core contribution of this paper is an adaptive OPTICS clustering algorithm
for federated learning. The algorithm combines OPTICS clustering with adaptive
learning and can effectively handle non-IID data across different user
terminals. By perceiving the clustering environment as a Markov decision
process, it finds the best OPTICS parameters without manual assistance, thereby
obtaining the best federated aggregation method and achieving better
performance. The reliability and practicability of this method have been
verified on experimental data, and its effectiveness and superiority have been
demonstrated.
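Since the abstract describes the method only at a high level, the following is a minimal Python sketch of the general idea under stated assumptions: each client's flattened model update is treated as a point, OPTICS parameters are searched by maximizing a clustering reward (a silhouette score is used here as a stand-in for the paper's reward signal), and client models are averaged per cluster. The greedy grid search, the parameter grids, and all function names are illustrative assumptions; the paper instead models the parameter search as a Markov decision process.

```python
# Hedged sketch, not the authors' exact algorithm. Assumes at least a handful
# of clients, each represented by a flattened parameter-update vector.
import numpy as np
from sklearn.cluster import OPTICS
from sklearn.metrics import silhouette_score

def cluster_reward(X, labels):
    """Reward for a candidate clustering: silhouette score over non-noise points."""
    mask = labels != -1
    if mask.sum() < 3 or len(set(labels[mask])) < 2:
        return -1.0  # degenerate clusterings get the worst possible reward
    return silhouette_score(X[mask], labels[mask])

def adaptive_optics(X, min_samples_grid=(2, 3, 5), xi_grid=(0.01, 0.05, 0.1)):
    """Greedy grid search over OPTICS parameters; the paper models this search
    as a Markov decision process instead of enumerating a fixed grid."""
    best_labels, best_reward, best_params = None, -np.inf, None
    for m in min_samples_grid:
        if m >= len(X):  # OPTICS needs at least min_samples points
            continue
        for xi in xi_grid:
            labels = OPTICS(min_samples=m, xi=xi).fit(X).labels_
            r = cluster_reward(X, labels)
            if r > best_reward:
                best_labels, best_reward, best_params = labels, r, (m, xi)
    return best_labels, best_reward, best_params

def clustered_aggregate(client_updates):
    """Average the client updates within each discovered cluster (one model per cluster)."""
    X = np.stack(client_updates)  # shape: (n_clients, n_parameters)
    labels, _, _ = adaptive_optics(X)
    return {int(c): X[labels == c].mean(axis=0) for c in set(labels) if c != -1}
```

The per-cluster averaging step is the part most directly implied by the abstract; in the paper's formulation, the parameter-search step would be driven by a learned policy rather than an exhaustive grid.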
Related papers
- Personalized Federated Learning for Cross-view Geo-localization [49.40531019551957]
We propose a methodology combining Federated Learning (FL) with Cross-view Image Geo-localization (CVGL) techniques.
Our method implements a coarse-to-fine approach, where clients share only the coarse feature extractors while keeping fine-grained features specific to local environments.
Results demonstrate that our federated CVGL method achieves performance close to centralized training while maintaining data privacy.
arXiv Detail & Related papers (2024-11-07T13:25:52Z)
- Faster Convergence on Heterogeneous Federated Edge Learning: An Adaptive Clustered Data Sharing Approach [27.86468387141422]
Federated Edge Learning (FEEL) is emerging as a pioneering distributed machine learning paradigm for 6G hyper-connectivity.
Current FEEL algorithms struggle with non-independent and non-identically distributed (non-IID) data, leading to elevated communication costs and compromised model accuracy.
We introduce a clustered data sharing framework, mitigating data heterogeneity by selectively sharing partial data from cluster heads to trusted associates.
Experiments show that the proposed framework facilitates FEEL on non-IID datasets with faster convergence rate and higher model accuracy in a limited communication environment.
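As an illustration of the cluster-head data-sharing idea summarized above, here is a hedged sketch (not the paper's protocol): clients are grouped by label-distribution similarity, and each cluster head shares a small fraction of its samples with the other members of its cluster. The grouping method, the head-selection rule, and the share ratio are all assumptions for illustration.

```python
# Hedged sketch of cluster-head data sharing; assumes n_clients >= n_clusters.
import numpy as np
from sklearn.cluster import KMeans

def share_from_cluster_heads(client_label_hists, client_sizes,
                             n_clusters=3, share_ratio=0.05, seed=0):
    """client_label_hists: (n_clients, n_classes) normalized label histograms.
    client_sizes: number of local samples held by each client.
    Returns {receiver_id: (head_id, n_shared_samples)} as an illustration only."""
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(client_label_hists)
    plan = {}
    for c in range(n_clusters):
        members = np.where(labels == c)[0]
        if len(members) < 2:
            continue  # nothing to share inside a singleton cluster
        # Assumption: the client with the most data acts as the cluster head.
        head = members[np.argmax([client_sizes[m] for m in members])]
        n_shared = int(share_ratio * client_sizes[head])
        for m in members:
            if m != head:
                plan[int(m)] = (int(head), n_shared)
    return plan
```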
arXiv Detail & Related papers (2024-06-14T07:22:39Z)
- FedAC: An Adaptive Clustered Federated Learning Framework for Heterogeneous Data [21.341280782748278]
Clustered federated learning (CFL) has been proposed to mitigate the performance deterioration stemming from data heterogeneity in FL.
We propose an adaptive CFL framework, named FedAC, which efficiently integrates global knowledge into intra-cluster learning.
Experiments show that FedAC achieves superior empirical performance, increasing the test accuracy by around 1.82% and 12.67%.
arXiv Detail & Related papers (2024-03-25T06:43:28Z)
- Efficient Cluster Selection for Personalized Federated Learning: A Multi-Armed Bandit Approach [2.5477011559292175]
Federated learning (FL) offers a decentralized training approach for machine learning models, prioritizing data privacy.
In this paper, we introduce a dynamic Upper Confidence Bound (dUCB) algorithm inspired by the multi-armed bandit (MAB) approach.
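To make the bandit-based cluster selection concrete, below is a minimal UCB1 sketch in Python; the paper's dynamic UCB (dUCB) adds a time-varying exploration term that is not reproduced here, and the reward definition (e.g., a client's validation accuracy after joining a cluster) is an assumption.

```python
# Minimal UCB1 sketch for choosing among candidate clusters (illustrative only).
import math

class UCBClusterSelector:
    def __init__(self, n_clusters, c=2.0):
        self.counts = [0] * n_clusters    # times each cluster was selected
        self.values = [0.0] * n_clusters  # running mean reward per cluster
        self.c = c

    def select(self):
        for k, n in enumerate(self.counts):
            if n == 0:
                return k  # try every cluster at least once
        total = sum(self.counts)
        ucb = [self.values[k] + math.sqrt(self.c * math.log(total) / self.counts[k])
               for k in range(len(self.counts))]
        return max(range(len(ucb)), key=ucb.__getitem__)

    def update(self, k, reward):
        # reward: e.g., the client's validation accuracy after training with
        # cluster k's model (an assumption, not the paper's exact reward).
        self.counts[k] += 1
        self.values[k] += (reward - self.values[k]) / self.counts[k]
```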
arXiv Detail & Related papers (2023-10-29T16:46:50Z)
- Federated Two Stage Decoupling With Adaptive Personalization Layers [5.69361786082969]
Federated learning has gained significant attention due to its ability to enable distributed learning while maintaining privacy constraints.
However, federated learning inherently suffers from significant learning degradation and slow convergence when client data are heterogeneous.
It is natural to employ the concept of clustering homogeneous clients into the same group, allowing only the model weights within each group to be aggregated.
arXiv Detail & Related papers (2023-08-30T07:46:32Z)
- Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on our key observation, we design an efficient client sampling mechanism, i.e., Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
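A hedged sketch of what such a class-imbalance measure could look like (not the paper's exact definition, and omitting the homomorphic-encryption step): the squared L2 distance between the selected group's aggregate class distribution and the uniform distribution, used here to greedily pick clients.

```python
# Illustrative class-imbalance measure and greedy client selection (assumptions).
import numpy as np

def class_imbalance(selected_class_counts):
    """selected_class_counts: (n_selected_clients, n_classes) raw label counts."""
    counts = np.asarray(selected_class_counts, dtype=float)
    group = counts.sum(axis=0)
    p = group / group.sum()                   # aggregate class distribution
    uniform = np.full_like(p, 1.0 / len(p))
    return float(np.sum((p - uniform) ** 2))  # 0 means perfectly balanced

def greedy_select(all_counts, k):
    """Greedily pick k clients whose combined data is as class-balanced as possible."""
    chosen, remaining = [], list(range(len(all_counts)))
    for _ in range(k):
        best = min(remaining,
                   key=lambda i: class_imbalance([all_counts[j] for j in chosen + [i]]))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```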
arXiv Detail & Related papers (2022-09-30T05:42:56Z)
- A One-shot Framework for Distributed Clustered Learning in Heterogeneous Environments [54.172993875654015]
The paper proposes a family of communication-efficient methods for distributed learning in heterogeneous environments.
The one-shot approach, based on local computations at the users and a clustering-based aggregation step at the server, is shown to provide strong learning guarantees.
For strongly convex problems it is shown that, as long as the number of data points per user is above a threshold, the proposed approach achieves order-optimal mean-squared error rates in terms of the sample size.
arXiv Detail & Related papers (2022-09-22T09:04:10Z)
- Heterogeneous Federated Learning via Grouped Sequential-to-Parallel Training [60.892342868936865]
Federated learning (FL) is a rapidly growing privacy-preserving collaborative machine learning paradigm.
We propose a data heterogeneous-robust FL approach, FedGSP, to address this challenge.
We show that FedGSP improves the accuracy by 3.7% on average compared with seven state-of-the-art approaches.
arXiv Detail & Related papers (2022-01-31T03:15:28Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Auction Based Clustered Federated Learning in Mobile Edge Computing System [13.710325615076687]
Federated learning is a distributed machine learning solution that uses local computing and local data to train Artificial Intelligence (AI) models.
We propose a cluster-based client selection method that can generate a federated virtual dataset satisfying the global distribution.
We show that our proposed selection methods and auction-based federated learning achieve better performance with a Convolutional Neural Network (CNN) model under different data distributions.
arXiv Detail & Related papers (2021-03-12T08:54:27Z)
- Multi-Center Federated Learning [62.57229809407692]
This paper proposes a novel multi-center aggregation mechanism for federated learning.
It learns multiple global models from the non-IID user data and simultaneously derives the optimal matching between users and centers.
Our experimental results on benchmark datasets show that our method outperforms several popular federated learning methods.
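As a rough illustration of learning multiple global models together with a user-to-center matching (not the paper's exact objective or update rule), here is a k-means-style sketch: alternately match each client update to its nearest center and recompute each center as the mean of its assigned updates. The number of centers, initialization, and distance metric are assumptions.

```python
# Hedged k-means-style sketch of multi-center aggregation; assumes
# n_clients >= n_centers and flattened client updates of equal dimension.
import numpy as np

def multi_center_aggregate(client_updates, n_centers=3, n_iters=10, seed=0):
    X = np.stack(client_updates).astype(float)  # (n_clients, dim)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_centers, replace=False)]
    for _ in range(n_iters):
        # Matching step: assign each client update to its closest center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
        assign = d.argmin(axis=1)
        # Update step: each center becomes the mean of its assigned updates.
        for c in range(n_centers):
            if (assign == c).any():
                centers[c] = X[assign == c].mean(axis=0)
    return centers, assign  # multiple global models and the user-center matching
```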
arXiv Detail & Related papers (2020-05-03T09:14:31Z)