DQRE-SCnet: A novel hybrid approach for selecting users in Federated
Learning with Deep-Q-Reinforcement Learning based on Spectral Clustering
- URL: http://arxiv.org/abs/2111.04105v1
- Date: Sun, 7 Nov 2021 15:14:29 GMT
- Title: DQRE-SCnet: A novel hybrid approach for selecting users in Federated
Learning with Deep-Q-Reinforcement Learning based on Spectral Clustering
- Authors: Mohsen Ahmadi, Ali Taghavirashidizadeh, Danial Javaheri, Armin
Masoumian, Saeid Jafarzadeh Ghoushchi, Yaghoub Pourasad
- Abstract summary: Machine learning models built on sensitive real-world data promise advances in areas ranging from medical screening to disease outbreaks, agriculture, industry, defense science, and more.
In many applications, participants would benefit from pooling their private data sets, training detailed machine learning models on the combined data, and sharing the benefits of using these models.
Due to existing privacy and security concerns, most people avoid sharing sensitive data for training. Federated Learning allows multiple parties to jointly train a machine learning model without any user revealing their local data to a central server.
- Score: 1.174402845822043
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Machine learning models built on sensitive real-world data promise
advances in areas ranging from medical screening to disease outbreaks,
agriculture, industry, defense science, and more. In many applications,
participants would benefit from pooling their private data sets, training
detailed machine learning models on the combined data, and sharing the
benefits of using these models. Due to existing privacy and security concerns,
however, most people avoid sharing sensitive data for training. Federated
Learning allows multiple parties to jointly train a machine learning model
without any user revealing their local data to a central server. This form of
collaborative, privacy-preserving learning comes at the cost of substantial
communication during training. Most large-scale machine-learning applications
require decentralized learning over data sets generated on many devices and in
many places. Such data sets are a major obstacle to decentralized learning,
because their diverse contexts lead to significant differences in data
distributions across devices and locations. Researchers have proposed several
ways to achieve data privacy in Federated Learning systems, but challenges
remain when the local data are not homogeneous (non-IID). This research
selects the nodes (users) that share their data in Federated Learning so that
the aggregated data approaches an independent and identically distributed
(IID) balance, improving accuracy, reducing training time, and speeding up
convergence. To this end, it presents a Deep-Q-Reinforcement Learning ensemble
based on Spectral Clustering, called DQRE-SCnet, to choose a subset of devices
in each communication round. The results show that the approach can reduce the
number of communication rounds needed in Federated Learning.
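
The listing above includes no code, so the following is a minimal, hypothetical
Python sketch of the kind of selection loop the abstract describes: spectral
clustering groups clients by the similarity of their model updates, and a small
Deep-Q network learns which cluster to draw a participant from in each
communication round. Every name, shape, and the placeholder reward are
assumptions made for illustration, not the authors' implementation.

```python
# Hypothetical DQRE-SCnet-style client selection (illustrative sketch only).
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
num_clients, update_dim, num_clusters, rounds = 20, 16, 4, 5

# Stand-ins for each client's flattened model update (in a real system these
# would come from local training).
client_updates = rng.normal(size=(num_clients, update_dim))

# Spectral clustering on a cosine-similarity affinity matrix groups clients
# whose updates look alike; |.| keeps the affinity non-negative.
norms = np.linalg.norm(client_updates, axis=1, keepdims=True)
affinity = np.abs((client_updates @ client_updates.T) / (norms @ norms.T))
labels = SpectralClustering(
    n_clusters=num_clusters, affinity="precomputed", random_state=0
).fit_predict(affinity)

# Tiny Q-network: state = running per-cluster reward estimate,
# action = which cluster to sample a participant from.
q_net = nn.Sequential(nn.Linear(num_clusters, 32), nn.ReLU(),
                      nn.Linear(32, num_clusters))
opt = torch.optim.Adam(q_net.parameters(), lr=1e-2)
state = torch.zeros(num_clusters)

for rnd in range(rounds):
    # Epsilon-greedy action selection over clusters.
    if rng.random() < 0.3:
        action = int(rng.integers(num_clusters))
    else:
        action = int(torch.argmax(q_net(state)))
    chosen = int(rng.choice(np.flatnonzero(labels == action)))

    # Placeholder reward; the paper would derive this from how the aggregated
    # model performs after including the chosen client's update.
    reward = float(rng.normal())

    # One-step Q-learning update (no replay buffer, for brevity).
    with torch.no_grad():
        next_state = state.clone()
        next_state[action] = 0.9 * next_state[action] + 0.1 * reward
        target = reward + 0.95 * q_net(next_state).max()
    loss = (q_net(state)[action] - target) ** 2
    opt.zero_grad(); loss.backward(); opt.step()
    state = next_state
    print(f"round {rnd}: cluster {action}, client {chosen}, reward {reward:+.2f}")
```

In the paper's setting the reward would presumably reflect the aggregated
model's validation accuracy; the random placeholder above merely keeps the
sketch self-contained and runnable.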
Related papers
- A Novel Neural Network-Based Federated Learning System for Imbalanced
and Non-IID Data [2.9642661320713555]
Most machine learning algorithms rely heavily on large amounts of data, which may be collected from various sources.
To combat this issue, researchers have introduced federated learning, where a prediction model is learnt while preserving the privacy of clients' data.
In this research, we propose a centralized, neural network-based federated learning system.
arXiv Detail & Related papers (2023-11-16T17:14:07Z)
- Benchmarking FedAvg and FedCurv for Image Classification Tasks [1.376408511310322]
This paper focuses on the problem of statistical heterogeneity of the data in the same federated network.
Several Federated Learning algorithms, such as FedAvg, FedProx, and Federated Curvature (FedCurv), have already been proposed; a minimal FedAvg sketch appears after this list.
As a side product of this work, we release the non-IID version of the datasets we used, so as to facilitate further comparisons from the FL community.
arXiv Detail & Related papers (2023-03-31T10:13:01Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response during both inference and backpropagation; a toy sketch of this exchange appears after this list.
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- FedILC: Weighted Geometric Mean and Invariant Gradient Covariance for Federated Learning on Non-IID Data [69.0785021613868]
Federated learning is a distributed machine learning approach in which a shared server model learns by aggregating parameter updates computed locally on the training data held in spatially-distributed client silos.
We propose the Federated Invariant Learning Consistency (FedILC) approach, which leverages the gradient covariance and the geometric mean of Hessians to capture both inter-silo and intra-silo consistencies.
This is relevant to various fields such as healthcare, computer vision, and the Internet of Things (IoT).
arXiv Detail & Related papers (2022-05-19T03:32:03Z)
- Personalization Improves Privacy-Accuracy Tradeoffs in Federated Optimization [57.98426940386627]
We show that coordinating local learning with private centralized learning yields a generically useful and improved tradeoff between accuracy and privacy.
We illustrate our theoretical results with experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2022-02-10T20:44:44Z)
- Comparative assessment of federated and centralized machine learning [0.0]
Federated Learning (FL) is a privacy-preserving machine learning scheme, where training happens with data federated across devices.
In this paper, we discuss the various factors that affect federated learning training, arising from the non-IID distributed nature of the data.
We show that federated learning does have a cost advantage when the model sizes to be trained are not especially large.
arXiv Detail & Related papers (2022-02-03T11:20:47Z)
- RelaySum for Decentralized Deep Learning on Heterogeneous Data [71.36228931225362]
In decentralized machine learning, workers compute model updates on their local data.
Because workers communicate only with a few neighbors and without central coordination, these updates propagate progressively over the network.
This paradigm enables distributed training on networks without all-to-all connectivity, helping to protect data privacy as well as to reduce the communication cost of distributed training in data centers.
arXiv Detail & Related papers (2021-10-08T14:55:32Z)
- Federated Learning Versus Classical Machine Learning: A Convergence Comparison [7.730827805192975]
In the past few decades, machine learning has revolutionized data processing for large-scale applications.
In particular, federated learning allows participants to collaboratively train local models on local data without revealing their sensitive information to the central cloud server.
The simulation results demonstrate that federated learning achieves higher convergence within limited communication rounds while maintaining participants' anonymity.
arXiv Detail & Related papers (2021-07-22T17:14:35Z)
- Decentralized federated learning of deep neural networks on non-iid data [0.6335848702857039]
We tackle the non-convex problem of learning a personalized deep learning model in a decentralized setting.
We propose a method named Performance-Based Neighbor Selection (PENS) where clients with similar data detect each other and cooperate.
PENS is able to achieve higher accuracies as compared to strong baselines.
arXiv Detail & Related papers (2021-07-18T19:05:44Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
- Multi-Center Federated Learning [62.57229809407692]
This paper proposes a novel multi-center aggregation mechanism for federated learning.
It learns multiple global models from the non-IID user data and simultaneously derives the optimal matching between users and centers.
Our experimental results on benchmark datasets show that our method outperforms several popular federated learning methods.
arXiv Detail & Related papers (2020-05-03T09:14:31Z)
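
Since several entries above benchmark FedAvg without defining it (see the
forward reference in the FedAvg/FedCurv entry), here is a minimal FedAvg
sketch; the linear model, client data, and hyperparameters are toy assumptions,
not code from any paper listed.

```python
# Minimal FedAvg sketch (toy illustration): each client takes local gradient
# steps on its own data, and the server averages the resulting weights in
# proportion to local dataset sizes.
import numpy as np

rng = np.random.default_rng(1)
dim, num_clients = 5, 4
true_w = rng.normal(size=dim)

# Unequal client datasets simulate (mild) statistical heterogeneity.
clients = []
for n in rng.integers(20, 60, size=num_clients):
    X = rng.normal(size=(n, dim))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    clients.append((X, y))

def local_update(w, X, y, lr=0.05, epochs=5):
    """A few epochs of full-batch gradient descent on one client's data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w_global = np.zeros(dim)
for rnd in range(10):  # communication rounds
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    local_ws = [local_update(w_global.copy(), X, y) for X, y in clients]
    # FedAvg aggregation: dataset-size-weighted average of client weights.
    w_global = np.average(local_ws, axis=0, weights=sizes)
    mse = np.mean([np.mean((X @ w_global - y) ** 2) for X, y in clients])
    print(f"round {rnd}: mean client MSE = {mse:.4f}")
```

The dataset-size weighting in the aggregation step is the defining choice of
FedAvg; FedProx and FedCurv instead modify the local objective (a proximal
term and a curvature penalty, respectively).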
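
Likewise, for the Split Learning protocol summarized in the
representation-sharing entry, here is a toy numpy sketch of one smashed-data
exchange; the two-layer model and all shapes are assumptions chosen only to
keep the example self-contained.

```python
# Hypothetical split-learning round for one client: only cut-layer
# activations ("smashed data") and their gradients cross the network.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(8, 10))                 # one client batch (stays local)
y = rng.integers(0, 2, size=(8, 1)).astype(float)

W_client = rng.normal(size=(10, 6)) * 0.1    # layers before the cut, on device
W_server = rng.normal(size=(6, 1)) * 0.1     # layers after the cut, on server

for step in range(3):
    # Client forward pass up to the cut layer; only these activations
    # ("smashed data") leave the device, never the raw inputs X.
    smashed = np.tanh(X @ W_client)

    # Server completes the forward pass and computes the loss.
    pred = 1 / (1 + np.exp(-(smashed @ W_server)))
    loss = np.mean((pred - y) ** 2)

    # Server backpropagates to the cut layer and returns grad_smashed,
    # which the client must wait for before it can update.
    grad_pred = 2 * (pred - y) / len(y)
    grad_logits = grad_pred * pred * (1 - pred)
    grad_W_server = smashed.T @ grad_logits
    grad_smashed = grad_logits @ W_server.T

    # Client finishes backpropagation through its own layers.
    grad_W_client = X.T @ (grad_smashed * (1 - smashed ** 2))

    W_server -= 0.1 * grad_W_server
    W_client -= 0.1 * grad_W_client
    print(f"step {step}: loss = {loss:.4f}")
```

Only `smashed` and `grad_smashed` cross the network here; the raw batch `X`
never leaves the client, which is the privacy argument for SL, at the cost of
the client blocking on the server's response at every step.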