Embedding Alignment for Unsupervised Federated Learning via Smart Data
Exchange
- URL: http://arxiv.org/abs/2208.02856v1
- Date: Thu, 4 Aug 2022 19:26:59 GMT
- Title: Embedding Alignment for Unsupervised Federated Learning via Smart Data
Exchange
- Authors: Satyavrat Wagle, Seyyedali Hosseinalipour, Naji Khosravan, Mung
Chiang, Christopher G. Brinton
- Abstract summary: Federated learning (FL) has been recognized as one of the most promising solutions for distributed machine learning (ML).
We develop a novel methodology, Cooperative Federated unsupervised Contrastive Learning (CF-CL), for FL across edge devices with unlabeled datasets.
- Score: 21.789359767103154
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) has been recognized as one of the most promising
solutions for distributed machine learning (ML). In most of the current
literature, FL has been studied for supervised ML tasks, in which edge devices
collect labeled data. Nevertheless, in many applications, it is impractical to
assume the existence of labeled data across devices. To this end, we develop a
novel methodology, Cooperative Federated unsupervised Contrastive Learning
(CF-CL), for FL across edge devices with unlabeled datasets. CF-CL employs
local device cooperation where data are exchanged among devices through
device-to-device (D2D) communications to avoid local model bias resulting from
non-independent and identically distributed (non-i.i.d.) local datasets. CF-CL
introduces a push-pull smart data sharing mechanism tailored to unsupervised FL
settings, in which each device pushes a subset of its local datapoints to its
neighbors as reserved datapoints and pulls a set of datapoints from its
neighbors, sampled through a probabilistic importance sampling technique. We
demonstrate that CF-CL (i) aligns the unsupervised learned latent spaces across
devices, (ii) speeds up global convergence, allowing for less frequent global
model aggregations, and (iii) remains effective in extreme non-i.i.d. data
settings across the devices.
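As a concrete illustration of the push-pull exchange above, here is a minimal Python sketch. It assumes uniform sampling for the push step and a distance-to-centroid importance score for the pull step; the function names and the scoring rule are illustrative choices, not necessarily those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def push(local_data, k, rng):
    """Push step: a device reserves k uniformly sampled local
    datapoints to share with its D2D neighbors."""
    idx = rng.choice(len(local_data), size=k, replace=False)
    return local_data[idx]

def pull(reserved, local_data, embed_fn, k, rng):
    """Pull step: sample k of a neighbor's reserved datapoints with
    probability proportional to an importance score. As one plausible
    proxy (the paper's exact rule may differ), points whose embeddings
    lie far from the local embedding centroid (i.e., datapoints
    under-represented locally) are favored."""
    centroid = embed_fn(local_data).mean(axis=0)
    dists = np.linalg.norm(embed_fn(reserved) - centroid, axis=1)
    probs = dists / dists.sum()
    idx = rng.choice(len(reserved), size=k, replace=False, p=probs)
    return reserved[idx]

# Toy demo: two devices with non-i.i.d. 2-D data; identity "embedding".
device_a = rng.normal(0.0, 1.0, size=(100, 2))
device_b = rng.normal(3.0, 1.0, size=(100, 2))
reserved_b = push(device_b, k=20, rng=rng)              # B pushes to neighbor A
pulled_by_a = pull(reserved_b, device_a, lambda x: x, k=5, rng=rng)
```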
Related papers
- Enhancing Federated Learning Convergence with Dynamic Data Queue and Data Entropy-driven Participant Selection [13.825031686864559]
Federated Learning (FL) is a decentralized approach for collaborative model training on edge devices.
We present a method to improve convergence in FL by creating a global subset of data on the server and dynamically distributing it across devices.
Our approach results in a substantial accuracy boost of approximately 5% for the MNIST dataset, around 18% for CIFAR-10, and 20% for CIFAR-100 with a 10% global subset of data, outperforming the state-of-the-art (SOTA) aggregation algorithms.
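A minimal sketch of the entropy-driven participant selection described above, assuming devices are ranked by the Shannon entropy of their local label distributions; the selection rule and all names are illustrative, and the paper's exact criterion may differ.

```python
import numpy as np

def label_entropy(labels, num_classes):
    """Shannon entropy of a device's label distribution; low entropy
    signals a skewed (non-IID) local dataset."""
    counts = np.bincount(labels, minlength=num_classes)
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def select_participants(device_labels, num_classes, m):
    """One plausible entropy-driven rule: prioritize the m devices with
    the most skewed label distributions, so the dynamically distributed
    global data slice helps them most."""
    ents = [label_entropy(y, num_classes) for y in device_labels]
    return np.argsort(ents)[:m]

# Toy usage: three devices with 10-class labels; pick the 2 most skewed.
devices = [np.array([0] * 90 + [1] * 10),   # highly skewed
           np.array([5] * 50 + [6] * 50),   # two classes only
           np.arange(100) % 10]             # balanced
print(select_participants(devices, num_classes=10, m=2))
```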
arXiv Detail & Related papers (2024-10-23T11:47:04Z)
- Unsupervised Federated Optimization at the Edge: D2D-Enabled Learning without Labels [14.696896223432507]
Federated learning (FL) is a popular solution for distributed machine learning (ML).
CF-CL employs local device cooperation where either explicit (i.e., raw data) or implicit (i.e., embeddings) information is exchanged through device-to-device (D2D) communications.
arXiv Detail & Related papers (2024-04-15T15:17:38Z)
- Clustered Data Sharing for Non-IID Federated Learning over Wireless Networks [39.80420645943706]
Federated Learning (FL) is a distributed machine learning approach to leverage data from the Internet of Things (IoT).
Current FL algorithms face the challenges of non-independent and identically distributed (non-IID) data, which causes high communication costs and model accuracy declines.
We propose a clustered data sharing framework which shares partial data from cluster heads with credible associates through device-to-device (D2D) communication.
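A rough sketch of this sharing step, assuming a fixed sharing ratio and a uniform random split; both are illustrative simplifications, since the paper optimizes the allocation.

```python
import numpy as np

rng = np.random.default_rng(1)

def share_from_head(head_data, associates, ratio, rng):
    """A cluster head forwards a fraction of its local data to each
    credible associate over D2D links. The fixed 'ratio' and the
    uniform random split are assumptions for illustration."""
    n_share = int(ratio * len(head_data))
    return {dev: head_data[rng.choice(len(head_data), size=n_share,
                                      replace=False)]
            for dev in associates}

# Toy usage: a head with 200 samples shares 10% with two associates.
head = rng.normal(size=(200, 8))
shared = share_from_head(head, associates=["dev3", "dev7"], ratio=0.1, rng=rng)
```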
arXiv Detail & Related papers (2023-02-17T07:11:02Z)
- Federated Learning and Meta Learning: Approaches, Applications, and Directions [94.68423258028285]
In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta).
Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and their applications over wireless networks.
arXiv Detail & Related papers (2022-10-24T10:59:29Z)
- Online Data Selection for Federated Learning with Limited Storage [53.46789303416799]
Federated Learning (FL) has been proposed to achieve distributed machine learning among networked devices.
The impact of on-device storage on the performance of FL has not yet been explored.
In this work, we take the first step to consider the online data selection for FL with limited on-device storage.
arXiv Detail & Related papers (2022-09-01T03:27:33Z)
- Multi-Edge Server-Assisted Dynamic Federated Learning with an Optimized Floating Aggregation Point [51.47520726446029]
Cooperative edge learning (CE-FL) is a distributed machine learning architecture.
We model the processes carried out during CE-FL and provide an analytical characterization of its training.
We show the effectiveness of our framework with the data collected from a real-world testbed.
arXiv Detail & Related papers (2022-03-26T00:41:57Z)
- Parallel Successive Learning for Dynamic Distributed Model Training over Heterogeneous Wireless Networks [50.68446003616802]
Federated learning (FedL) has emerged as a popular technique for distributing model training over a set of wireless devices.
We develop parallel successive learning (PSL), which expands the FedL architecture along three dimensions.
Our analysis sheds light on the notion of cold vs. warmed-up models and on model inertia in distributed machine learning.
arXiv Detail & Related papers (2022-02-07T05:11:01Z)
- Federated Learning with Downlink Device Selection [92.14944020945846]
We study federated edge learning, where a global model is trained collaboratively using privacy-sensitive data at the edge of a wireless network.
A parameter server (PS) keeps track of the global model and shares it with the wireless edge devices for training using their private local data.
We consider device selection based on downlink channels over which the PS shares the global model with the devices.
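A minimal sketch of channel-based selection, assuming the simplest rule of keeping the m devices with the strongest downlink channels; the paper studies more refined selection criteria, so this is only a starting point.

```python
import numpy as np

def select_by_downlink(channel_gains, m):
    """Pick the m devices with the strongest downlink channels, so the
    broadcast global model reaches them reliably. This is an assumed
    baseline rule, not the paper's full selection scheme."""
    return np.argsort(channel_gains)[-m:]

# Toy usage: 6 devices, select the 3 with the best downlink channels.
gains = np.array([0.2, 1.5, 0.7, 2.1, 0.4, 1.1])
print(select_by_downlink(gains, m=3))   # indices of the selected devices
```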
arXiv Detail & Related papers (2021-07-07T22:42:39Z)
- FedMix: Approximation of Mixup under Mean Augmented Federated Learning [60.503258658382]
Federated learning (FL) allows edge devices to collectively learn a model without directly sharing data within each device.
Current state-of-the-art algorithms suffer from performance degradation as the heterogeneity of local data across clients increases.
We propose a new augmentation algorithm, named FedMix, which is inspired by a phenomenal yet simple data augmentation method, Mixup.
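A minimal sketch of the underlying idea, assuming clients receive batch-averaged data from other clients and mix it with local examples in Mixup fashion; note that FedMix itself trains on a first-order approximation of this mixed loss rather than mixing inputs directly, so this is a simplification.

```python
import numpy as np

def fedmix_step(x_local, y_local, x_avg, y_avg, lam):
    """Mix a local example with an averaged example received from other
    clients, in the spirit of Mixup; 'lam' is the mixing weight (an
    assumed name)."""
    x = lam * x_local + (1.0 - lam) * x_avg
    y = lam * y_local + (1.0 - lam) * y_avg
    return x, y

# Toy usage: lam close to 1 keeps the update mostly local.
x_loc, y_loc = np.ones(4), np.array([1.0, 0.0])    # local input, one-hot label
x_avg, y_avg = np.zeros(4), np.array([0.5, 0.5])   # batch-averaged data/labels
x_mix, y_mix = fedmix_step(x_loc, y_loc, x_avg, y_avg, lam=0.9)
```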
arXiv Detail & Related papers (2021-07-01T06:14:51Z)
- FedProto: Federated Prototype Learning over Heterogeneous Devices [40.10333186507569]
We propose a novel federated prototype learning (FedProto) framework in which the devices and server communicate the class prototypes instead of the gradients.
FedProto aggregates the local prototypes collected from different devices, and then sends the global prototypes back to all devices to regularize the training of local models.
The training on each device aims to minimize the classification error on the local data while keeping the resulting local prototypes sufficiently close to the corresponding global ones.
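A minimal sketch consistent with the description above: the server averages per-class prototypes across devices, and each device adds a proximity term pulling its local prototypes toward the global ones. The function names and the weight 'lam' are assumptions, not the paper's notation.

```python
import numpy as np

def aggregate_prototypes(local_protos):
    """Server step: average per-class prototypes across devices.
    local_protos: list of {class_id: np.ndarray} dicts, one per device."""
    sums, counts = {}, {}
    for protos in local_protos:
        for c, p in protos.items():
            sums[c] = sums.get(c, 0.0) + p
            counts[c] = counts.get(c, 0) + 1
    return {c: sums[c] / counts[c] for c in sums}

def local_loss(ce_loss, local_proto, global_proto, lam):
    """Device objective: classification error plus a term keeping the
    local prototype close to the corresponding global one."""
    return ce_loss + lam * np.linalg.norm(local_proto - global_proto) ** 2

# Toy usage: two devices each report 2-D prototypes for their classes.
d1 = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}
d2 = {0: np.array([0.8, 0.2])}
g = aggregate_prototypes([d1, d2])          # class 0 averaged over two devices
loss = local_loss(0.35, d1[0], g[0], lam=1.0)
```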
arXiv Detail & Related papers (2021-05-01T13:21:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.