Exploiting Features and Logits in Heterogeneous Federated Learning
- URL: http://arxiv.org/abs/2210.15527v1
- Date: Thu, 27 Oct 2022 15:11:46 GMT
- Title: Exploiting Features and Logits in Heterogeneous Federated Learning
- Authors: Yun-Hin Chan, Edith C.-H. Ngai
- Abstract summary: Federated learning (FL) enables edge devices to collaboratively train a shared model.
We propose Felo, a novel data-free FL method that supports heterogeneous client models by managing features and logits.
In its extension Velo, the server additionally hosts a conditional VAE, which is trained on mid-level features and generates synthetic features according to the labels.
- Score: 0.2538209532048866
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to the rapid growth of IoT and artificial intelligence, deploying neural
networks on IoT devices is becoming increasingly crucial for edge intelligence.
Federated learning (FL) enables edge devices to collaboratively train a
shared model while keeping the training data local and private. However, FL
generally assumes that all edge devices train the same machine learning
model, which may be impractical
considering diverse device capabilities. For instance, less capable devices may
slow down the updating process because they struggle to handle large models
appropriate for ordinary devices. In this paper, we propose a novel data-free
FL method that supports heterogeneous client models by managing features and
logits, called Felo, and its extension with a conditional VAE deployed at the
server, called Velo. In Felo, the server averages the mid-level features and
logits uploaded by the clients according to their class labels, and the
resulting class-wise averages are sent back to further train the client models.
Unlike Felo, Velo deploys a conditional VAE at the server, which is trained on
the mid-level features and generates synthetic features conditioned on the
class labels. The clients optimize their models based on the synthetic
features and the average logits. We conduct experiments on two datasets and
show that our methods achieve satisfactory performance compared with
state-of-the-art methods.
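
To make the aggregation step concrete, here is a minimal sketch of Felo's class-wise averaging of features and logits, assuming a PyTorch setting and a simple list-of-tensors upload format (neither is specified in the abstract):

```python
# Sketch of Felo's server-side aggregation: mid-level features and logits
# uploaded by the clients are averaged per class label, and the class-wise
# averages are sent back to the clients for further training.
# Tensor shapes and the exchange format are assumptions.
import torch

def aggregate_by_class(client_uploads):
    """client_uploads: list of (features, logits, labels) tuples, one per
    client, with features [N, D], logits [N, C], labels [N]."""
    feats = torch.cat([f for f, _, _ in client_uploads])
    logits = torch.cat([z for _, z, _ in client_uploads])
    labels = torch.cat([y for _, _, y in client_uploads])

    avg_feats, avg_logits = {}, {}
    for c in labels.unique().tolist():
        mask = labels == c
        avg_feats[c] = feats[mask].mean(dim=0)    # class-wise average feature
        avg_logits[c] = logits[mask].mean(dim=0)  # class-wise average logits
    return avg_feats, avg_logits

# Example: two clients, 3 classes, feature dim 8
uploads = [(torch.randn(16, 8), torch.randn(16, 3), torch.randint(0, 3, (16,)))
           for _ in range(2)]
avg_f, avg_z = aggregate_by_class(uploads)
```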
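
Velo's server-side conditional VAE can be sketched in the same spirit. The layer sizes, one-hot label conditioning, and reparameterization below are illustrative assumptions, not the paper's actual architecture:

```python
# Sketch of Velo's server-side conditional VAE: it is trained on the clients'
# mid-level features and then generates synthetic features conditioned on
# class labels, which the clients use together with the average logits.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalVAE(nn.Module):
    def __init__(self, feat_dim=512, num_classes=10, latent_dim=64):
        super().__init__()
        self.num_classes = num_classes
        self.enc = nn.Linear(feat_dim + num_classes, 256)
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim + num_classes, 256), nn.ReLU(),
            nn.Linear(256, feat_dim),
        )

    def forward(self, feats, labels):
        y = F.one_hot(labels, self.num_classes).float()
        h = F.relu(self.enc(torch.cat([feats, y], dim=1)))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.dec(torch.cat([z, y], dim=1)), mu, logvar

    @torch.no_grad()
    def sample(self, labels):
        # Generate synthetic features for the requested labels.
        y = F.one_hot(labels, self.num_classes).float()
        z = torch.randn(labels.size(0), self.mu.out_features)
        return self.dec(torch.cat([z, y], dim=1))
```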
Related papers
- CRSFL: Cluster-based Resource-aware Split Federated Learning for Continuous Authentication [5.636155173401658]
Split Learning (SL) and Federated Learning (FL) have emerged as promising technologies for training a decentralized Machine Learning (ML) model.
We propose combining these technologies to address the continuous authentication challenge while protecting user privacy.
arXiv Detail & Related papers (2024-05-12T06:08:21Z)
- Efficient Asynchronous Federated Learning with Sparsification and Quantization [55.6801207905772]
Federated Learning (FL) is attracting increasing attention as a way to collaboratively train a machine learning model without transferring raw data.
FL generally exploits a parameter server and a large number of edge devices during the whole process of the model training.
We propose TEASQ-Fed to exploit edge devices to asynchronously participate in the training process by actively applying for tasks.
arXiv Detail & Related papers (2023-12-23T07:47:07Z)
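
A minimal sketch of the sparsification and quantization named in the TEASQ-Fed entry above; the top-k selection and int8 uniform quantization are illustrative choices, not the paper's exact scheme:

```python
# Keep only the k largest-magnitude entries of a model update, then quantize
# the result to 8 bits to shrink the upload. Parameters are illustrative.
import torch

def sparsify_topk(update, k):
    # Zero out all but the k largest-magnitude entries.
    flat = update.flatten()
    idx = flat.abs().topk(k).indices
    out = torch.zeros_like(flat)
    out[idx] = flat[idx]
    return out.view_as(update)

def quantize_int8(update):
    # Uniform symmetric quantization with a per-tensor scale.
    scale = update.abs().max() / 127.0
    q = torch.clamp((update / scale).round(), -127, 127).to(torch.int8)
    return q, scale  # dequantize with q.float() * scale

update = torch.randn(1000)
q, scale = quantize_int8(sparsify_topk(update, k=100))
```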
- Speed Up Federated Learning in Heterogeneous Environment: A Dynamic Tiering Approach [5.504000607257414]
Federated learning (FL) enables collaborative model training while keeping the training data decentralized and private.
One significant impediment to training a model using FL, especially large models, is the resource constraints of devices with heterogeneous computation and communication capacities as well as varying task sizes.
We propose the Dynamic Tiering-based Federated Learning (DTFL) system where slower clients dynamically offload part of the model to the server to alleviate resource constraints and speed up training.
arXiv Detail & Related papers (2023-12-09T19:09:19Z)
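
A minimal sketch of the offloading idea in the DTFL summary above, assuming a split at a fixed layer index; the tier logic that chooses the split point dynamically is omitted:

```python
# A slow client runs only the first few layers locally and sends the
# intermediate activations to the server, which runs the rest. The model
# architecture and split point are illustrative assumptions.
import torch
import torch.nn as nn

full_model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

def split_model(model, cut):
    # Client keeps layers [0, cut); the server hosts layers [cut, end).
    return model[:cut], model[cut:]

client_part, server_part = split_model(full_model, cut=2)
x = torch.randn(8, 32)
activations = client_part(x)        # computed on the (slow) client
logits = server_part(activations)   # offloaded to the server
```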
- Knowledge-Enhanced Semi-Supervised Federated Learning for Aggregating Heterogeneous Lightweight Clients in IoT [34.128674870180596]
Federated learning (FL) enables multiple clients to train models collaboratively without sharing local data.
We propose pFedKnow, which generates lightweight personalized client models via neural network pruning techniques to reduce communication cost.
Experiment results on both image and text datasets show that the proposed pFedKnow outperforms state-of-the-art baselines.
arXiv Detail & Related papers (2023-03-05T13:19:10Z)
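
The pruning step named in the pFedKnow summary above might look like the following global magnitude-pruning sketch; the sparsity level and the PyTorch pruning utilities are illustrative choices, not the paper's method:

```python
# Globally prune the smallest-magnitude weights so that a lightweight,
# cheaper-to-communicate personalized model remains. Architecture and the
# 80% sparsity level are illustrative assumptions.
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
params_to_prune = [(m, "weight") for m in model if isinstance(m, nn.Linear)]
prune.global_unstructured(
    params_to_prune, pruning_method=prune.L1Unstructured, amount=0.8
)
# Only the surviving 20% of weights need to be communicated.
```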
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
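
The contrastive online-distillation idea in the entry above can be sketched as an InfoNCE-style loss between a client's representations and peer representations of the same samples; the exact formulation here is an assumption:

```python
# Pull a client's representation of each sample toward the peer (smashed)
# representation of the same sample and push it away from other samples'
# representations. Temperature and normalization are illustrative choices.
import torch
import torch.nn.functional as F

def contrastive_distill_loss(local_repr, peer_repr, temperature=0.1):
    # local_repr, peer_repr: [N, D] representations of the same N samples.
    local = F.normalize(local_repr, dim=1)
    peer = F.normalize(peer_repr, dim=1)
    sim = local @ peer.t() / temperature   # [N, N] similarity matrix
    targets = torch.arange(local.size(0))  # positives on the diagonal
    return F.cross_entropy(sim, targets)
```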
- Enhancing Efficiency in Multidevice Federated Learning through Data Selection [11.67484476827617]
Federated learning (FL) in multidevice environments creates new opportunities to learn from a vast and diverse amount of private data.
In this paper, we develop an FL framework that incorporates on-device data selection on resource-constrained devices.
We show that our framework achieves 19% higher accuracy and 58% lower latency compared to the baseline FL without our data-selection strategies.
arXiv Detail & Related papers (2022-11-08T11:39:17Z)
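
One plausible reading of on-device data selection is loss-based sample scoring, sketched below; the scoring rule is an assumption, not the paper's criterion:

```python
# Keep only the samples the current model finds most informative (here,
# highest loss), so a constrained device trains on a smaller, more valuable
# subset of its data.
import torch
import torch.nn.functional as F

def select_samples(model, x, y, budget):
    with torch.no_grad():
        losses = F.cross_entropy(model(x), y, reduction="none")
    keep = losses.topk(budget).indices  # highest-loss samples first
    return x[keep], y[keep]
```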
- Online Data Selection for Federated Learning with Limited Storage [53.46789303416799]
Federated Learning (FL) has been proposed to achieve distributed machine learning among networked devices.
However, the impact of on-device storage on the performance of FL remains unexplored.
In this work, we take the first step to consider the online data selection for FL with limited on-device storage.
arXiv Detail & Related papers (2022-09-01T03:27:33Z)
- Multi-Edge Server-Assisted Dynamic Federated Learning with an Optimized Floating Aggregation Point [51.47520726446029]
Cooperative edge learning (CE-FL) is a distributed machine learning architecture.
We model the processes involved in CE-FL and conduct an analytical study of its training.
We show the effectiveness of our framework with the data collected from a real-world testbed.
arXiv Detail & Related papers (2022-03-26T00:41:57Z)
- Federated Learning-based Active Authentication on Mobile Devices [98.23904302910022]
User active authentication on mobile devices aims to learn a model that can correctly recognize the enrolled user based on device sensor information.
We propose a novel user active authentication training scheme, termed Federated Active Authentication (FAA).
We show that existing FL/SL methods are suboptimal for FAA because they rely on the data being distributed homogeneously.
arXiv Detail & Related papers (2021-04-14T22:59:08Z)
- Ensemble Distillation for Robust Model Fusion in Federated Learning [72.61259487233214]
Federated Learning (FL) is a machine learning setting where many devices collaboratively train a machine learning model.
In most current training schemes, the central model is refined by averaging the parameters of the server model and the updated parameters from the clients.
We propose ensemble distillation for model fusion, i.e., training the central classifier on unlabeled data using the outputs of the client models (see the sketch below).
arXiv Detail & Related papers (2020-06-12T14:49:47Z)
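
As a concrete reading of this fusion step, here is a minimal sketch in which the central model is distilled on unlabeled data toward the averaged client predictions; the KL objective and temperature are illustrative assumptions:

```python
# After parameter averaging, refine the central (server) model on unlabeled
# data to match the averaged softened predictions of the client models.
import torch
import torch.nn.functional as F

def distill_step(server_model, client_models, unlabeled_x, optimizer, T=1.0):
    with torch.no_grad():
        # Ensemble teacher: average the clients' softened predictions.
        teacher = torch.stack(
            [F.softmax(m(unlabeled_x) / T, dim=1) for m in client_models]
        ).mean(dim=0)
    student_logp = F.log_softmax(server_model(unlabeled_x) / T, dim=1)
    loss = F.kl_div(student_logp, teacher, reduction="batchmean") * T * T
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```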