Filling the Missing: Exploring Generative AI for Enhanced Federated
Learning over Heterogeneous Mobile Edge Devices
- URL: http://arxiv.org/abs/2310.13981v2
- Date: Sun, 29 Oct 2023 02:34:47 GMT
- Title: Filling the Missing: Exploring Generative AI for Enhanced Federated
Learning over Heterogeneous Mobile Edge Devices
- Authors: Peichun Li, Hanwen Zhang, Yuan Wu, Liping Qian, Rong Yu, Dusit Niyato,
Xuemin Shen
- Abstract summary: We propose a generative AI-empowered federated learning framework to address these challenges by leveraging the idea of FIlling the MIssing (FIMI) portion of local data.
Experiment results demonstrate that FIMI can save up to 50% of the device-side energy to achieve the target global test accuracy.
- Score: 72.61177465035031
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Distributed Artificial Intelligence (AI) model training over mobile edge
networks encounters significant challenges due to the data and resource
heterogeneity of edge devices. The former hampers the convergence rate of the
global model, while the latter diminishes the devices' resource utilization
efficiency. In this paper, we propose a generative AI-empowered federated
learning framework that addresses these challenges by leveraging the idea of FIlling the
MIssing (FIMI) portion of local data. Specifically, FIMI can be considered as a
resource-aware data augmentation method that effectively mitigates the data
heterogeneity while ensuring efficient FL training. We first quantify the
relationship between the training data amount and the learning performance. We
then study the FIMI optimization problem with the objective of minimizing the
device-side overall energy consumption subject to required learning performance
constraints. The decomposition-based analysis and the cross-entropy searching
method are leveraged to derive the solution, where each device is assigned
suitable AI-synthesized data and resource utilization policy. Experiment
results demonstrate that FIMI can save up to 50% of the device-side energy to
achieve the target global test accuracy in comparison with the existing
methods. Meanwhile, FIMI can significantly enhance the converged global
accuracy under non-independent-and-identically-distributed (non-IID) data.
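The abstract names a cross-entropy searching method for assigning each device a synthetic-data amount and resource policy, but gives no formulas. The sketch below is a generic cross-entropy-method optimizer under illustrative, hypothetical models: `device_energy` (linear in synthetic samples) and `predicted_accuracy` (saturating in total data) are stand-ins, not the paper's actual energy or learning-performance models.

```python
import random

# Hypothetical surrogate models -- NOT the paper's formulations.
def device_energy(n_syn, cost_per_sample):
    # Energy assumed linear in the amount of AI-synthesized data trained on.
    return cost_per_sample * n_syn

def predicted_accuracy(alloc):
    # Diminishing returns in total training data (illustrative only).
    total = sum(alloc)
    return 1.0 - 1.0 / (1.0 + 0.01 * total)

def cross_entropy_search(costs, target_acc, max_per_device=200,
                         pop=200, elite_frac=0.1, iters=30, seed=0):
    """Cross-entropy method over per-device synthetic-data amounts:
    sample allocations from a Gaussian, keep the elite (lowest-energy
    feasible) fraction, and refit the sampling distribution to it."""
    rng = random.Random(seed)
    k = len(costs)
    mu = [max_per_device / 2.0] * k
    sigma = [max_per_device / 2.0] * k
    best, best_e = None, float("inf")
    for _ in range(iters):
        scored = []
        for _ in range(pop):
            alloc = [min(max_per_device, max(0, int(rng.gauss(mu[i], sigma[i]))))
                     for i in range(k)]
            energy = sum(device_energy(a, c) for a, c in zip(alloc, costs))
            # Soft-penalize allocations that miss the accuracy constraint.
            if predicted_accuracy(alloc) < target_acc:
                energy += 1e6
            scored.append((energy, alloc))
            if energy < best_e:
                best_e, best = energy, alloc
        scored.sort(key=lambda s: s[0])
        elite = [a for _, a in scored[:max(1, int(elite_frac * pop))]]
        # Refit mean and std of the sampling distribution to the elite set.
        for i in range(k):
            vals = [e[i] for e in elite]
            mu[i] = sum(vals) / len(vals)
            var = sum((v - mu[i]) ** 2 for v in vals) / len(vals)
            sigma[i] = max(var ** 0.5, 1.0)
    return best, best_e
```

For example, `cross_entropy_search([1.0, 2.0, 0.5], target_acc=0.7)` returns a per-device allocation that favors the cheapest device while meeting the accuracy target. The penalty-plus-elite-refit structure is the standard cross-entropy method; the decomposition analysis the paper pairs it with is not reproduced here.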
Related papers
- Generative AI-Powered Plugin for Robust Federated Learning in Heterogeneous IoT Networks [3.536605202672355]
Federated learning enables edge devices to collaboratively train a global model while maintaining data privacy by keeping data localized.
We propose a novel plugin for federated optimization techniques that approximates Non-IID data distributions to IID through generative AI-enhanced data augmentation and balanced sampling strategy.
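The summary above describes approximating non-IID distributions to IID via generative augmentation and balanced sampling. A minimal sketch of the balancing idea, assuming a simple per-class fill-to-target policy (the helper name and uniform target are illustrative, not the plugin's actual algorithm):

```python
def synthetic_fill_plan(class_counts, target_per_class=None):
    """Return how many synthetic samples per class a generator would
    need to supply so a client's label histogram becomes uniform.
    By default, fill every class up to the most frequent class."""
    if target_per_class is None:
        target_per_class = max(class_counts.values())
    return {c: max(0, target_per_class - n) for c, n in class_counts.items()}
```

For example, a client holding 90 "cat" and 10 "dog" samples would request 80 synthetic "dog" samples, after which uniform sampling over the combined set approximates an IID draw.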
arXiv Detail & Related papers (2024-10-31T11:13:47Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
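The global/personalized split described above can be sketched as server-side aggregation that averages only the shared parameters, leaving each device's personalized part untouched. This is a minimal illustration with scalar parameters in plain dicts; the function name and weighting are assumptions, not the paper's implementation.

```python
def aggregate_shared(client_models, shared_keys, weights=None):
    """Federated averaging restricted to the shared (global) part.
    client_models: list of dicts mapping parameter name -> value.
    shared_keys: names of the globally shared parameters; everything
    else is treated as personalized and left per-client."""
    k = len(client_models)
    if weights is None:
        weights = [1.0 / k] * k  # uniform averaging
    avg = {}
    for key in shared_keys:
        avg[key] = sum(w * m[key] for w, m in zip(weights, client_models))
    # Broadcast: each client overwrites only its shared part.
    for m in client_models:
        m.update(avg)
    return client_models
```

With two clients holding `{"shared.w": 1.0, "head.w": 5.0}` and `{"shared.w": 3.0, "head.w": 7.0}`, both end the round with `shared.w == 2.0` while their `head.w` values stay personalized.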
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- Analysis and Optimization of Wireless Federated Learning with Data Heterogeneity [72.85248553787538]
This paper focuses on performance analysis and optimization for wireless FL, considering data heterogeneity, combined with wireless resource allocation.
We formulate the loss function minimization problem under constraints on long-term energy consumption and latency, and jointly optimize client scheduling, resource allocation, and the number of local training epochs (CRE).
Experiments on real-world datasets demonstrate that the proposed algorithm outperforms other benchmarks in terms of the learning accuracy and energy consumption.
arXiv Detail & Related papers (2023-08-04T04:18:01Z)
- Multi-Source to Multi-Target Decentralized Federated Domain Adaptation [15.681197161658835]
In this paper, we focus on varying quantities/distributions of labeled and unlabeled data across devices.
We develop a decentralized federated domain adaptation methodology which considers the transfer of ML models from devices with high-quality labeled data to devices with low-quality or unlabeled data.
Our methodology, Source-Target Determination and Link Formation (ST-LF), optimizes both (i) classification of devices into sources and targets and (ii) source-target link formation.
arXiv Detail & Related papers (2023-04-24T19:57:13Z)
- DRFLM: Distributionally Robust Federated Learning with Inter-client Noise via Local Mixup [58.894901088797376]
Federated learning has emerged as a promising approach for training a global model using data from multiple organizations without leaking their raw data.
We propose a general framework to solve the above two challenges simultaneously.
We provide comprehensive theoretical analysis including robustness analysis, convergence analysis, and generalization ability.
arXiv Detail & Related papers (2022-04-16T08:08:29Z)
- Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraint.
We propose a data-free knowledge distillation method to fine-tune the global model in the server (FedFTG).
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
arXiv Detail & Related papers (2022-03-17T11:18:17Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Training Keyword Spotting Models on Non-IID Data with Federated Learning [6.784774147680782]
We show that a production-quality keyword-spotting model can be trained on-device using federated learning.
To overcome the algorithmic constraints associated with fitting on-device data, we conduct thorough empirical studies of optimization algorithms.
Given zero visibility into on-device data, we explore teacher-student training as a way to label examples.
arXiv Detail & Related papers (2020-05-21T00:53:33Z)
- Differentially Private Federated Learning for Resource-Constrained Internet of Things [24.58409432248375]
Federated learning is capable of analyzing large amounts of data from a distributed set of smart devices without requiring them to upload their data to a central location.
This paper proposes a novel federated learning framework called DP-PASGD for training a machine learning model efficiently from the data stored across resource-constrained smart devices in IoT.
arXiv Detail & Related papers (2020-03-28T04:32:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.