HFedMS: Heterogeneous Federated Learning with Memorable Data Semantics
in Industrial Metaverse
- URL: http://arxiv.org/abs/2211.03300v1
- Date: Mon, 7 Nov 2022 04:33:24 GMT
- Title: HFedMS: Heterogeneous Federated Learning with Memorable Data Semantics
in Industrial Metaverse
- Authors: Shenglai Zeng, Zonghang Li, Hongfang Yu, Zhihao Zhang, Long Luo, Bo
Li, Dusit Niyato
- Abstract summary: This paper presents HFedMS for incorporating practical FL into the emerging Industrial Metaverse.
It reduces data heterogeneity through dynamic grouping and training mode conversion.
Then, it compensates for the forgotten knowledge by fusing compressed historical data semantics.
Experiments have been conducted on the streamed non-i.i.d. FEMNIST dataset using 368 simulated devices.
- Score: 49.1501082763252
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL), as a rapidly evolving privacy-preserving
collaborative machine learning paradigm, is a promising approach to enable edge
intelligence in the emerging Industrial Metaverse. Even though many successful
use cases have proved the feasibility of FL in theory, in the industrial
practice of Metaverse, the problems of non-independent and identically
distributed (non-i.i.d.) data, learning forgetting caused by streaming
industrial data, and scarce communication bandwidth remain key barriers to
realizing practical FL. Facing these three challenges simultaneously, this
paper presents a high-performance and efficient system named HFedMS for
incorporating practical FL into the Industrial Metaverse. HFedMS reduces data
heterogeneity through dynamic grouping and training mode conversion (Dynamic
Sequential-to-Parallel Training, STP). Then, it compensates for the forgotten
knowledge by fusing compressed historical data semantics and calibrates
classifier parameters (Semantic Compression and Compensation, SCC). Finally,
the network parameters of the feature extractor and classifier are synchronized
at different frequencies (Layer-wise Alternative Synchronization Protocol, LASP)
to reduce communication costs. These techniques make FL more adaptable to the
heterogeneous streaming data continuously generated by industrial equipment,
and are also more efficient in communication than traditional methods (e.g.,
Federated Averaging). Extensive experiments have been conducted on the streamed
non-i.i.d. FEMNIST dataset using 368 simulated devices. Numerical results show
that HFedMS improves classification accuracy by at least 6.4% compared with
8 benchmarks and saves both the overall runtime and transfer bytes by up to
98%, proving its superiority in precision and efficiency.
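To make STP concrete, here is a minimal sketch of the sequential-to-parallel pattern, assuming a toy linear model and a hypothetical grouping heuristic (pairing dissimilar clients so each group's pooled data approximates the global distribution); the paper's actual grouping criterion and hyperparameters may differ:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.05, epochs=1):
    """One client's local training: plain least-squares gradient steps."""
    for _ in range(epochs):
        grad = X.T @ (X @ weights - y) / len(y)
        weights = weights - lr * grad
    return weights

# Toy non-i.i.d. clients: each holds features centred in a different region.
clients = []
w_true = np.arange(5, dtype=float)
for c in range(8):
    X = rng.normal(loc=(c % 4) * 0.5, size=(32, 5))
    y = X @ w_true + 0.1 * rng.normal(size=32)
    clients.append((X, y))

# Dynamic grouping (assumed heuristic): pair dissimilar clients so that each
# group's pooled data looks closer to the global distribution.
means = np.array([X.mean() for X, _ in clients])
order = np.argsort(means)
groups = [[order[i], order[-1 - i]] for i in range(len(order) // 2)]

global_w = np.zeros(5)
for _ in range(50):
    group_models = []
    for g in groups:
        w = global_w.copy()
        for cid in g:                         # sequential training inside a group
            w = local_update(w, *clients[cid])
        group_models.append(w)
    global_w = np.mean(group_models, axis=0)  # parallel FedAvg-style aggregation

print("learned weights:", np.round(global_w, 2))
```

Sequential passing inside a group lets each model copy visit several local distributions per round, which is what counteracts the non-i.i.d. skew, while the parallel step across groups keeps wall-clock time bounded.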
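In the same spirit, a sketch of the SCC idea, assuming the compressed historical semantics are per-class feature prototypes (running mean feature vectors); the paper's actual compression may store richer statistics:

```python
import numpy as np

rng = np.random.default_rng(1)
NUM_CLASSES, FEAT_DIM = 4, 8

# Compressed historical semantics: one running mean feature vector per class.
proto_sum = np.zeros((NUM_CLASSES, FEAT_DIM))
proto_cnt = np.zeros(NUM_CLASSES)

def update_semantics(feats, labels):
    for f, c in zip(feats, labels):
        proto_sum[c] += f
        proto_cnt[c] += 1

def calibrate_classifier(W, b, lr=0.5, steps=50):
    """Refit the linear head on stored prototypes so that classes which
    have left the stream are not forgotten."""
    seen = proto_cnt > 0
    protos = proto_sum[seen] / proto_cnt[seen][:, None]
    targets = np.flatnonzero(seen)
    for _ in range(steps):
        logits = protos @ W.T + b
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        p[np.arange(len(targets)), targets] -= 1.0  # softmax cross-entropy gradient
        W -= lr * (p.T @ protos) / len(targets)
        b -= lr * p.mean(axis=0)
    return W, b

# Streaming batches: early batches contain only classes 0/1, later only 2/3.
W, b = np.zeros((NUM_CLASSES, FEAT_DIM)), np.zeros(NUM_CLASSES)
for t in range(10):
    labels = rng.choice([0, 1] if t < 5 else [2, 3], size=16)
    feats = rng.normal(size=(16, FEAT_DIM)) + labels[:, None]  # class-shifted features
    update_semantics(feats, labels)
    W, b = calibrate_classifier(W, b)

print("per-class weight norms:", np.round(np.linalg.norm(W, axis=1), 2))
```

After the stream switches to classes 2/3, the replayed prototypes keep the classifier rows for classes 0/1 from collapsing, which is the compensation effect SCC aims for.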
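Finally, the communication saving of LASP can be illustrated with a toy traffic model in which the small classifier head is synchronized every round and the large feature extractor only periodically; the layer sizes and the 5-round period below are made-up numbers, not the paper's settings:

```python
# Hypothetical split: a large feature extractor and a small classifier head.
EXTRACTOR_SIZE, HEAD_SIZE = 100_000, 1_000
EXTRACTOR_PERIOD = 5                  # sync the extractor every 5th round (assumed)

def synced_blocks(rnd):
    """Parameter blocks exchanged in a given round under alternating sync."""
    blocks = ["classifier"]           # small head: every round
    if rnd % EXTRACTOR_PERIOD == 0:
        blocks.append("extractor")    # big body: only periodically
    return blocks

bytes_lasp = bytes_full = 0
for rnd in range(100):
    sent = sum(EXTRACTOR_SIZE if blk == "extractor" else HEAD_SIZE
               for blk in synced_blocks(rnd))
    bytes_lasp += 4 * sent            # float32 parameters
    bytes_full += 4 * (EXTRACTOR_SIZE + HEAD_SIZE)

print(f"traffic relative to full synchronization: {bytes_lasp / bytes_full:.1%}")
```

The intuition is that the classifier drifts fastest on non-i.i.d. streams while the extractor changes slowly, so syncing them at different frequencies cuts transfer bytes at little cost.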
Related papers
- FedFT: Improving Communication Performance for Federated Learning with Frequency Space Transformation [0.361593752383807]
We introduce FedFT (federated frequency-space transformation), a simple yet effective methodology for communicating model parameters in a Federated Learning setting.
FedFT uses the Discrete Cosine Transform (DCT) to represent model parameters in frequency space, enabling efficient compression and reducing communication overhead (a minimal DCT sketch appears after this list).
We demonstrate the generalisability of the FedFT methodology on four datasets using comparative studies with three state-of-the-art FL baselines.
arXiv Detail & Related papers (2024-09-08T23:05:35Z)
- Lightweight Industrial Cohorted Federated Learning for Heterogeneous Assets [0.0]
Federated Learning (FL) is the most widely adopted collaborative learning approach for training decentralized Machine Learning (ML) models.
However, because existing FL tasks take high data similarity or homogeneity for granted, FL is still not specifically designed for the industrial setting.
We propose a Lightweight Industrial Cohorted FL (LICFL) algorithm that uses model parameters for cohorting without any additional on-edge (client-level) computations and communications.
arXiv Detail & Related papers (2024-07-25T12:48:56Z)
- FLASH: Federated Learning Across Simultaneous Heterogeneities [54.80435317208111]
FLASH (Federated Learning Across Simultaneous Heterogeneities) is a lightweight and flexible client selection algorithm.
It outperforms state-of-the-art FL frameworks under extensive sources of heterogeneity, achieving substantial and consistent improvements over state-of-the-art baselines.
arXiv Detail & Related papers (2024-02-13T20:04:39Z)
- Analysis and Optimization of Wireless Federated Learning with Data Heterogeneity [72.85248553787538]
This paper focuses on performance analysis and optimization for wireless FL, considering data heterogeneity, combined with wireless resource allocation.
We formulate a loss function minimization problem under constraints on long-term energy consumption and latency, and jointly optimize client scheduling, resource allocation, and the number of local training epochs (CRE).
Experiments on real-world datasets demonstrate that the proposed algorithm outperforms other benchmarks in terms of learning accuracy and energy consumption.
arXiv Detail & Related papers (2023-08-04T04:18:01Z)
- Online Data Selection for Federated Learning with Limited Storage [53.46789303416799]
Federated Learning (FL) has been proposed to achieve distributed machine learning among networked devices.
However, the impact of limited on-device storage on FL performance remains unexplored.
In this work, we take the first step to consider the online data selection for FL with limited on-device storage.
arXiv Detail & Related papers (2022-09-01T03:27:33Z)
- Data Heterogeneity-Robust Federated Learning via Group Client Selection in Industrial IoT [57.67687126339891]
FedGS is a hierarchical cloud-edge-end FL framework for 5G-empowered industries.
Taking advantage of naturally clustered factory devices, FedGS uses a gradient-based binary permutation algorithm to select a subset of devices within each factory and build homogeneous super nodes for training.
Experiments show that FedGS improves accuracy by 3.5% and reduces training rounds by 59% on average.
arXiv Detail & Related papers (2022-02-03T10:48:17Z)
- Federated Dynamic Sparse Training: Computing Less, Communicating Less, Yet Learning Better [88.28293442298015]
Federated learning (FL) enables distribution of machine learning workloads from the cloud to resource-limited edge devices.
We develop, implement, and experimentally validate a novel FL framework termed Federated Dynamic Sparse Training (FedDST).
FedDST is a dynamic process that extracts and trains sparse sub-networks from the target full network (see the top-k masking sketch after this list).
arXiv Detail & Related papers (2021-12-18T02:26:38Z)
- Towards Heterogeneous Clients with Elastic Federated Learning [45.2715985913761]
Federated learning involves training machine learning models over devices or data silos, such as edge processors or data warehouses, while keeping the data local.
We propose Elastic Federated Learning (EFL), an unbiased algorithm to tackle the heterogeneity in the system.
It is an efficient and effective algorithm that compresses both upstream and downstream communications.
arXiv Detail & Related papers (2021-06-17T12:30:40Z)
- Efficient Ring-topology Decentralized Federated Learning with Deep Generative Models for Industrial Artificial Intelligent [13.982904025739606]
We propose a ring-topology based decentralized federated learning scheme for Deep Generative Models (DGMs).
Our RDFL scheme provides communication efficiency and maintains training performance, boosting DGMs in target IIoT tasks.
In addition, the InterPlanetary File System (IPFS) is introduced to further improve communication efficiency and FL security.
arXiv Detail & Related papers (2021-04-15T08:09:54Z)
- Coded Computing for Federated Learning at the Edge [3.385874614913973]
Federated Learning (FL) enables training a global model from data generated locally at the client nodes, without moving client data to a centralized server.
Recent work proposes to mitigate stragglers and speed up training for linear regression tasks by assigning redundant computations at the mobile edge computing (MEC) server.
We develop CodedFedL, which addresses the difficult task of extending coded federated learning (CFL) to distributed non-linear regression and classification problems with multi-output labels.
arXiv Detail & Related papers (2020-07-07T08:20:47Z)
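As referenced in the FedFT entry above, here is a minimal sketch of shipping truncated DCT coefficients instead of raw parameters; the smoothed stand-in weight vector and the 4x truncation ratio are illustrative assumptions, not FedFT's actual configuration:

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(3)

# A smoothed vector standing in for one flattened parameter tensor
# (trained weights are typically far more compressible than raw noise).
weights = np.convolve(rng.normal(size=4096), np.ones(64) / 64, mode="same")

# Transform to frequency space and keep only the low-frequency quarter.
coeffs = dct(weights, norm="ortho")
keep = len(coeffs) // 4               # assumed 4x compression ratio
compressed = coeffs[:keep]            # this is what would be transmitted

# The receiver zero-pads the truncated spectrum and inverts the transform.
restored = idct(np.pad(compressed, (0, len(coeffs) - keep)), norm="ortho")

err = np.linalg.norm(weights - restored) / np.linalg.norm(weights)
print(f"kept {keep}/{len(coeffs)} coefficients, relative error {err:.3f}")
```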
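And for the FedDST entry, a rough sketch of extracting a sparse sub-network with a simple top-k magnitude mask; FedDST's dynamic mask adjustment (pruning and regrowing during training) is omitted, and the 80% sparsity is an assumed figure:

```python
import numpy as np

rng = np.random.default_rng(4)

def top_k_mask(weights, sparsity):
    """Keep only the largest-magnitude fraction of weights."""
    k = int(weights.size * (1 - sparsity))
    thresh = np.sort(np.abs(weights).ravel())[-k]
    return (np.abs(weights) >= thresh).astype(np.float32)

full = rng.normal(size=(64, 64))
mask = top_k_mask(full, sparsity=0.8)
sparse_weights = full * mask          # the sub-network that is trained and sent

sent = int(mask.sum())
print(f"transmit {sent}/{full.size} weights ({sent / full.size:.0%})")
```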