Towards Communication-efficient and Attack-Resistant Federated Edge
Learning for Industrial Internet of Things
- URL: http://arxiv.org/abs/2012.04436v1
- Date: Tue, 8 Dec 2020 14:11:32 GMT
- Title: Towards Communication-efficient and Attack-Resistant Federated Edge
Learning for Industrial Internet of Things
- Authors: Yi Liu, Ruihui Zhao, Jiawen Kang, Abdulsalam Yassine, Dusit Niyato,
Jialiang Peng
- Abstract summary: Federated Edge Learning (FEL) allows edge nodes to train a global deep learning model collaboratively for edge computing in the Industrial Internet of Things (IIoT).
FEL faces two critical challenges: communication overhead and data privacy.
We propose a communication-efficient and privacy-enhanced asynchronous FEL framework for edge computing in IIoT.
- Score: 40.20432511421245
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Edge Learning (FEL) allows edge nodes to train a global deep
learning model collaboratively for edge computing in the Industrial Internet of
Things (IIoT), which significantly promotes the development of Industry 4.0.
However, FEL faces two critical challenges: communication overhead and data
privacy. FEL suffers from expensive communication overhead when training
large-scale multi-node models. Furthermore, due to the vulnerability of FEL to
gradient leakage and label-flipping attacks, the training process of the global
model is easily compromised by adversaries. To address these challenges, we
propose a communication-efficient and privacy-enhanced asynchronous FEL
framework for edge computing in IIoT. First, we introduce an asynchronous model
update scheme to reduce the time that edge nodes spend waiting for global
model aggregation. Second, we propose an asynchronous local differential
privacy mechanism, which improves communication efficiency and mitigates
gradient leakage attacks by adding well-designed noise to the gradients of edge
nodes. Third, we design a cloud-side malicious node detection mechanism to
detect malicious nodes by testing the quality of their local models. Such a
mechanism prevents malicious nodes from participating in training, thereby
mitigating label-flipping attacks. Extensive experimental studies on two
real-world datasets demonstrate that the proposed framework not only improves
communication efficiency but also mitigates malicious attacks, while achieving
accuracy comparable to that of traditional FEL frameworks.
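The abstract only names the defence components, so the following minimal Python/NumPy sketch illustrates how they could fit together under stated assumptions: edge nodes clip and perturb gradients with Gaussian noise before uploading (a standard local differential privacy construction), the cloud scores each uploaded local model on a small held-out test set and discards low-quality uploads as suspected label-flipping, and accepted uploads are merged asynchronously with a staleness-dependent weight. The function names, noise calibration, threshold, and mixing rule are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

# --- Edge-node side: local differential privacy on gradients (illustrative) ---
def perturb_gradient(grad, clip_norm=1.0, epsilon=2.0, delta=1e-5):
    """Clip the gradient to a fixed L2 norm and add Gaussian noise.

    The noise scale follows the standard Gaussian-mechanism calibration
    sigma = clip_norm * sqrt(2 * ln(1.25 / delta)) / epsilon; the paper's
    asynchronous LDP mechanism may calibrate its noise differently.
    """
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped + np.random.normal(0.0, sigma, size=grad.shape)

# --- Cloud side: quality-based malicious node detection (illustrative) ---
def accept_update(evaluate_fn, local_model, acc_threshold=0.5):
    """Score the uploaded local model on a small held-out test set and reject
    it (treat the node as a suspected label-flipper) if accuracy is too low."""
    return evaluate_fn(local_model) >= acc_threshold

# --- Cloud side: asynchronous merging of accepted updates (illustrative) ---
def async_merge(global_model, local_model, staleness, mixing=0.5):
    """Fold an accepted local model into the global model as soon as it
    arrives, down-weighting stale uploads instead of waiting for all nodes."""
    alpha = mixing / (1.0 + staleness)
    return (1.0 - alpha) * global_model + alpha * local_model
```

In this sketch the cloud never blocks on slow edge nodes: each accepted upload is merged immediately with a staleness-dependent weight, which is one common way asynchronous FL trades update freshness against stability.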
Related papers
- Heterogeneity-Aware Resource Allocation and Topology Design for Hierarchical Federated Edge Learning [9.900317349372383]
Federated Learning (FL) provides a privacy-preserving framework for training machine learning models on mobile edge devices.
Traditional FL algorithms, e.g., FedAvg, impose a heavy communication workload on these devices.
We propose a two-tier HFEL system, where edge devices are connected to edge servers and edge servers are interconnected through peer-to-peer (P2P) edge backhauls.
Our goal is to enhance the training efficiency of the HFEL system through strategic resource allocation and topology design.
arXiv Detail & Related papers (2024-09-29T01:48:04Z)
- SHFL: Secure Hierarchical Federated Learning Framework for Edge Networks [26.482930943380918]
Federated Learning (FL) is a distributed machine learning paradigm designed for privacy-sensitive applications that run on resource-constrained devices with non-Independently and Identically Distributed (non-IID) data.
Traditional FL frameworks adopt the client-server model with a single-level aggregation process, where the server builds the global model by aggregating all trained local models received from client devices.
arXiv Detail & Related papers (2024-09-23T14:38:20Z)
- ACCESS-FL: Agile Communication and Computation for Efficient Secure Aggregation in Stable Federated Learning Networks [26.002975401820887]
Federated Learning (FL) is a distributed learning framework designed for privacy-aware applications.
Traditional FL approaches risk exposing sensitive client data when plain model updates are transmitted to the server.
Google's Secure Aggregation (SecAgg) protocol addresses this threat by employing a double-masking technique; see the masking sketch after this list.
We propose ACCESS-FL, a communication-and-computation-efficient secure aggregation method.
arXiv Detail & Related papers (2024-09-03T09:03:38Z)
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly with a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), which is a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
- FeDiSa: A Semi-asynchronous Federated Learning Framework for Power System Fault and Cyberattack Discrimination [1.0621485365427565]
This paper proposes FeDiSa, a novel Semi-asynchronous Federated learning framework for power system faults and cyberattack Discrimination.
Experiments on the proposed framework using publicly available industrial control systems datasets reveal superior attack detection accuracy whilst preserving data confidentiality and minimizing the adverse effects of communication latency and stragglers.
arXiv Detail & Related papers (2023-03-28T13:34:38Z)
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
- Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraint.
We propose a data-free knowledge distillation method to fine-tune the global model in the server (FedFTG).
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
arXiv Detail & Related papers (2022-03-17T11:18:17Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Semi-Decentralized Federated Edge Learning with Data and Device Heterogeneity [6.341508488542275]
Federated edge learning (FEEL) has attracted much attention as a privacy-preserving paradigm to effectively incorporate the distributed data at the network edge for training deep learning models.
In this paper, we investigate a novel framework of FEEL, namely semi-decentralized federated edge learning (SD-FEEL), where multiple edge servers are employed to collectively coordinate a large number of client nodes.
By exploiting the low-latency communication among edge servers for efficient model sharing, SD-FEEL can incorporate more training data, while enjoying much lower latency compared with conventional federated learning.
arXiv Detail & Related papers (2021-12-20T03:06:08Z)
- Federated Learning with Unreliable Clients: Performance Analysis and Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low-quality models could be uploaded to the aggregator server by unreliable clients, leading to a degradation or even a collapse of training.
We model these unreliable behaviors of clients and propose a defensive mechanism to mitigate such a security risk.
arXiv Detail & Related papers (2021-05-10T08:02:27Z)
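The ACCESS-FL entry above refers to SecAgg's double-masking technique; the sketch below (plain NumPy, all names illustrative) shows only the pairwise-masking half of that idea: every pair of clients derives a shared seed, expands it into a mask, and the two clients add it with opposite signs, so the masks cancel when the server sums the uploads even though each individual upload looks random. The self-masks, Shamir secret sharing of seeds, and dropout-recovery steps of the full protocol are omitted.

```python
import numpy as np

def prg_mask(seed, shape):
    # Stand-in for the PRG expansion of a pairwise agreed key; in the real
    # protocol the seed comes from a Diffie-Hellman key agreement.
    return np.random.default_rng(seed).standard_normal(shape)

def mask_update(client_id, update, pair_seeds):
    """Add one mask per peer; the sign convention makes the masks cancel in the sum."""
    masked = update.astype(float)
    for peer_id, seed in pair_seeds.items():
        masked += prg_mask(seed, update.shape) if client_id < peer_id else -prg_mask(seed, update.shape)
    return masked

# Toy run with three clients and one shared seed per client pair.
rng = np.random.default_rng(0)
updates = {cid: rng.standard_normal(4) for cid in (1, 2, 3)}
pair_seed = {frozenset(p): s for s, p in enumerate([(1, 2), (1, 3), (2, 3)], start=100)}
masked = {
    cid: mask_update(cid, upd, {peer: pair_seed[frozenset((cid, peer))]
                                for peer in (1, 2, 3) if peer != cid})
    for cid, upd in updates.items()
}
# The server sees only masked uploads, yet their sum equals the true sum.
assert np.allclose(sum(masked.values()), sum(updates.values()))
```

Dropout handling is what makes the real protocol substantially more involved: if a client disappears after masking, the remaining clients must reconstruct its pairwise seeds from secret shares so the server can unmask the sum.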
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented (including all information) and is not responsible for any consequences arising from its use.