A Credibility-aware Swarm-Federated Deep Learning Framework in Internet of Vehicles
- URL: http://arxiv.org/abs/2108.03981v1
- Date: Mon, 9 Aug 2021 12:33:54 GMT
- Title: A Credibility-aware Swarm-Federated Deep Learning Framework in Internet of Vehicles
- Authors: Zhe Wang, Xinhang Li, Tianhao Wu, Chen Xu, Lin Zhang
- Abstract summary: Federated Deep Learning (FDL) is helping to realize distributed machine learning in the Internet of Vehicles (IoV).
This paper proposes a Swarm-Federated Deep Learning framework in the IoV system (IoV-SFDL) that integrates SL into the FDL framework.
- Score: 14.068813113859338
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Deep Learning (FDL) is helping to realize distributed machine
learning in the Internet of Vehicles (IoV). However, FDL's global model requires
multiple clients to upload their local model parameters, which still incurs
unavoidable communication overhead and data privacy risks. The recently
proposed Swarm Learning (SL) provides a decentralized machine-learning approach
uniting edge computing and blockchain-based coordination without the need for a
central coordinator. This paper proposes a Swarm-Federated Deep Learning
framework in the IoV system (IoV-SFDL) that integrates SL into the FDL
framework. The IoV-SFDL organizes vehicles to train local SL models with
adjacent vehicles via blockchain-empowered SL, then aggregates the global FDL
model across the SL groups with a proposed credibility-weight prediction
algorithm. Extensive experimental results demonstrate that, compared with the
baseline frameworks, the proposed IoV-SFDL framework achieves a 16.72%
reduction in edge-to-global communication overhead while improving model
performance by about 5.02% under the same number of training iterations.
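The abstract does not reproduce the credibility-weight prediction algorithm itself, but the aggregation step it feeds can be pictured as a credibility-weighted average over SL-group models. The following is a minimal sketch under that assumption; the function name, the flat parameter dictionaries, and the normalization are illustrative, not the paper's implementation:

```python
# Sketch of credibility-weighted global aggregation (an assumption; the
# paper's actual credibility-weight prediction algorithm is not shown here).
from typing import Dict, List

def aggregate_global_model(
    group_params: List[Dict[str, float]],  # one parameter dict per SL group
    credibilities: List[float],            # predicted credibility per group
) -> Dict[str, float]:
    """Weighted average of SL-group models, weights normalized to sum to 1."""
    total = sum(credibilities)
    weights = [c / total for c in credibilities]
    global_params: Dict[str, float] = {}
    for name in group_params[0]:
        global_params[name] = sum(
            w * params[name] for w, params in zip(weights, group_params)
        )
    return global_params

# Two SL groups, the second judged three times as credible:
groups = [{"layer1": 1.0}, {"layer1": 3.0}]
print(aggregate_global_model(groups, [1.0, 3.0]))  # {'layer1': 2.5}
```

Normalizing the credibilities keeps the aggregation a convex combination, so a low-credibility group can only shift the global model a little.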
Related papers
- Adaptive and Parallel Split Federated Learning in Vehicular Edge Computing [6.004901615052089]
Vehicular edge intelligence (VEI) is a promising paradigm for enabling future intelligent transportation systems.
Federated learning (FL) is one of the fundamental technologies facilitating collaborative local model training and aggregation.
We develop an Adaptive Split Federated Learning scheme for Vehicular Edge Computing (ASFV).
arXiv Detail & Related papers (2024-05-29T02:34:38Z)
- Stragglers-Aware Low-Latency Synchronous Federated Learning via Layer-Wise Model Updates [71.81037644563217]
Synchronous federated learning (FL) is a popular paradigm for collaborative edge learning.
As some of the devices may have limited computational resources and varying availability, FL latency is highly sensitive to stragglers.
We propose straggler-aware layer-wise federated learning (SALF) that leverages the optimization procedure of NNs via backpropagation to update the global model in a layer-wise fashion.
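The exact SALF procedure is not reproduced in this summary, but layer-wise aggregation with stragglers can be sketched as follows, under the assumption that each device reports only the layers it finished updating (backpropagation yields gradients from the output layer backwards, so a straggler's partial update naturally covers the deeper layers first):

```python
# Sketch of straggler-aware layer-wise averaging (illustrative assumption:
# each device reports updates only for the layers it finished computing).
from typing import Dict, List

def layerwise_average(updates: List[Dict[str, float]]) -> Dict[str, float]:
    """Average each layer over the devices that actually reported it."""
    merged: Dict[str, float] = {}
    layer_names = {name for update in updates for name in update}
    for name in layer_names:
        reported = [update[name] for update in updates if name in update]
        merged[name] = sum(reported) / len(reported)
    return merged

# Device 0 finished both layers; straggler device 1 only the output layer:
updates = [{"hidden": 0.2, "output": 0.4}, {"output": 0.8}]
print(layerwise_average(updates))  # {'hidden': 0.2, 'output': 0.6...}
```

The point of the per-layer denominator is that a layer's update is never diluted by devices that contributed nothing to it.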
arXiv Detail & Related papers (2024-03-27T09:14:36Z)
- Communication Resources Constrained Hierarchical Federated Learning for End-to-End Autonomous Driving [67.78611905156808]
This paper proposes an optimization-based Communication Resource Constrained Hierarchical Federated Learning framework.
Results show that the proposed CRCHFL both accelerates the convergence rate and enhances the generalization of federated learning autonomous driving model.
arXiv Detail & Related papers (2023-06-28T12:44:59Z)
- PFSL: Personalized & Fair Split Learning with Data & Label Privacy for thin clients [0.5144809478361603]
PFSL is a new framework of distributed split learning where a large number of thin clients perform transfer learning in parallel.
We implement a lightweight step of personalization of client models to provide high performance for their respective data distributions.
Our accuracy far exceeds that of current SL algorithms and is very close to that of centralized learning on several real-life benchmarks.
arXiv Detail & Related papers (2023-03-19T10:38:29Z)
- Time-sensitive Learning for Heterogeneous Federated Edge Intelligence [52.83633954857744]
We investigate real-time machine learning in a federated edge intelligence (FEI) system.
FEI systems exhibit heterogeneous communication and computational resource distributions.
We propose a time-sensitive federated learning (TS-FL) framework to minimize the overall run-time for collaboratively training a shared ML model.
arXiv Detail & Related papers (2023-01-26T08:13:22Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
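The FL-versus-SL contrast above hinges on what each client releases. A toy sketch of the SL exchange (purely illustrative; the "layers" here are single multiplications, and the function names are invented for this example) makes the cut-layer hand-off concrete:

```python
# Sketch of the split-learning exchange described above (illustrative only):
# the client runs its layers up to the cut, releases the "smashed"
# activations, and the server completes the forward pass.

def client_forward(x: float, client_weight: float) -> float:
    """Client-side layers up to the cut layer (one multiply stands in for them)."""
    return x * client_weight  # the smashed data sent to the server

def server_forward(smashed: float, server_weight: float) -> float:
    """Server-side layers after the cut layer."""
    return smashed * server_weight

smashed = client_forward(2.0, 0.5)    # raw input 2.0 never leaves the client
prediction = server_forward(smashed, 3.0)
print(prediction)  # 3.0
```

Only the cut-layer activations cross the network, which is exactly the privacy surface the contrastive-distillation approach above tries to shrink further.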
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Enhanced Decentralized Federated Learning based on Consensus in Connected Vehicles [14.80476265018825]
Federated learning (FL) is emerging as a new paradigm to train machine learning (ML) models in distributed systems.
We introduce C-DFL (Consensus based Decentralized Federated Learning) to tackle federated learning on connected vehicles.
arXiv Detail & Related papers (2022-09-22T01:21:23Z)
- An Efficient and Reliable Asynchronous Federated Learning Scheme for Smart Public Transportation [24.8522516507395]
Federated learning (FL) is a distributed machine learning scheme that allows vehicles to receive continuous model updates without having to upload raw data to the cloud.
This paper offers a blockchain-based asynchronous federated learning scheme with a dynamic scaling factor (DBAFL).
Experiments conducted on heterogeneous devices validate the superior learning performance, efficiency, and reliability of DBAFL.
arXiv Detail & Related papers (2022-08-15T13:56:29Z)
- Multi-Edge Server-Assisted Dynamic Federated Learning with an Optimized Floating Aggregation Point [51.47520726446029]
Cooperative edge learning (CE-FL) is a distributed machine learning architecture.
We model the processes carried out during CE-FL and conduct an analytical study of its training.
We show the effectiveness of our framework with the data collected from a real-world testbed.
arXiv Detail & Related papers (2022-03-26T00:41:57Z)
- Parallel Successive Learning for Dynamic Distributed Model Training over Heterogeneous Wireless Networks [50.68446003616802]
Federated learning (FedL) has emerged as a popular technique for distributing model training over a set of wireless devices.
We develop parallel successive learning (PSL), which expands the FedL architecture along three dimensions.
Our analysis sheds light on the notion of cold vs. warmed up models, and model inertia in distributed machine learning.
arXiv Detail & Related papers (2022-02-07T05:11:01Z)
- IPLS : A Framework for Decentralized Federated Learning [6.6271520914941435]
We introduce IPLS, a fully decentralized federated learning framework that is partially based on the interplanetary file system (IPFS).
IPLS scales with the number of participants, is robust against intermittent connectivity and dynamic participant departures/arrivals, requires minimal resources, and guarantees that the accuracy of the trained model quickly converges to that of a centralized FL framework with an accuracy drop of less than one per thousand.
arXiv Detail & Related papers (2021-01-06T07:44:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.