Loss Tolerant Federated Learning
- URL: http://arxiv.org/abs/2105.03591v1
- Date: Sat, 8 May 2021 04:44:47 GMT
- Title: Loss Tolerant Federated Learning
- Authors: Pengyuan Zhou, Pei Fang, Pan Hui
- Abstract summary: In this paper, we explore loss-tolerant federated learning (LT-FL) in terms of aggregation, fairness, and personalization.
We use ThrowRightAway (TRA) to accelerate data uploading for low-bandwidth devices by intentionally ignoring some packet losses.
The results suggest that, with proper integration, TRA and other algorithms can together guarantee the personalization and fairness performance in the face of packet loss below a certain fraction.
- Score: 6.595005044268588
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Federated learning has attracted attention in recent years for
collaboratively training models on distributed devices while preserving privacy.
The limited network capacity of mobile and IoT devices has been seen as one of
the major challenges for cross-device federated learning. Recent solutions have
focused on threshold-based client selection schemes to guarantee communication
efficiency. However, we find this approach can cause biased client selection
and result in deteriorated performance. Moreover, we find that the challenge of
limited network capacity may be overstated in some cases and that packet loss
is not always harmful. In this paper, we explore loss-tolerant federated
learning (LT-FL) in terms of aggregation, fairness, and personalization. We use
ThrowRightAway (TRA) to accelerate data uploading for low-bandwidth devices by
intentionally ignoring some packet losses. The results suggest that, with
proper integration, TRA and other algorithms can together guarantee
personalization and fairness performance in the face of packet loss below a
certain fraction (10%-30%).
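The abstract's core idea, uploading without retransmission and aggregating whatever arrives, can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual TRA implementation: the function names, the per-coordinate averaging rule, and the packet layout are all assumptions made for the sake of the example.

```python
import random


def upload_with_tra(update, packet_size=4, loss_rate=0.2, seed=0):
    """Simulate a TRA-style upload: the model update is split into packets,
    and lost packets are simply skipped instead of being retransmitted.
    (Hypothetical sketch; the paper's actual protocol may differ.)"""
    rng = random.Random(seed)
    received = {}  # coordinate index -> value, for packets that survived
    for start in range(0, len(update), packet_size):
        if rng.random() >= loss_rate:  # this packet was delivered
            for i in range(start, min(start + packet_size, len(update))):
                received[i] = update[i]
    return received


def loss_tolerant_average(client_packets, dim):
    """Server side: average each coordinate over only the clients that
    actually delivered it, so lost packets do not drag the mean toward
    zero. Coordinates no client delivered default to 0.0."""
    avg = [0.0] * dim
    for i in range(dim):
        vals = [p[i] for p in client_packets if i in p]
        avg[i] = sum(vals) / len(vals) if vals else 0.0
    return avg
```

With `loss_rate=0.0` the aggregate reduces to an ordinary average; as the loss rate rises toward the 10%-30% range the abstract mentions, each coordinate is simply averaged over fewer clients.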
Related papers
- Towards Resource-Efficient Federated Learning in Industrial IoT for Multivariate Time Series Analysis [50.18156030818883]
Anomalies and missing data constitute a thorny problem in industrial applications.
Deep-learning-enabled anomaly detection has emerged as a critical direction.
The data collected on edge devices contain private user information.
arXiv Detail & Related papers (2024-11-06T15:38:31Z) - Edge-device Collaborative Computing for Multi-view Classification [9.047284788663776]
We explore collaborative inference at the edge, in which edge nodes and end devices share correlated data and the inference computational burden.
We introduce selective schemes that decrease bandwidth resource consumption by effectively reducing data redundancy.
Experimental results highlight that selective collaborative schemes can achieve different trade-offs between the above performance metrics.
arXiv Detail & Related papers (2024-09-24T11:07:33Z) - Edge-assisted U-Shaped Split Federated Learning with Privacy-preserving for Internet of Things [4.68267059122563]
We present an innovative Edge-assisted U-Shaped Split Federated Learning (EUSFL) framework, which harnesses the high-performance capabilities of edge servers.
In this framework, we leverage Federated Learning (FL) to enable data holders to collaboratively train models without sharing their data.
We also propose a novel noise mechanism called LabelDP to ensure that data features and labels can securely resist reconstruction attacks.
arXiv Detail & Related papers (2023-11-08T05:14:41Z) - Analysis and Optimization of Wireless Federated Learning with Data Heterogeneity [72.85248553787538]
This paper focuses on performance analysis and optimization for wireless FL, considering data heterogeneity, combined with wireless resource allocation.
We formulate the loss function minimization problem, under constraints on long-term energy consumption and latency, and jointly optimize client scheduling, resource allocation, and the number of local training epochs (CRE).
Experiments on real-world datasets demonstrate that the proposed algorithm outperforms other benchmarks in terms of the learning accuracy and energy consumption.
arXiv Detail & Related papers (2023-08-04T04:18:01Z) - Semi-Synchronous Personalized Federated Learning over Mobile Edge Networks [88.50555581186799]
We propose a semi-synchronous PFL algorithm, termed Semi-Synchronous Personalized Federated Averaging (PerFedS2), over mobile edge networks.
We derive an upper bound of the convergence rate of PerFedS2 in terms of the number of participants per global round and the number of rounds.
Experimental results verify the effectiveness of PerFedS2 in saving training time as well as guaranteeing the convergence of training loss.
arXiv Detail & Related papers (2022-09-27T02:12:43Z) - Mixing between the Cross Entropy and the Expectation Loss Terms [89.30385901335323]
Cross-entropy loss tends to focus on hard-to-classify samples during training.
We show that adding to the optimization goal the expectation loss helps the network to achieve better accuracy.
Our experiments show that the new training protocol improves performance across a diverse set of classification domains.
arXiv Detail & Related papers (2021-09-12T23:14:06Z) - Low-Latency Federated Learning over Wireless Channels with Differential Privacy [142.5983499872664]
In federated learning (FL), model training is distributed over clients and local models are aggregated by a central server.
In this paper, we aim to minimize FL training delay over wireless channels, constrained by overall training performance as well as each client's differential privacy (DP) requirement.
arXiv Detail & Related papers (2021-06-20T13:51:18Z) - Quantized Federated Learning under Transmission Delay and Outage Constraints [30.892724364965005]
Federated learning is a viable distributed learning paradigm that trains a machine learning model collaboratively with massive numbers of mobile devices at the wireless edge.
In practical systems with limited radio resources, transmission of a large number of model parameters inevitably suffers from quantization errors (QE) and transmission outage (TO).
We propose a robust FL scheme, named FedTOE, which performs joint allocation of wireless resources and quantization bits across the clients to minimize the QE while making the clients have the same TO probability.
arXiv Detail & Related papers (2021-06-17T11:29:12Z) - Packet-Loss-Tolerant Split Inference for Delay-Sensitive Deep Learning in Lossy Wireless Networks [4.932130498861988]
In distributed inference, computational tasks are offloaded from the IoT device to other devices or the edge server via lossy IoT networks.
Narrow-band and lossy IoT networks cause non-negligible packet losses and retransmissions, resulting in non-negligible communication latency.
We propose a split inference with no retransmissions (SI-NR) method that achieves high accuracy without any retransmissions, even when packet loss occurs.
arXiv Detail & Related papers (2021-04-28T08:28:22Z) - Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
arXiv Detail & Related papers (2021-01-18T07:19:08Z) - Ternary Compression for Communication-Efficient Federated Learning [17.97683428517896]
Federated learning provides a potential solution to privacy-preserving and secure machine learning.
We propose a ternary federated averaging protocol (T-FedAvg) to reduce the upstream and downstream communication of federated learning systems.
Our results show that the proposed T-FedAvg is effective in reducing communication costs and can even achieve slightly better performance on non-IID data.
arXiv Detail & Related papers (2020-03-07T11:55:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.