Privacy-Preserving Serverless Edge Learning with Decentralized Small Data
- URL: http://arxiv.org/abs/2111.14955v2
- Date: Wed, 1 Dec 2021 02:35:54 GMT
- Title: Privacy-Preserving Serverless Edge Learning with Decentralized Small Data
- Authors: Shih-Chun Lin and Chia-Hung Lin
- Abstract summary: Distributed training strategies have recently become a promising approach to ensure data privacy when training deep models.
This paper extends conventional serverless platforms with serverless edge learning architectures and provides an efficient distributed training framework from the networking perspective.
- Score: 13.254530176359182
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the last decade, data-driven algorithms outperformed traditional
optimization-based algorithms in many research areas, such as computer vision,
natural language processing, etc. However, extensive data usage brings a new
challenge, and even a threat, to deep learning algorithms: privacy preservation.
Distributed training strategies have recently become a promising approach to
ensure data privacy when training deep models. This paper extends conventional
serverless platforms with serverless edge learning architectures and provides
an efficient distributed training framework from the networking perspective.
This framework dynamically orchestrates available resources among heterogeneous
physical units to efficiently fulfill deep learning objectives. The design
jointly considers learning task requests and underlying infrastructure
heterogeneity, including last-mile transmissions, the computation abilities of
mobile devices, edge and cloud computing centers, and device battery status.
Furthermore, to significantly reduce distributed training overheads,
small-scale data training is proposed by integrating a general, simple data
classifier. This low-load enhancement works seamlessly with various
distributed deep models to improve communication and computation efficiency
during the training phase. Finally, open challenges and future research
directions are discussed to encourage the research community to develop
efficient distributed deep learning techniques.
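The framework is described only at the architecture level; no reference implementation accompanies the abstract. As a rough illustration, the following Python sketch shows one plausible shape for the two mechanisms it names: utility-based placement of training tasks across heterogeneous units (weighing last-mile bandwidth, compute, and battery) and small-scale data selection via a cheap classifier. Every name, weight, and threshold here (Worker, score_worker, select_small_data, keep_prob) is an assumption for illustration, not the authors' design.

```python
# Hypothetical sketch: resource-aware task placement plus small-scale
# data selection. Constants and heuristics are illustrative only.
from dataclasses import dataclass
from typing import List

@dataclass
class Worker:
    name: str              # e.g., "phone-3", "edge-1", "cloud-0"
    bandwidth_mbps: float  # last-mile link capacity
    gflops: float          # available compute
    battery: float         # 0.0-1.0; edge/cloud units report 1.0

def score_worker(w: Worker, model_mb: float) -> float:
    """Heuristic utility: prefer fast links and compute, avoid
    draining batteries. Weights are assumed, not from the paper."""
    comm_s = model_mb * 8.0 / w.bandwidth_mbps  # transfer time (s)
    return 0.5 * w.gflops - 1.0 * comm_s + 2.0 * w.battery

def place_tasks(workers: List[Worker], n_tasks: int,
                model_mb: float) -> List[Worker]:
    """Assign training tasks to the highest-utility units."""
    ranked = sorted(workers, key=lambda w: score_worker(w, model_mb),
                    reverse=True)
    return ranked[:n_tasks]

def select_small_data(samples, clf, keep_prob=0.6):
    """Keep only samples a cheap classifier deems informative,
    e.g., those it is unsure about (low confidence)."""
    kept = []
    for x, y in samples:
        conf = clf(x)  # assumed: returns max class probability in [0, 1]
        if conf < keep_prob:  # uncertain sample -> worth training on
            kept.append((x, y))
    return kept
```

The intent is only to show where the two decisions plug in: placement happens per training request, and the data filter runs before local training to shrink communication and computation load.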
Related papers
- Research on Key Technologies for Cross-Cloud Federated Training of Large Language Models [7.762524368844918]
Cross-cloud federated training offers a new approach to addressing the resource bottlenecks of a single cloud platform.
This study analyzes the key technologies of cross-cloud federated training, including data partitioning and distribution, communication optimization, model aggregation algorithms, and the compatibility of heterogeneous cloud platforms.
arXiv Detail & Related papers (2024-10-24T19:57:17Z)
- Communication-Efficient Large-Scale Distributed Deep Learning: A Comprehensive Survey [43.57122822150023]
This article surveys the literature on algorithms and technologies aimed at achieving efficient communication in large-scale distributed deep learning.
We first introduce efficient algorithms for model synchronization and communication data compression in the context of large-scale distributed training.
Next, we introduce efficient strategies related to resource allocation and task scheduling for use in distributed training and inference.
arXiv Detail & Related papers (2024-04-09T08:35:04Z)
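Among the compression techniques such surveys cover, top-k gradient sparsification is a representative example: each worker transmits only its largest-magnitude gradient entries. A minimal NumPy sketch of that idea (my illustration, not code from the survey):

```python
import numpy as np

def topk_sparsify(grad: np.ndarray, k: int):
    """Keep the k largest-magnitude entries; send (indices, values)
    instead of the dense gradient."""
    flat = grad.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

def topk_restore(idx, vals, shape):
    """Rebuild a dense (mostly zero) gradient on the receiver."""
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = vals
    return flat.reshape(shape)
```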
- An Empirical Study of Efficiency and Privacy of Federated Learning Algorithms [2.994794762377111]
In today's world, the rapid expansion of IoT networks and the proliferation of smart devices have resulted in the generation of substantial amounts of heterogeneous data.
To handle this data effectively, advanced data processing technologies are necessary to guarantee the preservation of both privacy and efficiency.
Federated learning has emerged as a distributed learning method that trains models locally and aggregates them on a server to preserve data privacy.
arXiv Detail & Related papers (2023-12-24T00:13:41Z)
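The local-training-plus-server-aggregation loop described above is, in its simplest form, FedAvg-style weighted averaging. A toy sketch with NumPy parameter vectors (the client interface is assumed, not taken from the paper):

```python
import numpy as np

def fedavg_round(global_w, clients):
    """One federated round: each client trains locally on its own
    data, the server averages the results weighted by data size."""
    updates, sizes = [], []
    for client in clients:
        local_w = client.train(global_w.copy())  # assumed client API
        updates.append(local_w)
        sizes.append(client.num_samples)
    total = float(sum(sizes))
    return sum(w * (n / total) for w, n in zip(updates, sizes))
```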
- A Novel Neural Network-Based Federated Learning System for Imbalanced and Non-IID Data [2.9642661320713555]
Most machine learning algorithms rely heavily on large amounts of data, which may be collected from various sources.
To address the privacy concerns this raises, researchers have introduced federated learning, where a prediction model is learned while ensuring the privacy of clients' data.
In this research, we propose a centralized, neural network-based federated learning system.
arXiv Detail & Related papers (2023-11-16T17:14:07Z)
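"Imbalanced and non-IID" typically means each client holds a skewed slice of the label space. Experiments commonly simulate this with a Dirichlet partition over labels; a small sketch of that standard technique (not necessarily the authors' exact setup):

```python
import numpy as np

def dirichlet_partition(labels, n_clients, alpha=0.3, seed=0):
    """Split sample indices among clients with label skew; smaller
    alpha -> more imbalanced, more non-IID shards."""
    labels = np.asarray(labels)
    rng = np.random.default_rng(seed)
    clients = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, shard in zip(clients, np.split(idx, cuts)):
            client.extend(shard.tolist())
    return clients
```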
- Computation-efficient Deep Learning for Computer Vision: A Survey [121.84121397440337]
Deep learning models have reached or even exceeded human-level performance in a range of visual perception tasks.
However, these models usually demand significant computational resources, leading to impractical power consumption, latency, or carbon emissions in real-world scenarios.
A new research focus is computationally efficient deep learning, which strives to achieve satisfactory performance while minimizing the computational cost during inference.
arXiv Detail & Related papers (2023-08-27T03:55:28Z)
- EdgeConvEns: Convolutional Ensemble Learning for Edge Intelligence [0.0]
Deep edge intelligence aims to deploy deep learning models that demand computationally expensive training in the edge network with limited computational power.
This study proposes a convolutional ensemble learning approach, coined EdgeConvEns, that trains heterogeneous weak models at the edge and learns to ensemble them where edge data are heterogeneously distributed.
arXiv Detail & Related papers (2023-07-25T20:07:32Z)
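The core idea, combining heterogeneous weak models trained at different edge nodes, can be illustrated with the simplest combination rule, probability averaging. Note that EdgeConvEns learns the ensembling instead; this sketch is a generic baseline with an assumed predict_proba interface:

```python
import numpy as np

def ensemble_predict(models, x):
    """Average class probabilities from weak edge models; each model
    is assumed to expose predict_proba(x) -> (n_classes,) array."""
    probs = np.mean([m.predict_proba(x) for m in models], axis=0)
    return int(np.argmax(probs))
```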
- On Efficient Training of Large-Scale Deep Learning Models: A Literature Review [90.87691246153612]
The field of deep learning has witnessed significant progress, particularly in computer vision (CV), natural language processing (NLP), and speech.
The use of large-scale models trained on vast amounts of data holds immense promise for practical applications.
With increasing demands on computational capacity, a comprehensive summary of techniques for accelerating the training of deep learning models is still much anticipated.
arXiv Detail & Related papers (2023-04-07T11:13:23Z)
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
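Over-the-air computation exploits signal superposition: when clients transmit their analog-encoded updates simultaneously, the server receives roughly their sum in a single channel use, plus noise. A toy simulation of that aggregation step (idealized; real systems additionally need power control and channel inversion):

```python
import numpy as np

def ota_aggregate(updates, noise_std=0.01, seed=0):
    """Simulate analog over-the-air aggregation: the channel adds
    all transmitted update vectors and Gaussian receiver noise."""
    rng = np.random.default_rng(seed)
    superposed = np.sum(updates, axis=0)        # free sum "in the air"
    noise = rng.normal(0.0, noise_std, superposed.shape)
    return (superposed + noise) / len(updates)  # noisy average
```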
- Quasi-Global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data [77.88594632644347]
Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks.
In realistic learning scenarios, the presence of heterogeneity across different clients' local datasets poses an optimization challenge.
We propose a novel momentum-based method to mitigate this decentralized training difficulty.
arXiv Detail & Related papers (2021-02-09T11:27:14Z)
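In decentralized training, each node mixes parameters with its neighbors (gossip) rather than synchronizing through a server; this paper contributes a momentum variant that tolerates heterogeneous data. The sketch below shows generic gossip averaging with plain local momentum, not the authors' quasi-global momentum update:

```python
import numpy as np

def gossip_step(w_i, neighbor_ws, grad_i, m_i, lr=0.1, beta=0.9):
    """One decentralized step for node i: mix parameters with
    neighbors, then apply a local momentum SGD update."""
    mixed = np.mean([w_i] + list(neighbor_ws), axis=0)  # uniform gossip
    m_i = beta * m_i + grad_i                           # momentum buffer
    return mixed - lr * m_i, m_i
```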
- Toward Multiple Federated Learning Services Resource Sharing in Mobile Edge Networks [88.15736037284408]
We study a new model of multiple federated learning services at the multi-access edge computing server.
We propose a joint resource optimization and hyper-learning rate control problem, namely MS-FEDL.
Our simulation results demonstrate the convergence performance of our proposed algorithms.
arXiv Detail & Related papers (2020-11-25T01:29:41Z)
- Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address the open problems in this area, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)