Training Machine Learning models at the Edge: A Survey
- URL: http://arxiv.org/abs/2403.02619v2
- Date: Wed, 13 Mar 2024 07:19:06 GMT
- Title: Training Machine Learning models at the Edge: A Survey
- Authors: Aymen Rayane Khouas, Mohamed Reda Bouadjenek, Hakim Hacid, and Sunil
Aryal
- Abstract summary: This survey delves into Edge Learning (EL), specifically the optimization of Machine Learning model training at the edge.
The objective is to comprehensively explore diverse approaches and methodologies in EL, synthesize existing knowledge, identify challenges, and highlight future trends.
This survey further provides a guideline for comparing techniques used to optimize ML for edge learning, along with an exploration of different frameworks, libraries, and simulation tools available for EL.
- Score: 2.8449839307925955
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Edge Computing (EC) has gained significant traction in recent years,
promising enhanced efficiency by integrating Artificial Intelligence (AI)
capabilities at the edge. While the focus has primarily been on the deployment
and inference of Machine Learning (ML) models at the edge, the training aspect
remains less explored. This survey delves into Edge Learning (EL), specifically
the optimization of ML model training at the edge. The objective is to
comprehensively explore diverse approaches and methodologies in EL, synthesize
existing knowledge, identify challenges, and highlight future trends. Utilizing
Scopus' advanced search, relevant literature on EL was identified, revealing a
concentration of research efforts in distributed learning methods, particularly
Federated Learning (FL). This survey further provides a guideline for comparing
techniques used to optimize ML for edge learning, along with an exploration of
different frameworks, libraries, and simulation tools available for EL. In
doing so, the paper contributes to a holistic understanding of the current
landscape and future directions in the intersection of edge computing and
machine learning, paving the way for informed comparisons between optimization
methods and techniques designed for edge learning.
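The abstract notes that research effort in edge learning concentrates on distributed methods, particularly Federated Learning. As a point of reference, the core FedAvg aggregation step (the most common FL baseline) can be sketched as below; the client data sizes and parameter vectors here are illustrative, not drawn from the survey.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg aggregation).

    client_weights: list of parameter vectors, one per edge client
    client_sizes:   number of local training samples held by each client
    """
    total = sum(client_sizes)
    coeffs = np.array(client_sizes, dtype=float) / total
    stacked = np.stack(client_weights)
    # Each client's contribution is proportional to its local data size
    return (coeffs[:, None] * stacked).sum(axis=0)

# Three hypothetical edge clients with different amounts of local data
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 30, 60]
global_model = fedavg(clients, sizes)
print(global_model)  # pulled toward the data-rich clients
```

In a full FL round, each client would first train locally on its own data before the server applies this aggregation; the sketch shows only the server-side step.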
Related papers
- Exploring the landscape of large language models: Foundations, techniques, and challenges [8.042562891309414]
The article sheds light on the mechanics of in-context learning and a spectrum of fine-tuning approaches.
It explores how LLMs can be more closely aligned with human preferences through innovative reinforcement learning frameworks.
The ethical dimensions of LLM deployment are discussed, underscoring the need for mindful and responsible application.
arXiv Detail & Related papers (2024-04-18T08:01:20Z)
- LLM Inference Unveiled: Survey and Roofline Model Insights [62.92811060490876]
Large Language Model (LLM) inference is rapidly evolving, presenting a unique blend of opportunities and challenges.
Our survey stands out from traditional literature reviews by not only summarizing the current state of research but also by introducing a framework based on the roofline model.
This framework identifies the bottlenecks when deploying LLMs on hardware devices and provides a clear understanding of practical problems.
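The roofline model referenced in this entry bounds attainable throughput by the minimum of peak compute and memory bandwidth times arithmetic intensity. A minimal sketch of that calculation follows; the hardware numbers and intensity values are illustrative assumptions, not figures from the paper.

```python
def roofline_flops(peak_flops, mem_bw_bytes, arithmetic_intensity):
    """Attainable FLOP/s under the roofline model.

    arithmetic_intensity: FLOPs performed per byte moved from memory.
    Below the ridge point a kernel is memory-bound; above it, compute-bound.
    """
    return min(peak_flops, mem_bw_bytes * arithmetic_intensity)

# Hypothetical accelerator: 100 TFLOP/s peak, 1 TB/s memory bandwidth
peak, bw = 100e12, 1e12
decode_ai = 2.0     # LLM decode: matrix-vector work, low intensity (illustrative)
prefill_ai = 300.0  # LLM prefill: large matrix-matrix work (illustrative)

print(roofline_flops(peak, bw, decode_ai))   # memory-bound: far below peak
print(roofline_flops(peak, bw, prefill_ai))  # compute-bound: hits peak
```

This is the kind of bottleneck identification the surveyed framework formalizes: low-intensity decode kernels are limited by bandwidth, not compute.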
arXiv Detail & Related papers (2024-02-26T07:33:05Z)
- Machine Unlearning of Pre-trained Large Language Models [17.40601262379265]
This study investigates the concept of the 'right to be forgotten' within the context of large language models (LLMs).
We explore machine unlearning as a pivotal solution, with a focus on pre-trained models.
arXiv Detail & Related papers (2024-02-23T07:43:26Z)
- Machine Learning Insides OptVerse AI Solver: Design Principles and Applications [74.67495900436728]
We present a comprehensive study on the integration of machine learning (ML) techniques into Huawei Cloud's OptVerse AI solver.
We showcase our methods for generating complex SAT and MILP instances utilizing generative models that mirror the multifaceted structures of real-world problems.
We detail the incorporation of state-of-the-art parameter tuning algorithms which markedly elevate solver performance.
arXiv Detail & Related papers (2024-01-11T15:02:15Z)
- PILOT: A Pre-Trained Model-Based Continual Learning Toolbox [71.63186089279218]
This paper introduces a pre-trained model-based continual learning toolbox known as PILOT.
On the one hand, PILOT implements some state-of-the-art class-incremental learning algorithms based on pre-trained models, such as L2P, DualPrompt, and CODA-Prompt.
On the other hand, PILOT fits typical class-incremental learning algorithms within the context of pre-trained models to evaluate their effectiveness.
arXiv Detail & Related papers (2023-09-13T17:55:11Z)
- A Comprehensive Review and a Taxonomy of Edge Machine Learning: Requirements, Paradigms, and Techniques [5.964672966134971]
The union of Edge Computing (EC) and Artificial Intelligence (AI) has brought forward the Edge AI concept to provide intelligent solutions close to the end-user environment.
Machine Learning (ML), as the most advanced branch of AI in the past few years, has shown encouraging results and applications in the edge environment.
This paper aims at providing a comprehensive taxonomy and a systematic review of Edge ML techniques.
arXiv Detail & Related papers (2023-02-16T20:33:33Z)
- Resource allocation optimization using artificial intelligence methods in various computing paradigms: A Review [7.738849852406729]
This paper presents a comprehensive literature review on the application of artificial intelligence (AI) methods for resource allocation optimization.
To the best of our knowledge, there are no existing reviews on AI-based resource allocation approaches in different computational paradigms.
arXiv Detail & Related papers (2022-03-23T10:31:15Z)
- Edge-assisted Democratized Learning Towards Federated Analytics [67.44078999945722]
We show the hierarchical learning structure of the proposed edge-assisted democratized learning mechanism, namely Edge-DemLearn.
We also validate Edge-DemLearn as a flexible model training mechanism to build a distributed control and aggregation methodology in regions.
arXiv Detail & Related papers (2020-12-01T11:46:03Z)
- A Survey on Large-scale Machine Learning [67.6997613600942]
Machine learning can provide deep insights into data, allowing machines to make high-quality predictions.
Most sophisticated machine learning approaches suffer from huge time costs when operating on large-scale data.
Large-scale Machine Learning aims to learn patterns from big data efficiently while maintaining comparable predictive performance.
arXiv Detail & Related papers (2020-08-10T06:07:52Z)
- Incentive Mechanism Design for Resource Sharing in Collaborative Edge Learning [106.51930957941433]
In 5G and Beyond networks, Artificial Intelligence applications are expected to be increasingly ubiquitous.
This necessitates a paradigm shift from the current cloud-centric model training approach to the Edge Computing based collaborative learning scheme known as edge learning.
arXiv Detail & Related papers (2020-05-31T12:45:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.