Training Machine Learning models at the Edge: A Survey
- URL: http://arxiv.org/abs/2403.02619v3
- Date: Fri, 11 Oct 2024 03:11:36 GMT
- Title: Training Machine Learning models at the Edge: A Survey
- Authors: Aymen Rayane Khouas, Mohamed Reda Bouadjenek, Hakim Hacid, Sunil Aryal
- Abstract summary: This survey explores the concept of edge learning, specifically the optimization of Machine Learning model training at the edge.
Relevant literature on edge learning was identified, revealing a concentration of research efforts in distributed learning methods, particularly federated learning.
This survey further provides a guideline for comparing techniques used to optimize ML for edge learning, along with an exploration of the different frameworks, libraries, and simulation tools available.
- Score: 2.8449839307925955
- Abstract: Edge computing has gained significant traction in recent years, promising enhanced efficiency by integrating artificial intelligence capabilities at the edge. While the focus has primarily been on the deployment and inference of Machine Learning (ML) models at the edge, the training aspect remains less explored. This survey explores the concept of edge learning, specifically the optimization of ML model training at the edge. The objective is to comprehensively explore diverse approaches and methodologies in edge learning, synthesize existing knowledge, identify challenges, and highlight future trends. Utilizing Scopus and Web of Science advanced search, relevant literature on edge learning was identified, revealing a concentration of research efforts in distributed learning methods, particularly federated learning. This survey further provides a guideline for comparing techniques used to optimize ML for edge learning, along with an exploration of the different frameworks, libraries, and simulation tools available. In doing so, the paper contributes to a holistic understanding of the current landscape and future directions in the intersection of edge computing and machine learning, paving the way for informed comparisons between optimization methods and techniques designed for training on the edge.
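As a concrete reference point for the distributed learning methods the survey emphasizes, below is a minimal sketch of Federated Averaging (FedAvg) in plain NumPy. The linear model, synthetic per-client data, and hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Minimal FedAvg sketch: clients train locally, the server averages their
# models weighted by local data size. Illustrative setup only (linear
# regression on synthetic data), not the survey's own experiments.
import numpy as np

rng = np.random.default_rng(0)

def make_client_data(n_samples, true_w):
    """Synthetic linear-regression data standing in for a device's local dataset."""
    X = rng.normal(size=(n_samples, true_w.shape[0]))
    y = X @ true_w + 0.1 * rng.normal(size=n_samples)
    return X, y

def local_update(w, X, y, lr=0.05, epochs=5):
    """Client-side training: a few epochs of full-batch gradient descent."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Simulate heterogeneous edge devices with different amounts of local data.
true_w = np.array([1.0, -2.0, 0.5])
clients = [make_client_data(n, true_w) for n in (20, 50, 80)]

w_global = np.zeros_like(true_w)
for _ in range(10):  # communication rounds
    # Each client trains locally, starting from the current global model.
    local_ws = [local_update(w_global, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    # Server aggregation: data-size-weighted average of the client models.
    w_global = np.average(local_ws, axis=0, weights=sizes)

print("recovered weights:", np.round(w_global, 3))
```

In this scheme only model parameters leave the devices while raw data stays local to each edge node, which is one reason federated learning dominates the edge-learning literature the survey maps.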
Related papers
- Deep Learning and Machine Learning -- Object Detection and Semantic Segmentation: From Theory to Applications [17.571124565519263]
The book covers state-of-the-art advancements in machine learning and deep learning.
Focuses on convolutional neural networks (CNNs), YOLO architectures, and transformer-based approaches like DETR.
The book also delves into the integration of artificial intelligence (AI) techniques and large language models for enhanced object detection.
arXiv Detail & Related papers (2024-10-21T02:10:49Z) - Exploring the landscape of large language models: Foundations, techniques, and challenges [8.042562891309414]
The article sheds light on the mechanics of in-context learning and a spectrum of fine-tuning approaches.
It explores how LLMs can be more closely aligned with human preferences through innovative reinforcement learning frameworks.
The ethical dimensions of LLM deployment are discussed, underscoring the need for mindful and responsible application.
arXiv Detail & Related papers (2024-04-18T08:01:20Z) - Machine Learning Insides OptVerse AI Solver: Design Principles and Applications [74.67495900436728]
We present a comprehensive study on the integration of machine learning (ML) techniques into Huawei Cloud's OptVerse AI solver.
We showcase our methods for generating complex SAT and MILP instances utilizing generative models that mirror the multifaceted structures of real-world problems.
We detail the incorporation of state-of-the-art parameter tuning algorithms which markedly elevate solver performance.
arXiv Detail & Related papers (2024-01-11T15:02:15Z) - When Meta-Learning Meets Online and Continual Learning: A Survey [39.53836535326121]
Meta-learning is a data-driven approach to optimizing the learning algorithm.
Continual learning and online learning both involve incrementally updating a model with streaming data.
This paper organizes various problem settings using consistent terminology and formal descriptions.
arXiv Detail & Related papers (2023-11-09T09:49:50Z) - PILOT: A Pre-Trained Model-Based Continual Learning Toolbox [71.63186089279218]
This paper introduces a pre-trained model-based continual learning toolbox known as PILOT.
On the one hand, PILOT implements some state-of-the-art class-incremental learning algorithms based on pre-trained models, such as L2P, DualPrompt, and CODA-Prompt.
On the other hand, PILOT fits typical class-incremental learning algorithms within the context of pre-trained models to evaluate their effectiveness.
arXiv Detail & Related papers (2023-09-13T17:55:11Z) - Model-Based Deep Learning: On the Intersection of Deep Learning and Optimization [101.32332941117271]
Decision-making algorithms are used in a multitude of different applications.
Deep learning approaches that use highly parametric architectures tuned from data without relying on mathematical models are becoming increasingly popular.
Model-based optimization and data-centric deep learning are often considered to be distinct disciplines.
arXiv Detail & Related papers (2022-05-05T13:40:08Z) - End-to-End Constrained Optimization Learning: A Survey [69.22203885491534]
The survey focuses on work that integrates solvers and optimization methods with machine learning architectures.
These approaches hold the promise of developing new hybrid machine learning and optimization methods to predict fast, approximate, structural solutions to problems and to enable logical inference.
arXiv Detail & Related papers (2021-03-30T14:19:30Z) - A Survey on Large-scale Machine Learning [67.6997613600942]
Machine learning can provide deep insights into data, allowing machines to make high-quality predictions.
Most sophisticated machine learning approaches suffer from huge time costs when operating on large-scale data.
Large-scale Machine Learning aims to efficiently learn patterns from big data with comparable performance.
arXiv Detail & Related papers (2020-08-10T06:07:52Z) - Incentive Mechanism Design for Resource Sharing in Collaborative Edge Learning [106.51930957941433]
In 5G and Beyond networks, Artificial Intelligence applications are expected to be increasingly ubiquitous.
This necessitates a paradigm shift from the current cloud-centric model-training approach to the edge-computing-based collaborative learning scheme known as edge learning.
arXiv Detail & Related papers (2020-05-31T12:45:06Z) - Meta-Learning in Neural Networks: A Survey [4.588028371034406]
This survey describes the contemporary meta-learning landscape.
We first discuss definitions of meta-learning and position it with respect to related fields.
We then propose a new taxonomy that provides a more comprehensive breakdown of the space of meta-learning methods.
arXiv Detail & Related papers (2020-04-11T16:34:24Z)