Online ML Self-adaptation in Face of Traps
- URL: http://arxiv.org/abs/2309.05805v1
- Date: Mon, 11 Sep 2023 20:17:11 GMT
- Title: Online ML Self-adaptation in Face of Traps
- Authors: Michal Töpfer, František Plášil, Tomáš Bureš, Petr Hnětynka,
Martin Kruliš, Danny Weyns
- Abstract summary: We discuss several traps that relate to the specification and online training of the ML-based estimators, their impact on self-adaptation, and the approach used to evaluate the estimators.
Our overview of these traps provides a list of lessons learned, which can serve as guidance for other researchers and practitioners when applying online ML for self-adaptation.
- Score: 5.8790300501137684
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Online machine learning (ML) is often used in self-adaptive systems to
strengthen the adaptation mechanism and improve the system utility. Despite
such benefits, applying online ML for self-adaptation can be challenging, and
not many papers report its limitations. Recently, we experimented with applying
online ML for self-adaptation in a smart farming scenario and faced several
unexpected difficulties -- traps -- that, to our knowledge, are not
discussed enough in the community. In this paper, we report our experience with
these traps. Specifically, we discuss several traps that relate to the
specification and online training of the ML-based estimators, their impact on
self-adaptation, and the approach used to evaluate the estimators. Our overview
of these traps provides a list of lessons learned, which can serve as guidance
for other researchers and practitioners when applying online ML for
self-adaptation.
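The abstract's central object, an online-trained ML estimator inside an adaptation loop, can be sketched generically. The following is only a minimal illustration of what "online training of an ML-based estimator" means, not the authors' smart-farming code; the class name, the learning rate, and the least-mean-squares update rule are assumptions made for the example:

```python
# Minimal online (incremental) linear estimator trained one sample at a
# time -- a generic sketch of the kind of ML-based estimator the abstract
# discusses, not the paper's actual smart-farming model.
class OnlineEstimator:
    def __init__(self, n_features, lr=0.01):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x)) + self.b

    def learn_one(self, x, y):
        """Single SGD step on the squared error (the LMS rule)."""
        err = self.predict(x) - y
        for i, xi in enumerate(x):
            self.w[i] -= self.lr * err * xi
        self.b -= self.lr * err
        return err

# The adaptation loop would call learn_one() on each new observation, so
# the estimator keeps tracking a changing environment without offline
# retraining. Here we feed it noiseless samples of y = 3x + 1.
est = OnlineEstimator(n_features=1, lr=0.1)
for _ in range(200):
    for x in ([0.0], [1.0], [2.0]):
        est.learn_one(x, 3.0 * x[0] + 1.0)
```

Many of the traps the paper discusses (specification, evaluation) live around exactly this kind of loop rather than inside the update rule itself.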
Related papers
- Unlearning with Control: Assessing Real-world Utility for Large Language Model Unlearning [97.2995389188179]
Recent research has begun to approach unlearning in large language models (LLMs) via gradient ascent (GA).
Despite their simplicity and efficiency, we suggest that GA-based methods face a propensity towards excessive unlearning.
We propose several controlling methods that can regulate the extent of excessive unlearning.
arXiv Detail & Related papers (2024-06-13T14:41:00Z)
- SWITCH: An Exemplar for Evaluating Self-Adaptive ML-Enabled Systems [1.2277343096128712]
In Machine Learning-Enabled Systems (MLS), managing runtime uncertainties is crucial for maintaining Quality of Service (QoS).
The Machine Learning Model Balancer is a concept that addresses these uncertainties by facilitating dynamic ML model switching.
This paper introduces SWITCH, an exemplar developed to enhance self-adaptive capabilities in such systems.
arXiv Detail & Related papers (2024-02-09T11:56:44Z)
- A Multivocal Literature Review on the Benefits and Limitations of Automated Machine Learning Tools [9.69672653683112]
We conducted a multivocal literature review, which allowed us to identify 54 sources from the academic literature and 108 sources from the grey literature reporting on AutoML benefits and limitations.
Concerning the benefits, we highlight that AutoML tools can help streamline the core steps of ML.
We highlight several limitations that may represent obstacles to the widespread adoption of AutoML.
arXiv Detail & Related papers (2024-01-21T01:39:39Z)
- Vulnerability of Machine Learning Approaches Applied in IoT-based Smart Grid: A Review [51.31851488650698]
Machine learning (ML) is increasingly used in the internet-of-things (IoT)-based smart grid.
Adversarial distortion injected into the power signal can greatly affect the system's normal control and operation.
It is imperative to conduct vulnerability assessment for MLsgAPPs applied in the context of safety-critical power systems.
arXiv Detail & Related papers (2023-08-30T03:29:26Z)
- Towards Self-Adaptive Machine Learning-Enabled Systems Through QoS-Aware Model Switching [1.2277343096128712]
We propose the concept of a Machine Learning Model Balancer, focusing on managing uncertainties related to ML models by using multiple models.
AdaMLS is a novel self-adaptation approach that leverages this concept and extends the traditional MAPE-K loop for continuous MLS adaptation.
Preliminary results suggest AdaMLS surpasses naive and single state-of-the-art models in QoS guarantees.
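The model-switching idea behind this line of work can be caricatured in a few lines. This is a hypothetical sketch of QoS-aware switching, not the AdaMLS implementation; the model names, expected latencies, and latency budget are invented for illustration:

```python
# Toy sketch of QoS-aware model switching: a supervisor monitors load and
# swaps the active ML model so that latency stays within budget, in the
# spirit of a MAPE-K Analyze/Plan step. All numbers are illustrative.
class ModelSwitcher:
    def __init__(self, models, latency_budget_ms):
        # models: (name, expected_latency_ms, expected_accuracy),
        # ordered from most accurate (slowest) to least accurate (fastest).
        self.models = models
        self.budget = latency_budget_ms
        self.active = models[0]

    def observe(self, load_factor):
        """Pick the most accurate model whose latency, scaled by the
        current load, still fits the latency budget."""
        for m in self.models:
            if m[1] * load_factor <= self.budget:
                self.active = m
                return m[0]
        self.active = self.models[-1]  # fall back to the fastest model
        return self.active[0]

sw = ModelSwitcher(
    [("model_large", 90, 0.67), ("model_mid", 45, 0.63), ("model_small", 10, 0.46)],
    latency_budget_ms=100,
)
```

Under light load (`observe(1.0)`) the switcher keeps the accurate model; under heavy load (`observe(3.0)`) it degrades gracefully to the fast one.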
arXiv Detail & Related papers (2023-08-19T09:33:51Z)
- Automatically Correcting Large Language Models: Surveying the landscape of diverse self-correction strategies [104.32199881187607]
Large language models (LLMs) have demonstrated remarkable performance across a wide array of NLP tasks.
A promising approach to rectify flaws in LLM output is self-correction, where the LLM itself is prompted or guided to fix problems in its own output.
This paper presents a comprehensive review of this emerging class of techniques.
arXiv Detail & Related papers (2023-08-06T18:38:52Z)
- Lifelong Self-Adaptation: Self-Adaptation Meets Lifelong Machine Learning [14.893661749381868]
We present lifelong self-adaptation: a novel approach to self-adaptive systems that use machine learning techniques with a lifelong ML layer.
The lifelong ML layer tracks the running system and its environment, associates this knowledge with the current tasks, identifies new tasks based on differentiations, and updates the learning models of the self-adaptive system accordingly.
We present a reusable architecture for lifelong self-adaptation and apply it to the case of concept drift caused by unforeseen changes of the input data of a learning model that is used for decision-making in self-adaptation.
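As a rough illustration of how such a layer might notice that a model's input data has drifted, here is a toy drift monitor; the window size and threshold ratio are illustrative assumptions, not the paper's architecture:

```python
from collections import deque

# Toy concept-drift monitor: track recent prediction error and flag a
# potential "new task" when the recent mean error drifts far above the
# long-run baseline. Window size and ratio are illustrative assumptions.
class DriftMonitor:
    def __init__(self, window=50, ratio=2.0):
        self.recent = deque(maxlen=window)
        self.baseline = None
        self.ratio = ratio

    def update(self, abs_error):
        """Feed one absolute prediction error; return True on drift."""
        self.recent.append(abs_error)
        mean_recent = sum(self.recent) / len(self.recent)
        if self.baseline is None:
            # Establish the baseline once the window first fills up.
            if len(self.recent) == self.recent.maxlen:
                self.baseline = mean_recent
            return False
        return mean_recent > self.ratio * self.baseline

m = DriftMonitor()
for _ in range(50):
    m.update(0.1)        # stable regime establishes the baseline
drifted = False
for _ in range(50):
    drifted = m.update(0.5)  # errors jump: the input distribution changed
```

A real lifelong ML layer would do much more (task identification, knowledge association, model updates), but the trigger often reduces to a monitor of this shape.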
arXiv Detail & Related papers (2022-04-04T20:35:55Z)
- Recursive Least-Squares Estimator-Aided Online Learning for Visual Tracking [58.14267480293575]
We propose a simple yet effective online learning approach for few-shot online adaptation without requiring offline training.
It allows an in-built memory retention mechanism for the model to remember the knowledge about the object seen before.
We evaluate our approach based on two networks in the online learning families for tracking, i.e., multi-layer perceptrons in RT-MDNet and convolutional neural networks in DiMP.
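The estimator named in the title, recursive least squares, has a compact textbook form. The sketch below shows the standard one-dimensional RLS update (model y = w·x), not the trackers' actual network-parameter update:

```python
# One-dimensional recursive least-squares (RLS): the classic online
# estimator, shown here in its textbook scalar form. The forgetting
# factor and initial covariance are conventional illustrative values.
class ScalarRLS:
    def __init__(self, forgetting=0.99, p0=1000.0):
        self.w = 0.0           # current weight estimate
        self.p = p0            # inverse input covariance (scalar)
        self.lam = forgetting  # < 1 discounts old samples

    def update(self, x, y):
        k = self.p * x / (self.lam + x * self.p * x)   # Kalman-style gain
        self.w += k * (y - self.w * x)                 # correct estimate
        self.p = (self.p - k * x * self.p) / self.lam  # update covariance
        return self.w

rls = ScalarRLS()
for x in (1.0, 2.0, 3.0, 4.0):
    rls.update(x, 2.5 * x)   # noiseless samples of y = 2.5 x
```

With a large initial covariance, RLS locks onto the true weight after only a handful of samples, which is why it suits few-shot online adaptation.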
arXiv Detail & Related papers (2021-12-28T06:51:18Z)
- Online Target Q-learning with Reverse Experience Replay: Efficiently finding the Optimal Policy for Linear MDPs [50.75812033462294]
We bridge the gap between the practical success of Q-learning and pessimistic theoretical results.
We present novel methods Q-Rex and Q-RexDaRe.
We show that Q-Rex efficiently finds the optimal policy for linear MDPs.
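The core trick of reverse experience replay, replaying an episode's transitions backwards so reward information propagates quickly toward earlier states, can be demonstrated on a toy tabular problem; the chain MDP and all hyperparameters below are illustrative, not taken from the paper:

```python
import random

# Tabular Q-learning with *reverse* experience replay on a toy 1-D chain
# MDP (states 0..4, reward only on reaching the right end). This only
# illustrates the replay-in-reverse idea; Q-Rex itself targets linear MDPs.
def reverse_replay_q_learning(n_states=5, episodes=200, alpha=0.5, gamma=0.9):
    q = {(s, a): 0.0 for s in range(n_states) for a in (-1, +1)}
    rng = random.Random(0)
    for _ in range(episodes):
        s, episode = 0, []
        while s != n_states - 1:                      # roll out one episode
            a = rng.choice((-1, +1))
            s2 = min(max(s + a, 0), n_states - 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            episode.append((s, a, r, s2))
            s = s2
        for (s0, a0, r, s2) in reversed(episode):     # replay in reverse so
            target = r + gamma * max(q[(s2, b)] for b in (-1, +1))
            q[(s0, a0)] += alpha * (target - q[(s0, a0)])  # value flows back
    return q

q = reverse_replay_q_learning()
```

After training, the greedy policy moves right from every non-terminal state, i.e. the terminal reward has been propagated all the way back to the start.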
arXiv Detail & Related papers (2021-10-16T01:47:41Z)
- Practical Machine Learning Safety: A Survey and Primer [81.73857913779534]
Open-world deployment of Machine Learning algorithms in safety-critical applications such as autonomous vehicles needs to address a variety of ML vulnerabilities.
It covers new models and training techniques that reduce generalization error, achieve domain adaptation, and detect outlier examples and adversarial attacks.
Our organization maps state-of-the-art ML techniques to safety strategies in order to enhance the dependability of the ML algorithm from different aspects.
arXiv Detail & Related papers (2021-06-09T05:56:42Z)
- Insights into Performance Fitness and Error Metrics for Machine Learning [1.827510863075184]
Machine learning (ML) is the field of training machines to achieve a high level of cognition and perform human-like analysis.
This paper examines a number of the most commonly-used performance fitness and error metrics for regression and classification algorithms.
arXiv Detail & Related papers (2020-05-17T22:59:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.