Lifelong Self-Adaptation: Self-Adaptation Meets Lifelong Machine Learning
- URL: http://arxiv.org/abs/2204.01834v1
- Date: Mon, 4 Apr 2022 20:35:55 GMT
- Title: Lifelong Self-Adaptation: Self-Adaptation Meets Lifelong Machine Learning
- Authors: Omid Gheibi, Danny Weyns
- Abstract summary: We present lifelong self-adaptation: a novel approach that enhances self-adaptive systems that use machine learning techniques with a lifelong ML layer.
The lifelong ML layer tracks the running system and its environment, associates this knowledge with the current tasks, identifies new tasks based on differentiations, and updates the learning models of the self-adaptive system accordingly.
We present a reusable architecture for lifelong self-adaptation and apply it to the case of concept drift caused by unforeseen changes of the input data of a learning model that is used for decision-making in self-adaptation.
- Score: 14.893661749381868
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the past years, machine learning (ML) has become a popular approach to
support self-adaptation. While ML techniques enable dealing with several
problems in self-adaptation, such as scalable decision-making, they are also
subject to inherent challenges. In this paper, we focus on one such challenge
that is particularly important for self-adaptation: ML techniques are designed
to deal with a set of predefined tasks associated with an operational domain;
they have problems to deal with new emerging tasks, such as concept shift in
input data that is used for learning. To tackle this challenge, we present
lifelong self-adaptation: a novel approach to self-adaptation that
enhances self-adaptive systems that use ML techniques with a lifelong ML layer.
The lifelong ML layer tracks the running system and its environment, associates
this knowledge with the current tasks, identifies new tasks based on
differentiations, and updates the learning models of the self-adaptive system
accordingly. We present a reusable architecture for lifelong self-adaptation
and apply it to the case of concept drift caused by unforeseen changes of the
input data of a learning model that is used for decision-making in
self-adaptation. We validate lifelong self-adaptation for two types of concept
drift using two cases.
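The lifelong ML layer described in the abstract follows a track/identify/update cycle. A minimal sketch of that idea, using a simple statistics-based drift check to stand in for new-task identification (class and method names are illustrative, not taken from the paper):

```python
import numpy as np

class LifelongMLLayer:
    """Hypothetical sketch of a lifelong ML layer: it tracks the running
    system's input data, identifies a new task (here: concept drift in the
    input distribution), and updates the learning model accordingly."""

    def __init__(self, model, drift_threshold=0.15):
        self.model = model
        self.drift_threshold = drift_threshold
        self.reference = None  # feature statistics of the current known task

    def track(self, batch):
        """Associate incoming data with the current task; return True when
        the differentiation from the reference indicates a new task."""
        stats = batch.mean(axis=0)
        if self.reference is None:
            self.reference = stats  # first batch establishes the reference
            return False
        drift = np.abs(stats - self.reference).max()
        return drift > self.drift_threshold

    def adapt(self, batch, labels):
        """Update the learning model of the self-adaptive system and reset
        the reference statistics to the newly identified task."""
        self.model.fit(batch, labels)
        self.reference = batch.mean(axis=0)
```

In a real system the drift check would be a proper detector and `adapt` would retrain or fine-tune the decision-making model; the loop structure is the point here.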
Related papers
- Self-Healing Machine Learning: A Framework for Autonomous Adaptation in Real-World Environments [50.310636905746975]
Real-world machine learning systems often encounter model performance degradation due to distributional shifts in the underlying data generating process.
Existing approaches to addressing shifts, such as concept drift adaptation, are limited by their reason-agnostic nature.
We propose self-healing machine learning (SHML) to overcome these limitations.
arXiv Detail & Related papers (2024-10-31T20:05:51Z)
- Auto-selected Knowledge Adapters for Lifelong Person Re-identification [54.42307214981537]
Lifelong Person Re-Identification requires systems to continually learn from non-overlapping datasets across different times and locations.
Existing approaches, either rehearsal-free or rehearsal-based, still suffer from the problem of catastrophic forgetting.
We introduce AdalReID, a novel framework that adopts knowledge adapters and a parameter-free auto-selection mechanism for lifelong learning.
arXiv Detail & Related papers (2024-05-29T11:42:02Z)
- Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization [51.34904967046097]
Continual learning seeks to overcome the challenge of catastrophic forgetting, where a model forgets previously learnt information.
We introduce BAdam, a novel prior-based method that better constrains parameter growth, reducing catastrophic forgetting.
Results show that BAdam achieves state-of-the-art performance for prior-based methods on challenging single-headed class-incremental experiments.
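Prior-based continual-learning methods of this kind typically anchor parameters to the values learned on earlier tasks via an importance-weighted quadratic penalty. A minimal sketch of that general idea (not BAdam itself; the penalty form and weighting here are illustrative):

```python
import numpy as np

def prior_regularized_loss(task_loss, params, prior_mean, precision, lam=1.0):
    """Task loss plus a quadratic penalty anchoring each parameter to the
    value it had after earlier tasks (prior_mean), weighted by how important
    that parameter was (precision). Constraining parameter growth this way
    is what reduces catastrophic forgetting."""
    penalty = 0.5 * lam * np.sum(precision * (params - prior_mean) ** 2)
    return task_loss + penalty
```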
arXiv Detail & Related papers (2023-09-15T17:10:51Z)
- Online ML Self-adaptation in Face of Traps [5.8790300501137684]
We discuss several traps that relate to the specification and online training of the ML-based estimators, their impact on self-adaptation, and the approach used to evaluate the estimators.
Our overview of these traps provides a list of lessons learned, which can serve as guidance for other researchers and practitioners when applying online ML for self-adaptation.
arXiv Detail & Related papers (2023-09-11T20:17:11Z)
- Towards Self-Adaptive Machine Learning-Enabled Systems Through QoS-Aware Model Switching [1.2277343096128712]
We propose the concept of a Machine Learning Model Balancer, focusing on managing uncertainties related to ML models by using multiple models.
AdaMLS is a novel self-adaptation approach that leverages this concept and extends the traditional MAPE-K loop for continuous MLS adaptation.
Preliminary results suggest that AdaMLS surpasses both naive approaches and single state-of-the-art models in the guarantees it provides.
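The model-switching idea behind this kind of MAPE-K extension can be sketched in a few lines: monitor the QoS of the active model, and when it violates the target, plan a switch to the best known alternative (a hypothetical sketch; names and the selection rule are illustrative, not taken from AdaMLS):

```python
def mape_k_switch(models, monitor_qos, qos_target):
    """One iteration of a MAPE-K-style loop that manages ML-model
    uncertainty by switching between multiple models.

    models: dict mapping model name -> (model, last_known_qos)
    monitor_qos: callable returning the current QoS of the active model
    Returns the name of the model to switch to, or None if no switch is needed.
    """
    # Monitor: observe the QoS delivered by the currently active model.
    current = monitor_qos()
    # Analyze: does the observed QoS still meet the target?
    if current >= qos_target:
        return None  # no Plan/Execute needed
    # Plan: pick the candidate model with the best known QoS.
    best = max(models, key=lambda name: models[name][1])
    # Execute: the caller swaps in the selected model.
    return best
```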
arXiv Detail & Related papers (2023-08-19T09:33:51Z)
- Reducing Large Adaptation Spaces in Self-Adaptive Systems Using Machine Learning [10.444983001376874]
We present ML2ASR+, short for Machine Learning to Adaptation Space Reduction Plus.
We evaluate ML2ASR+ for two applications with different sizes of adaptation spaces: an Internet-of-Things application and a service-based system.
The results demonstrate that ML2ASR+ can deal with different types of goals and reduces the adaptation space, and hence the time to make adaptation decisions, by over 90%, with negligible effect on the realization of the adaptation goals.
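Adaptation space reduction of this kind boils down to a learned filter over adaptation options: a model predicts which options are likely to satisfy the goals, and only those are passed on to (expensive) verification. A minimal sketch of that filtering step (illustrative only; not the ML2ASR+ algorithm itself):

```python
def reduce_adaptation_space(options, classifier, keep_prob=0.5):
    """Sketch of ML-driven adaptation space reduction: a learned classifier
    predicts, per adaptation option, the probability that it satisfies the
    adaptation goals; only promising options are analyzed further, which is
    where the decision-time savings come from."""
    return [opt for opt in options if classifier(opt) >= keep_prob]
```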
arXiv Detail & Related papers (2023-06-02T09:49:33Z)
- Incremental procedural and sensorimotor learning in cognitive humanoid robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z)
- Dealing with Drift of Adaptation Spaces in Learning-based Self-Adaptive Systems using Lifelong Self-Adaptation [10.852698169509006]
We focus on a particularly important challenge for learning-based self-adaptive systems: drift in adaptation spaces.
Drift of adaptation spaces originates from uncertainties, affecting the quality properties of the adaptation options.
We present a novel approach to self-adaptation that enhances learning-based self-adaptive systems with a lifelong ML layer.
arXiv Detail & Related papers (2022-11-04T07:45:48Z)
- Fully Online Meta-Learning Without Task Boundaries [80.09124768759564]
We study how meta-learning can be applied to tackle online problems of this nature.
We propose a Fully Online Meta-Learning (FOML) algorithm, which does not require any ground truth knowledge about the task boundaries.
Our experiments show that FOML was able to learn new tasks faster than the state-of-the-art online learning methods.
arXiv Detail & Related papers (2022-02-01T07:51:24Z)
- Self-directed Machine Learning [86.3709575146414]
In education science, self-directed learning has been shown to be more effective than passive teacher-guided learning.
We introduce the principal concept of Self-directed Machine Learning (SDML) and propose a framework for SDML.
Our proposed SDML process benefits from self task selection, self data selection, self model selection, self optimization strategy selection and self evaluation metric selection.
arXiv Detail & Related papers (2022-01-04T18:32:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.