Deep Learning for Effective and Efficient Reduction of Large Adaptation
Spaces in Self-Adaptive Systems
- URL: http://arxiv.org/abs/2204.06254v1
- Date: Wed, 13 Apr 2022 08:51:06 GMT
- Title: Deep Learning for Effective and Efficient Reduction of Large Adaptation
Spaces in Self-Adaptive Systems
- Authors: Danny Weyns and Omid Gheibi and Federico Quin and Jeroen Van Der
Donckt
- Abstract summary: We present 'Deep Learning for Adaptation Space Reduction Plus' -- DLASeR+ in short.
DLASeR+ offers an extendable learning framework for online adaptation space reduction.
It supports three common types of adaptation goals: threshold, optimization, and set-point goals.
Results show that DLASeR+ is effective, with a negligible effect on the realization of the adaptation goals.
- Score: 12.341380735802568
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many software systems today face uncertain operating conditions, such as
sudden changes in the availability of resources or unexpected user behavior.
Without proper mitigation, these uncertainties can jeopardize the system goals.
Self-adaptation is a common approach to tackle such uncertainties. When the
system goals may be compromised, the self-adaptive system has to select the
best adaptation option to reconfigure by analyzing the possible adaptation
options, i.e., the adaptation space. Yet, analyzing large adaptation spaces
using rigorous methods can be resource- and time-consuming, or even
infeasible. One approach to tackle this problem is to use online machine
learning to reduce adaptation spaces. However, existing approaches require
domain expertise to perform feature engineering to define the learner, and
support online adaptation space reduction only for specific goals. To tackle
these limitations, we present 'Deep Learning for Adaptation Space Reduction
Plus' -- DLASeR+ in short. DLASeR+ offers an extendable learning framework for
online adaptation space reduction that does not require feature engineering,
while supporting three common types of adaptation goals: threshold,
optimization, and set-point goals. We evaluate DLASeR+ on two instances of an
Internet-of-Things application with increasing sizes of adaptation spaces for
different combinations of adaptation goals. We compare DLASeR+ with a baseline
that applies exhaustive analysis and two state-of-the-art approaches for
adaptation space reduction that rely on learning. Results show that DLASeR+ is
effective, with a negligible effect on the realization of the adaptation goals
compared to an exhaustive analysis approach, and supports three common types of
adaptation goals, going beyond the state-of-the-art approaches.
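To make the core idea of online adaptation space reduction concrete, the sketch below shows one possible (hypothetical) realization: a small neural classifier filters out adaptation options that are unlikely to satisfy a threshold goal, a small neural regressor ranks the remaining options for an optimization goal, and both models are updated online from the verified analysis results of each adaptation cycle. The class AdaptationSpaceReducer and its methods are illustrative assumptions, not the DLASeR+ architecture or API; scikit-learn's multilayer perceptrons are used here only as a compact stand-in for the paper's deep learning models.
```python
# Illustrative sketch only: a hypothetical online adaptation-space reducer in the
# spirit of DLASeR+. Names and architecture are assumptions, not the paper's artifact.
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor


class AdaptationSpaceReducer:
    def __init__(self):
        # Classifier: does an option likely satisfy the threshold goal
        # (e.g., packet loss below a bound)?
        self.threshold_clf = MLPClassifier(hidden_layer_sizes=(32, 32))
        # Regressor: predicted quality for the optimization goal
        # (e.g., energy consumption, lower is better).
        self.quality_reg = MLPRegressor(hidden_layer_sizes=(32, 32))
        self._trained = False

    def reduce(self, options, keep=10):
        """Return indices of the most promising adaptation options."""
        X = np.asarray(options, dtype=float)
        if not self._trained:
            return list(range(len(X)))          # cold start: analyze everything
        likely_valid = np.flatnonzero(self.threshold_clf.predict(X) == 1)
        if likely_valid.size == 0:              # fall back to the full space
            likely_valid = np.arange(len(X))
        # Rank the remaining candidates by predicted quality (ascending).
        order = likely_valid[np.argsort(self.quality_reg.predict(X[likely_valid]))]
        return order[:keep].tolist()

    def update(self, analyzed_options, satisfied_threshold, qualities):
        """Online update with the verified results of the analyzed subset."""
        X = np.asarray(analyzed_options, dtype=float)
        self.threshold_clf.partial_fit(X, np.asarray(satisfied_threshold, dtype=int),
                                       classes=[0, 1])
        self.quality_reg.partial_fit(X, np.asarray(qualities, dtype=float))
        self._trained = True
```
In each adaptation cycle, reduce would be called before the rigorous analysis and update afterwards with the verified qualities of the analyzed subset, so the learner improves online without any manual feature engineering of the adaptation options.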
Related papers
- Self-Healing Machine Learning: A Framework for Autonomous Adaptation in Real-World Environments [50.310636905746975]
Real-world machine learning systems often encounter model performance degradation due to distributional shifts in the underlying data generating process.
Existing approaches to addressing shifts, such as concept drift adaptation, are limited by their reason-agnostic nature.
We propose self-healing machine learning (SHML) to overcome these limitations.
arXiv Detail & Related papers (2024-10-31T20:05:51Z) - Can Learned Optimization Make Reinforcement Learning Less Difficult? [70.5036361852812]
We consider whether learned optimization can help overcome reinforcement learning difficulties.
Our method, Learned Optimization for Plasticity, Exploration and Non-stationarity (OPEN), meta-learns an update rule whose input features and output structure are informed by previously proposed solutions to these difficulties.
arXiv Detail & Related papers (2024-07-09T17:55:23Z) - Reimagining Self-Adaptation in the Age of Large Language Models [0.9999629695552195]
This paper presents a vision for using Generative AI (GenAI) to enhance the effectiveness and efficiency of architectural adaptation.
Drawing parallels with human operators, we propose that Large Language Models (LLMs) can autonomously generate context-sensitive adaptation strategies.
Our findings suggest that GenAI has significant potential to improve software systems' dynamic adaptability and resilience.
arXiv Detail & Related papers (2024-04-15T15:30:12Z) - Taxonomy Adaptive Cross-Domain Adaptation in Medical Imaging via
Optimization Trajectory Distillation [73.83178465971552]
The success of automated medical image analysis depends on large-scale and expert-annotated training sets.
Unsupervised domain adaptation (UDA) has emerged as a promising approach to alleviate the burden of labeled data collection.
We propose optimization trajectory distillation, a unified approach to address the two technical challenges from a new perspective.
arXiv Detail & Related papers (2023-07-27T08:58:05Z) - Reducing Large Adaptation Spaces in Self-Adaptive Systems Using Machine
Learning [10.444983001376874]
We present ML2ASR+, short for Machine Learning to Adaptation Space Reduction Plus.
We evaluate ML2ASR+ for two applications with different sizes of adaptation spaces: an Internet-of-Things application and a service-based system.
The results demonstrate that ML2ASR+ can be applied to deal with different types of goals and is able to reduce the adaptation space, and hence the time to make adaptation decisions, by over 90%, with negligible effect on the realization of the adaptation goals.
arXiv Detail & Related papers (2023-06-02T09:49:33Z) - Using Genetic Programming to Build Self-Adaptivity into Software-Defined
Networks [21.081978372435184]
Self-adaptation solutions need to periodically monitor, reason about, and adapt a running system.
We propose a self-adaptation solution that continuously learns and updates the control constructs in the data-forwarding logic of a software-defined network.
arXiv Detail & Related papers (2023-06-01T03:30:33Z) - Dealing with Drift of Adaptation Spaces in Learning-based Self-Adaptive
Systems using Lifelong Self-Adaptation [10.852698169509006]
We focus on a particularly important challenge for learning-based self-adaptive systems: drift in adaptation spaces.
Drift of adaptation spaces originates from uncertainties, affecting the quality properties of the adaptation options.
We present a novel approach to self-adaptation that enhances learning-based self-adaptive systems with a lifelong ML layer.
arXiv Detail & Related papers (2022-11-04T07:45:48Z) - Lifelong Unsupervised Domain Adaptive Person Re-identification with
Coordinated Anti-forgetting and Adaptation [127.6168183074427]
We propose a new task, Lifelong Unsupervised Domain Adaptive (LUDA) person ReID.
This is challenging because it requires the model to continuously adapt to unlabeled data of the target environments.
We design an effective scheme for this task, dubbed CLUDA-ReID, where the anti-forgetting is harmoniously coordinated with the adaptation.
arXiv Detail & Related papers (2021-12-13T13:19:45Z) - Towards Self-Adaptive Metric Learning On the Fly [16.61982837441342]
We aim to address the open challenge of "Online Adaptive Metric Learning" (OAML) for learning adaptive metric functions on the fly.
Unlike traditional online metric learning methods, OAML is significantly more challenging since the learned metric could be non-linear and the model has to be self-adaptive.
We present a new online metric learning framework that attempts to tackle the challenge by learning an ANN-based metric with adaptive model complexity from a stream of constraints.
arXiv Detail & Related papers (2021-04-03T23:11:52Z) - Adapting User Interfaces with Model-based Reinforcement Learning [47.469980921522115]
Adapting an interface requires taking into account both the positive and negative effects that changes may have on the user.
We propose a novel approach for adaptive user interfaces that yields a conservative adaptation policy.
arXiv Detail & Related papers (2021-03-11T17:24:34Z) - Optimizing Wireless Systems Using Unsupervised and
Reinforced-Unsupervised Deep Learning [96.01176486957226]
Resource allocation and transceivers in wireless networks are usually designed by solving optimization problems.
In this article, we introduce unsupervised and reinforced-unsupervised learning frameworks for solving both variable and functional optimization problems.
arXiv Detail & Related papers (2020-01-03T11:01:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.