Adaptation to Unknown Situations as the Holy Grail of Learning-Based
Self-Adaptive Systems: Research Directions
- URL: http://arxiv.org/abs/2103.06908v1
- Date: Thu, 11 Mar 2021 19:07:02 GMT
- Title: Adaptation to Unknown Situations as the Holy Grail of Learning-Based
Self-Adaptive Systems: Research Directions
- Authors: Ivana Dusparic, Nicolas Cardozo
- Abstract summary: We argue that adapting to unknown situations is the ultimate challenge for self-adaptive systems.
We close by discussing whether, even when we can, we should indeed build systems that define their own behaviour and adapt their goals without involving a human supervisor.
- Score: 3.7311680121118345
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Self-adaptive systems continuously adapt to changes in their execution
environment. Capturing all possible changes to define suitable behaviour
beforehand is infeasible, or even impossible in the case of unknown changes,
hence human intervention may be required. We argue that adapting to unknown
situations is the ultimate challenge for self-adaptive systems. Learning-based
approaches are used to learn the suitable behaviour to exhibit in the case of
unknown situations, to minimize or fully remove human intervention. While such
approaches can, to a certain extent, generalize existing adaptations to new
situations, there are a number of breakthroughs that need to be achieved before
systems can adapt to general unknown and unforeseen situations. We posit the
research directions that need to be explored to achieve unanticipated
adaptation from the perspective of learning-based self-adaptive systems. At
minimum, systems need to define internal representations of previously unseen
situations on-the-fly, extrapolate the relationship to the previously
encountered situations to evolve existing adaptations, and reason about the
feasibility of achieving their intrinsic goals in the new set of conditions. We
close by discussing whether, even when we can, we should indeed build systems that
define their own behaviour and adapt their goals, without involving a human
supervisor.
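To make the three capabilities posited above concrete, the sketch below shows one way a learning-based self-adaptive loop could (1) build an internal representation of a previously unseen situation on the fly, (2) extrapolate from previously encountered situations to reuse or evolve an existing adaptation, and (3) reason about whether its intrinsic goals remain feasible before acting or escalating to a human supervisor. It is illustrative only; all class and method names are hypothetical and are not taken from the paper.

```python
# Illustrative sketch only: hypothetical names, not an implementation from the paper.
from dataclasses import dataclass, field

import numpy as np


@dataclass
class Situation:
    """Internal representation of an execution context as a feature vector."""
    features: np.ndarray


@dataclass
class SelfAdaptiveSystem:
    known: list = field(default_factory=list)  # (Situation, adaptation) pairs seen so far
    goal_threshold: float = 0.5                # minimum acceptable expected utility

    def represent(self, raw_observation) -> Situation:
        # (1) Define an internal representation of the unseen situation on the fly.
        return Situation(features=np.asarray(raw_observation, dtype=float))

    def extrapolate(self, new: Situation):
        # (2) Relate the new situation to previously encountered ones and reuse
        #     (in a real system: evolve) the closest existing adaptation.
        if not self.known:
            return None
        distances = [np.linalg.norm(new.features - s.features) for s, _ in self.known]
        _, nearest_adaptation = self.known[int(np.argmin(distances))]
        return nearest_adaptation

    def feasible(self, new: Situation, adaptation) -> bool:
        # (3) Reason about whether the intrinsic goals are still achievable under
        #     the new conditions (here: a toy expected-utility estimate).
        if adaptation is None:
            return False
        expected_utility = 1.0 / (1.0 + float(np.sum(np.abs(new.features))))
        return expected_utility >= self.goal_threshold

    def adapt(self, raw_observation):
        situation = self.represent(raw_observation)
        candidate = self.extrapolate(situation)
        if self.feasible(situation, candidate):
            return candidate            # adapt autonomously
        return "escalate_to_human"      # the open question: keep a human supervisor in the loop?
```

The distance-based extrapolation and the fixed utility threshold are stand-ins; the paper's point is precisely that current learning techniques only approximate these steps for situations close to those already encountered.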
Related papers
- Metacognition for Unknown Situations and Environments (MUSE) [3.2020845462590697]
We propose the Metacognition for Unknown Situations and Environments (MUSE) framework.
MUSE integrates metacognitive processes, specifically self-awareness and self-regulation, into autonomous agents.
Agents show significant improvements in self-awareness and self-regulation.
arXiv Detail & Related papers (2024-11-20T18:41:03Z)
- Self-Healing Machine Learning: A Framework for Autonomous Adaptation in Real-World Environments [50.310636905746975]
Real-world machine learning systems often encounter model performance degradation due to distributional shifts in the underlying data generating process.
Existing approaches to addressing shifts, such as concept drift adaptation, are limited by their reason-agnostic nature.
We propose self-healing machine learning (SHML) to overcome these limitations.
arXiv Detail & Related papers (2024-10-31T20:05:51Z)
- Adapt On-the-Go: Behavior Modulation for Single-Life Robot Deployment [92.48012013825988]
We study the problem of adapting on-the-fly to novel scenarios during deployment.
Our approach, RObust Autonomous Modulation (ROAM), introduces a mechanism that selects among pre-trained behaviors based on their perceived value in the current situation.
We demonstrate that ROAM enables a robot to adapt rapidly to changes in dynamics, both in simulation and on a real Go1 quadruped.
arXiv Detail & Related papers (2023-11-02T08:22:28Z)
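The ROAM entry above centres on choosing among pre-trained behaviors by their perceived value in the current state. The toy sketch below illustrates that selection idea only; the function and behavior names are hypothetical and this is not the ROAM implementation.

```python
# Toy sketch of value-based behavior selection (hypothetical names; not the ROAM code).
import numpy as np


def select_behavior(state, behaviors, temperature=0.1):
    """Pick a pre-trained behavior according to its perceived value in `state`.

    `behaviors` maps a name to a (policy_fn, value_fn) pair; each value_fn
    scores the current state under that behavior. A softmax over the scores
    keeps the choice robust to small value-estimation errors.
    """
    names = list(behaviors)
    values = np.array([behaviors[name][1](state) for name in names])
    probs = np.exp((values - values.max()) / temperature)
    probs /= probs.sum()
    chosen = str(np.random.choice(names, p=probs))
    policy_fn, _ = behaviors[chosen]
    return chosen, policy_fn(state)


# Example with two dummy behaviors over a 1-D state:
behaviors = {
    "walk":    (lambda s: "gait_A", lambda s: -abs(s[0])),        # prefers states near 0
    "recover": (lambda s: "gait_B", lambda s: abs(s[0]) - 1.0),   # prefers large deviations
}
print(select_behavior(np.array([2.0]), behaviors))
```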
- "One-Size-Fits-All"? Examining Expectations around What Constitute "Fair" or "Good" NLG System Behaviors [57.63649797577999]
We conduct case studies in which we perturb different types of identity-related language features (names, roles, locations, dialect, and style) in NLG system inputs.
We find that motivations for adaptation include social norms, cultural differences, feature-specific information, and accommodation.
In contrast, motivations for invariance include perspectives that favor prescriptivism, view adaptation as unnecessary or too difficult for NLG systems to do appropriately, and are wary of false assumptions.
arXiv Detail & Related papers (2023-10-23T23:00:34Z)
- From Self-Adaptation to Self-Evolution Leveraging the Operational Design Domain [15.705888799637506]
Self-adaptation has been shown to be a viable approach to dealing with changing conditions.
The capabilities of a self-adaptive system are constrained by its operational design domain (ODD).
We provide a definition for ODD and apply it to a self-adaptive system.
arXiv Detail & Related papers (2023-03-27T14:49:07Z)
- A Domain-Agnostic Approach for Characterization of Lifelong Learning Systems [128.63953314853327]
"Lifelong Learning" systems are capable of 1) Continuous Learning, 2) Transfer and Adaptation, and 3) Scalability.
We show that this suite of metrics can inform the development of varied and complex Lifelong Learning systems.
arXiv Detail & Related papers (2023-01-18T21:58:54Z)
- Dealing with Drift of Adaptation Spaces in Learning-based Self-Adaptive Systems using Lifelong Self-Adaptation [10.852698169509006]
We focus on a particularly important challenge for learning-based self-adaptive systems: drift in adaptation spaces.
Drift of adaptation spaces originates from uncertainties, affecting the quality properties of the adaptation options.
We present a novel approach to self-adaptation that enhances learning-based self-adaptive systems with a lifelong ML layer.
arXiv Detail & Related papers (2022-11-04T07:45:48Z)
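The drift entry above hinges on noticing that the quality properties of adaptation options have shifted. The sketch below is a minimal, hypothetical monitor for that kind of drift; it is not the lifelong ML layer the paper proposes, only an illustration of the triggering condition.

```python
# Minimal drift-monitoring sketch (hypothetical names; not the paper's lifelong-ML layer).
from collections import deque


class AdaptationSpaceMonitor:
    """Watch the quality property of each adaptation option and flag drift."""

    def __init__(self, window=50, tolerance=0.15):
        self.window = window
        self.tolerance = tolerance          # relative change that counts as drift
        self.history = {}                   # option id -> recent quality observations
        self.baseline = {}                  # option id -> quality mean learned so far

    def observe(self, option_id, quality):
        """Record one observation; return True if this option's quality has drifted."""
        buf = self.history.setdefault(option_id, deque(maxlen=self.window))
        buf.append(quality)
        if option_id not in self.baseline:
            if len(buf) == self.window:     # enough data: fix the baseline
                self.baseline[option_id] = sum(buf) / len(buf)
            return False
        recent_mean = sum(buf) / len(buf)
        base = self.baseline[option_id]
        # A lifelong-learning layer would react to this by triggering re-learning.
        return abs(recent_mean - base) > self.tolerance * max(abs(base), 1e-9)


monitor = AdaptationSpaceMonitor(window=5, tolerance=0.2)
for latency in [10, 11, 10, 9, 10, 14, 15, 16, 15, 14]:   # latency of one option over time
    if monitor.observe("option-A", latency):
        print("drift detected at latency", latency)
```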
- Deep Learning for Effective and Efficient Reduction of Large Adaptation Spaces in Self-Adaptive Systems [12.341380735802568]
We present 'Deep Learning for Adaptation Space Reduction Plus', DLASeR+ for short.
DLASeR+ offers an extendable learning framework for online adaptation space reduction.
It supports three common types of adaptation goals: threshold, optimization, and set-point goals.
Results show that DLASeR+ is effective, with a negligible effect on the realization of the adaptation goals.
arXiv Detail & Related papers (2022-04-13T08:51:06Z)
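The DLASeR+ entry above mentions three goal types (threshold, optimization, and set-point) used while reducing a large adaptation space. The sketch below is only a rough illustration of how such goal types can prune options; the quality names and limits are invented, and DLASeR+ itself uses deep learning to predict the quality properties rather than the hand-written checks shown here.

```python
# Toy illustration of threshold, set-point, and optimization goals pruning an
# adaptation space; hypothetical names, not the DLASeR+ pipeline.

def reduce_adaptation_space(options, predict_quality):
    """Keep only options whose predicted qualities satisfy all goals.

    `predict_quality(option)` returns a dict such as
    {"packet_loss": 0.08, "energy": 13.2, "latency": 5.1}.
    """
    threshold_goal = lambda q: q["packet_loss"] <= 0.10        # stay below a threshold
    setpoint_goal = lambda q: abs(q["latency"] - 5.0) <= 1.0   # stay close to a set-point
    candidates = [o for o in options
                  if threshold_goal(predict_quality(o)) and setpoint_goal(predict_quality(o))]
    # Optimization goal: among the remaining candidates, prefer minimal energy.
    return sorted(candidates, key=lambda o: predict_quality(o)["energy"])


# Example with a made-up quality predictor over three configuration options:
qualities = {
    "cfg-1": {"packet_loss": 0.05, "energy": 12.0, "latency": 5.2},
    "cfg-2": {"packet_loss": 0.20, "energy": 9.0, "latency": 4.9},   # violates the threshold goal
    "cfg-3": {"packet_loss": 0.08, "energy": 10.5, "latency": 5.8},
}
print(reduce_adaptation_space(list(qualities), qualities.get))   # ['cfg-3', 'cfg-1']
```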
- One Solution is Not All You Need: Few-Shot Extrapolation via Structured MaxEnt RL [142.36621929739707]
We show that learning diverse behaviors for accomplishing a task can lead to behavior that generalizes to varying environments.
By identifying multiple solutions for the task in a single environment during training, our approach can generalize to new situations.
arXiv Detail & Related papers (2020-10-27T17:41:57Z)
- Never Stop Learning: The Effectiveness of Fine-Tuning in Robotic Reinforcement Learning [109.77163932886413]
We show how to adapt vision-based robotic manipulation policies to new variations by fine-tuning via off-policy reinforcement learning.
This adaptation uses less than 0.2% of the data necessary to learn the task from scratch.
We find that our approach of adapting pre-trained policies leads to substantial performance gains over the course of fine-tuning.
arXiv Detail & Related papers (2020-04-21T17:57:04Z)
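The fine-tuning entry above boils down to reusing pre-trained parameters and continuing off-policy updates on data from the new variation instead of learning from scratch. The sketch below shows that idea in its most stripped-down, tabular form; it is a hypothetical toy, far from the paper's vision-based manipulation policies.

```python
# Toy sketch of adaptation by fine-tuning with off-policy (Q-learning) updates;
# tabular and hypothetical, nothing like the paper's vision-based setup.
import numpy as np

n_states, n_actions = 10, 4
rng = np.random.default_rng(0)

q_pretrained = rng.normal(size=(n_states, n_actions))   # stands in for a pre-trained policy
q = q_pretrained.copy()                                  # fine-tune a copy, do not restart from zeros


def env_step(state, action):
    """Hypothetical environment variant: a small deterministic transition and reward."""
    next_state = (state + action) % n_states
    reward = 1.0 if next_state == 0 else 0.0
    return next_state, reward


alpha, gamma, epsilon = 0.1, 0.95, 0.2
state = 0
for _ in range(500):                                     # far less data than training from scratch
    action = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(q[state]))
    next_state, reward = env_step(state, action)
    # Off-policy TD update toward the greedy target.
    q[state, action] += alpha * (reward + gamma * q[next_state].max() - q[state, action])
    state = next_state
```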
- Neuro-evolutionary Frameworks for Generalized Learning Agents [1.2691047660244335]
Recent successes of deep learning and deep reinforcement learning have firmly established their status as state-of-the-art artificial learning techniques.
Longstanding drawbacks of these approaches point to a need for re-thinking the way such systems are designed and deployed.
We discuss the anticipated improvements from such neuro-evolutionary frameworks, along with the associated challenges.
arXiv Detail & Related papers (2020-02-04T02:11:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.