A Framework for Characterizing Novel Environment Transformations in
General Environments
- URL: http://arxiv.org/abs/2305.04315v1
- Date: Sun, 7 May 2023 15:53:07 GMT
- Title: A Framework for Characterizing Novel Environment Transformations in
General Environments
- Authors: Matthew Molineaux, Dustin Dannenhauer, Eric Kildebeck
- Abstract summary: We introduce a formal and theoretical framework for defining and categorizing environment transformations.
We present a new language for describing domains, scenario generators, and transformations.
We offer the first formal and computational set of tests for eight categories of environment transformations.
- Score: 3.0938904602244346
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To be robust to surprising developments, an intelligent agent must be able to
respond to many different types of unexpected change in the world. To date,
there are no general frameworks for defining and characterizing the types of
environment changes that are possible. We introduce a formal and theoretical
framework for defining and categorizing environment transformations, changes to
the world an agent inhabits. We introduce two types of environment
transformation: R-transformations which modify environment dynamics and
T-transformations which modify the generation process that produces scenarios.
We present a new language for describing domains, scenario generators, and
transformations, called the Transformation and Simulator Abstraction Language
(T-SAL), and a logical formalism that rigorously defines these concepts. Then,
we offer the first formal and computational set of tests for eight categories
of environment transformations. This domain-independent framework paves the way
for describing unambiguous classes of novelty, constrained and
domain-independent random generation of environment transformations,
replication of environment transformation studies, and fair evaluation of agent
robustness.
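To make the distinction between the two transformation types concrete, here is a minimal, hypothetical Python sketch. It is not T-SAL and not the paper's formalism; it simply models an environment as a pair of a transition function and a scenario generator, so that an R-transformation swaps the dynamics and a T-transformation swaps the generator. All names below are illustrative assumptions.
```python
# Illustrative toy sketch only: separates environment dynamics from scenario
# generation so the two transformation types act on different components.
from dataclasses import dataclass, replace
from typing import Any, Callable
import random

State = Any
Action = Any

@dataclass(frozen=True)
class Environment:
    # Dynamics: maps (state, action) to the next state.
    transition: Callable[[State, Action], State]
    # Scenario generator: samples the initial state/configuration of an episode.
    generate_scenario: Callable[[random.Random], State]

def r_transform(env: Environment,
                new_transition: Callable[[State, Action], State]) -> Environment:
    """R-transformation: change the dynamics, keep the scenario generator."""
    return replace(env, transition=new_transition)

def t_transform(env: Environment,
                new_generator: Callable[[random.Random], State]) -> Environment:
    """T-transformation: change the scenario generator, keep the dynamics."""
    return replace(env, generate_scenario=new_generator)

# Toy 1-D world: an action moves the agent by +/-1 step.
base = Environment(
    transition=lambda s, a: s + a,
    generate_scenario=lambda rng: rng.randint(0, 10),
)

# R-transformation: every action now moves the agent twice as far.
doubled = r_transform(base, lambda s, a: s + 2 * a)

# T-transformation: episodes now start from a much wider range of positions.
widened = t_transform(base, lambda rng: rng.randint(-10, 20))

rng = random.Random(0)
start = base.generate_scenario(rng)
print(base.transition(start, +1), doubled.transition(start, +1))
print(widened.generate_scenario(rng))
```
Keeping the two components separate is what would let a test ask which of them a given transformation actually touched; the paper's category tests are defined over T-SAL descriptions rather than over ad hoc code like this.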
Related papers
- Towards Generalizable Reinforcement Learning via Causality-Guided Self-Adaptive Representations [22.6449779859417]
General intelligence requires quick adaptation across tasks.
In this paper, we explore a wider range of scenarios where not only the distribution but also the environment spaces may change.
We introduce a causality-guided self-adaptive representation-based approach, called CSR, that equips the agent to generalize effectively.
arXiv Detail & Related papers (2024-07-30T08:48:49Z) - HAZARD Challenge: Embodied Decision Making in Dynamically Changing
Environments [93.94020724735199]
HAZARD consists of three unexpected disaster scenarios: fire, flood, and wind.
This benchmark enables us to evaluate autonomous agents' decision-making capabilities across various pipelines.
arXiv Detail & Related papers (2024-01-23T18:59:43Z) - Enhancing Evolving Domain Generalization through Dynamic Latent
Representations [47.3810472814143]
We propose a new framework called Mutual Information-Based Sequential Autoencoders (MISTS), which learns both dynamic and invariant features.
Our experimental results on both synthetic and real-world datasets demonstrate that MISTS succeeds in capturing both evolving and invariant information.
arXiv Detail & Related papers (2024-01-16T16:16:42Z) - Curriculum Reinforcement Learning via Morphology-Environment
Co-Evolution [46.27211830466317]
We optimize an RL agent and its morphology through morphology-environment co-evolution.
Instead of hand-crafting the curriculum, we train two policies to automatically change the morphology and the environment.
arXiv Detail & Related papers (2023-09-21T22:58:59Z) - An Adaptive Deep RL Method for Non-Stationary Environments with
Piecewise Stable Context [109.49663559151377]
Existing works on adaptation to unknown environment contexts either assume the contexts are the same for the whole episode or assume the context variables are Markovian.
In this paper, we propose a Segmented Context Belief Augmented Deep (SeCBAD) RL method.
Our method can jointly infer the belief distribution over latent context with the posterior over segment length and perform more accurate belief context inference with observed data within the current segment.
arXiv Detail & Related papers (2022-12-24T13:43:39Z) - REPTILE: A Proactive Real-Time Deep Reinforcement Learning Self-adaptive
Framework [0.6335848702857039]
A general framework is proposed to support the development of software systems that can adapt their behaviour to changes in the operating environment.
The proposed approach, named REPTILE, works in a completely proactive manner and relies on Deep Reinforcement Learning-based agents to react to events.
In our framework, two types of novelties are taken into account: those related to the context/environment and those related to the physical architecture itself.
The framework, predicting those novelties before their occurrence, extracts time-changing models of the environment and uses a suitable Markov Decision Process to deal with the real-time setting.
arXiv Detail & Related papers (2022-03-28T12:38:08Z) - AdaRL: What, Where, and How to Adapt in Transfer Reinforcement Learning [18.269412736181852]
We propose a principled framework for adaptive RL, called AdaRL, that adapts reliably to changes across domains.
We show that AdaRL can adapt the policy with only a few samples without further policy optimization in the target domain.
We illustrate the efficacy of AdaRL through a series of experiments that allow for changes in different components of Cartpole and Atari games.
arXiv Detail & Related papers (2021-07-06T16:56:25Z) - Emergent Complexity and Zero-shot Transfer via Unsupervised Environment
Design [121.73425076217471]
We propose Unsupervised Environment Design (UED), where developers provide environments with unknown parameters, and these parameters are used to automatically produce a distribution over valid, solvable environments.
We call our technique Protagonist Antagonist Induced Regret Environment Design (PAIRED).
Our experiments demonstrate that PAIRED produces a natural curriculum of increasingly complex environments, and PAIRED agents achieve higher zero-shot transfer performance when tested in highly novel environments (a minimal sketch of PAIRED's regret signal appears after this list).
arXiv Detail & Related papers (2020-12-03T17:37:01Z) - Variational Transformers for Diverse Response Generation [71.53159402053392]
Variational Transformer (VT) is a variational self-attentive feed-forward sequence model.
VT combines the parallelizability and global receptive field computation of the Transformer with the variational nature of the CVAE.
We explore two types of VT: 1) modeling the discourse-level diversity with a global latent variable; and 2) augmenting the Transformer decoder with a sequence of fine-grained latent variables.
arXiv Detail & Related papers (2020-03-28T07:48:02Z)
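As referenced in the PAIRED entry above, the following is a minimal, hypothetical sketch of the regret signal that drives Protagonist Antagonist Induced Regret Environment Design: the environment designer is scored by the gap between an antagonist's return and the protagonist's return. The function name, the example environments, and the numbers are illustrative assumptions, not the paper's implementation.
```python
# Minimal sketch of the regret signal at the heart of PAIRED. The returns below
# are made-up placeholders; in the real method an environment-designing
# adversary and two agents are all trained jointly with RL.

def paired_regret(protagonist_return: float, antagonist_return: float) -> float:
    """Approximate regret of the protagonist on a generated environment:
    the environment designer is rewarded for maximizing this quantity,
    while the protagonist learns to drive it down."""
    return antagonist_return - protagonist_return

# Hypothetical per-environment returns: (protagonist, antagonist).
episode_returns = {
    "easy maze":       (9.0, 9.5),   # both agents solve it -> low regret
    "hard maze":       (2.0, 8.0),   # solvable but challenging -> high regret
    "impossible maze": (0.0, 0.0),   # unsolvable -> zero regret, so not favored
}

# The designer favors environments like the one with the largest regret,
# which yields a curriculum of solvable-but-challenging environments.
hardest_useful = max(episode_returns, key=lambda k: paired_regret(*episode_returns[k]))
print(hardest_useful)  # -> "hard maze"
```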