Adapting to Unseen Environments through Explicit Representation of Context
- URL: http://arxiv.org/abs/2002.05640v2
- Date: Mon, 29 Jun 2020 22:04:16 GMT
- Title: Adapting to Unseen Environments through Explicit Representation of Context
- Authors: Cem C. Tutum and Risto Miikkulainen
- Abstract summary: In order to deploy autonomous agents to domains such as autonomous driving, infrastructure management, health care, and finance, they must be able to adapt safely to unseen situations.
This paper proposes a principled approach where a context module is coevolved with a skill module.
The Context+Skill approach leads to significantly more robust behavior in environments with previously unseen effects.
- Score: 16.8615211682877
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In order to deploy autonomous agents to domains such as autonomous driving,
infrastructure management, health care, and finance, they must be able to adapt
safely to unseen situations. The current approach to constructing such agents
is to include as much variation in training as possible, and then
generalize within the possible variations. This paper proposes a principled
approach where a context module is coevolved with a skill module. The context
module recognizes the variation and modulates the skill module so that the
entire system performs well in unseen situations. The approach is evaluated in
a challenging version of the Flappy Bird game where the effects of the actions
vary over time. The Context+Skill approach leads to significantly more robust
behavior in environments with previously unseen effects. Such a principled
generalization ability is essential in deploying autonomous agents in real
world tasks, and can serve as a foundation for continual learning as well.
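To make the Context+Skill idea concrete, below is a minimal sketch in which a skill module maps the current observation to a raw action, a context module reads a rolling observation/action history, and the resulting context embedding modulates the skill output. The layer sizes, tanh activations, the gain-plus-bias modulation, and all class and function names are illustrative assumptions, not the authors' implementation; in the paper the two modules are coevolved with neuroevolution rather than trained by gradient descent.

```python
# Minimal sketch of a Context+Skill controller (illustrative only).
# Assumptions not taken from the paper: layer sizes, tanh activations, and
# the exact way the context embedding modulates the skill output.
import numpy as np

rng = np.random.default_rng(0)

def mlp_params(sizes):
    """Random weights for a small fully connected network."""
    return [(rng.normal(0, 0.1, (m, n)), np.zeros(n)) for m, n in zip(sizes, sizes[1:])]

def mlp_forward(params, x):
    for W, b in params:
        x = np.tanh(x @ W + b)
    return x

class ContextSkillAgent:
    def __init__(self, obs_dim, act_dim, history_len=10, ctx_dim=8):
        self.act_dim = act_dim
        self.history = np.zeros(history_len * (obs_dim + act_dim))
        # Skill module: maps the current observation to a raw action.
        self.skill = mlp_params([obs_dim, 32, act_dim])
        # Context module: maps the recent (obs, action) history to a context embedding.
        self.context = mlp_params([history_len * (obs_dim + act_dim), 32, ctx_dim])
        # Modulation head: turns the context embedding into a per-action gain and bias.
        self.modulate = mlp_params([ctx_dim, 2 * act_dim])

    def act(self, obs):
        skill_out = mlp_forward(self.skill, obs)
        ctx = mlp_forward(self.context, self.history)
        gain_bias = mlp_forward(self.modulate, ctx)
        gain, bias = gain_bias[:self.act_dim], gain_bias[self.act_dim:]
        # Context modulates the skill output (assumed multiplicative + additive).
        action = (1.0 + gain) * skill_out + bias
        # Update the rolling history with the latest observation/action pair.
        step = np.concatenate([obs, action])
        self.history = np.concatenate([self.history[len(step):], step])
        return action

# Usage: the flat parameter vector of all three modules would be the genome
# that a neuroevolution method, as in the paper, coevolves.
agent = ContextSkillAgent(obs_dim=4, act_dim=2)
print(agent.act(rng.normal(size=4)))
```

The design point is that the skill module alone would suffice in a fixed environment; the context module only earns its keep when the effects of actions drift, which is when the learned gain and bias let the same skill be reused under new dynamics.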
Related papers
- MOSS: Enabling Code-Driven Evolution and Context Management for AI Agents [7.4159044558995335]
We introduce MOSS (LLM-oriented Operating System Simulation), a novel framework that integrates code generation with a dynamic context management system.
At its core, the framework employs an Inversion of Control container in conjunction with decorators to enforce the least knowledge principle.
We show how this framework can enhance the efficiency and capabilities of agent development and highlight its advantages in moving towards Turing-complete agents.
arXiv Detail & Related papers (2024-09-24T14:30:21Z)
- Towards Generalizable Reinforcement Learning via Causality-Guided Self-Adaptive Representations [22.6449779859417]
General intelligence requires quick adaptation across tasks.
In this paper, we explore a wider range of scenarios where not only the distribution but also the environment spaces may change.
We introduce a causality-guided self-adaptive representation-based approach, called CSR, that equips the agent to generalize effectively.
arXiv Detail & Related papers (2024-07-30T08:48:49Z)
- HAZARD Challenge: Embodied Decision Making in Dynamically Changing Environments [93.94020724735199]
HAZARD consists of three unexpected disaster scenarios: fire, flood, and wind.
This benchmark enables us to evaluate autonomous agents' decision-making capabilities across various pipelines.
arXiv Detail & Related papers (2024-01-23T18:59:43Z)
- Pangu-Agent: A Fine-Tunable Generalist Agent with Structured Reasoning [50.47568731994238]
A key method for creating Artificial Intelligence (AI) agents is Reinforcement Learning (RL).
This paper presents a general framework for integrating and learning structured reasoning in AI agents' policies.
arXiv Detail & Related papers (2023-12-22T17:57:57Z)
- Drive Anywhere: Generalizable End-to-end Autonomous Driving with Multi-modal Foundation Models [114.69732301904419]
We present an end-to-end, open-set (any environment/scene) autonomous driving approach that provides driving decisions from representations queryable by image and text.
Our approach demonstrates unparalleled results in diverse tests while achieving significantly greater robustness in out-of-distribution situations.
arXiv Detail & Related papers (2023-10-26T17:56:35Z)
- Dynamics Generalisation in Reinforcement Learning via Adaptive Context-Aware Policies [13.410372954752496]
We present an investigation into how context should be incorporated into behaviour learning to improve generalisation.
We introduce a neural network architecture, the Decision Adapter, which generates the weights of an adapter module and conditions the behaviour of an agent on the context information.
We show that the Decision Adapter is a useful generalisation of a previously proposed architecture and empirically demonstrate that it results in superior generalisation performance. (A hedged sketch of this context-conditioned adapter idea appears after this list.)
arXiv Detail & Related papers (2023-10-25T14:50:05Z)
- One Solution is Not All You Need: Few-Shot Extrapolation via Structured MaxEnt RL [142.36621929739707]
We show that learning diverse behaviors for accomplishing a task can lead to behavior that generalizes to varying environments.
By identifying multiple solutions for the task in a single environment during training, our approach can generalize to new situations.
arXiv Detail & Related papers (2020-10-27T17:41:57Z)
- Self-Supervised Policy Adaptation during Deployment [98.25486842109936]
Self-supervision allows the policy to continue training after deployment without using any rewards.
Empirical evaluations are performed on diverse simulation environments from the DeepMind Control suite and ViZDoom.
Our method improves generalization in 31 out of 36 environments across various tasks and outperforms domain randomization on a majority of environments.
arXiv Detail & Related papers (2020-07-08T17:56:27Z)
- Off-Dynamics Reinforcement Learning: Training for Transfer with Domain Classifiers [138.68213707587822]
We propose a simple, practical, and intuitive approach for domain adaptation in reinforcement learning.
We show that we can achieve this goal by compensating for the difference in dynamics by modifying the reward function.
Our approach is applicable to domains with continuous states and actions and does not require learning an explicit model of the dynamics.
arXiv Detail & Related papers (2020-06-24T17:47:37Z)
- Generalization of Agent Behavior through Explicit Representation of Context [14.272883554753323]
In order to deploy autonomous agents in digital interactive environments, they must be able to act robustly in unseen situations.
This paper proposes a principled approach where a context module is coevolved with a skill module in the game.
The approach is evaluated in the Flappy Bird and LunarLander video games, as well as in the CARLA autonomous driving simulation.
arXiv Detail & Related papers (2020-06-18T04:35:22Z)
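As referenced in the Dynamics Generalisation entry above, the following is a hedged sketch of a context-conditioned adapter in the hypernetwork style: a small network maps the context vector to the weights of an adapter layer applied to the agent's hidden features. The single-layer adapter, the dimensions, and all names are illustrative assumptions, not the published Decision Adapter architecture.

```python
# Illustrative sketch of a context-conditioned adapter (hypernetwork style).
# Assumptions: one linear adapter layer whose weights are generated from the
# context vector; the published Decision Adapter may differ in structure.
import numpy as np

rng = np.random.default_rng(1)

def linear(in_dim, out_dim):
    return rng.normal(0, 0.1, (in_dim, out_dim)), np.zeros(out_dim)

class ContextConditionedPolicy:
    def __init__(self, obs_dim, ctx_dim, hidden_dim, act_dim):
        self.hidden_dim = hidden_dim
        # Base policy layers (context-independent).
        self.enc_W, self.enc_b = linear(obs_dim, hidden_dim)
        self.out_W, self.out_b = linear(hidden_dim, act_dim)
        # Hypernetwork: context -> flattened weights and bias of the adapter layer.
        n_adapter = hidden_dim * hidden_dim + hidden_dim
        self.hyper_W, self.hyper_b = linear(ctx_dim, n_adapter)

    def act(self, obs, ctx):
        h = np.tanh(obs @ self.enc_W + self.enc_b)
        # Generate adapter weights from the context vector.
        flat = ctx @ self.hyper_W + self.hyper_b
        A = flat[:self.hidden_dim * self.hidden_dim].reshape(self.hidden_dim, self.hidden_dim)
        b = flat[self.hidden_dim * self.hidden_dim:]
        # Apply the context-generated adapter to the hidden features.
        h = np.tanh(h @ A + b)
        return h @ self.out_W + self.out_b

# Usage: the context vector could be the true environment parameters or an
# estimate inferred from recent transitions.
policy = ContextConditionedPolicy(obs_dim=4, ctx_dim=3, hidden_dim=16, act_dim=2)
print(policy.act(rng.normal(size=4), rng.normal(size=3)))
```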
This list is automatically generated from the titles and abstracts of the papers on this site.