Ecological Reinforcement Learning
- URL: http://arxiv.org/abs/2006.12478v1
- Date: Mon, 22 Jun 2020 17:55:03 GMT
- Title: Ecological Reinforcement Learning
- Authors: John D. Co-Reyes, Suvansh Sanjeev, Glen Berseth, Abhishek Gupta,
Sergey Levine
- Abstract summary: We study the kinds of environment properties that can make learning easier in non-episodic, sparse-reward settings.
Understanding how properties of the environment impact the performance of reinforcement learning agents can help us structure our tasks in ways that make learning tractable.
- Score: 76.9893572776141
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Much of the current work on reinforcement learning studies episodic settings,
where the agent is reset between trials to an initial state distribution, often
with well-shaped reward functions. Non-episodic settings, where the agent must
learn through continuous interaction with the world without resets, and where
the agent receives only delayed and sparse reward signals, are substantially
more difficult, but arguably more realistic, since real-world environments
do not present the learner with a convenient "reset mechanism" or easy reward
shaping. In this paper, instead of studying algorithmic improvements that can
address such non-episodic and sparse-reward settings, we study the
kinds of environment properties that can make learning under such conditions
easier. Understanding how properties of the environment impact the performance
of reinforcement learning agents can help us to structure our tasks in ways
that make learning tractable. We first discuss what we term "environment
shaping" -- modifications to the environment that provide an alternative to
reward shaping, and may be easier to implement. We then discuss an even simpler
property that we refer to as "dynamism," which describes the degree to which
the environment changes independent of the agent's actions and can be measured
by environment transition entropy. Surprisingly, we find that even this
property can substantially alleviate the challenges associated with
non-episodic RL in sparse reward settings. We provide an empirical evaluation
on a set of new tasks focused on non-episodic learning with sparse rewards.
Through this study, we hope to shift the focus of the community towards
analyzing how properties of the environment can affect learning and the
ultimate type of behavior that is learned via RL.
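The abstract's notion of "environment shaping" can be made concrete with a small sketch. The wrapper below is illustrative only, not code from the paper: it assumes a gym-style environment that exposes a hypothetical `add_resource_near_agent()` hook, and it adjusts the world itself (rather than the reward signal) so that the sparse reward is encountered more often during reset-free interaction.

```python
import random


class EnvironmentShapingWrapper:
    """Illustrative environment-shaping wrapper (not the paper's code).

    Instead of adding shaping terms to the reward, it modifies the
    environment directly, e.g. by occasionally spawning the resources
    the agent needs nearby, so the sparse reward is reached more often
    during non-episodic interaction.
    """

    def __init__(self, env, spawn_prob=0.1):
        self.env = env
        self.spawn_prob = spawn_prob  # how aggressively the environment is shaped

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        # Environment shaping: perturb the world state rather than the
        # reward. `add_resource_near_agent` is a hypothetical hook assumed
        # to be exposed by the underlying environment.
        if random.random() < self.spawn_prob:
            self.env.add_resource_near_agent()
        return obs, reward, done, info

    def reset(self):
        return self.env.reset()
```

The design point is that the shaping knob lives in the environment's dynamics or initial conditions, which can be easier to adjust in a physical setup than a dense, hand-tuned reward function.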
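The abstract measures "dynamism" by environment transition entropy. A minimal estimator for discrete states and actions is sketched below; the function name and the toy data are assumptions made for illustration, not the paper's measurement protocol.

```python
import math
from collections import Counter, defaultdict


def empirical_transition_entropy(transitions):
    """Estimate the average entropy of s' given (s, a) from (s, a, s') tuples.

    Higher values indicate a more "dynamic" environment: one whose state
    changes more unpredictably regardless of the agent's actions. Discrete
    states and actions are assumed; this is a toy estimator only.
    """
    counts = defaultdict(Counter)
    for s, a, s_next in transitions:
        counts[(s, a)][s_next] += 1

    total = sum(sum(c.values()) for c in counts.values())
    entropy = 0.0
    for next_counts in counts.values():
        n_sa = sum(next_counts.values())
        weight = n_sa / total  # empirical weight of this (s, a) pair
        h_sa = -sum((n / n_sa) * math.log(n / n_sa) for n in next_counts.values())
        entropy += weight * h_sa
    return entropy


# Toy comparison: a static world where nothing changes vs. a dynamic one.
static = [(0, 0, 0), (0, 0, 0), (1, 0, 1), (1, 0, 1)]
dynamic = [(0, 0, 0), (0, 0, 1), (1, 0, 0), (1, 0, 1)]
print(empirical_transition_entropy(static))   # 0.0
print(empirical_transition_entropy(dynamic))  # ~0.69 (ln 2)
```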
Related papers
- EvIL: Evolution Strategies for Generalisable Imitation Learning [33.745657379141676]
In imitation learning (IL), the environment in which expert demonstrations are collected and the environment in which we want to deploy our learned policy are often not exactly the same.
Compared to policy-centric approaches to IL, such as behavioural cloning, reward-centric approaches like inverse reinforcement learning (IRL) often better replicate expert behaviour in new environments.
We find that modern deep IL algorithms frequently recover rewards which induce policies far weaker than the expert, even in the same environment the demonstrations were collected in.
We propose a novel evolution-strategies based method EvIL to optimise for a reward-shaping term that speeds up re-training in the target environment.
arXiv Detail & Related papers (2024-06-15T22:46:39Z) - Continuously evolving rewards in an open-ended environment [0.0]
RULE: Reward Updating through Learning and Expectation is tested in a simplified ecosystem-like setting.
The population of entities successfully demonstrates the abandonment of an initially rewarded but ultimately detrimental behaviour.
These adjustments happen through endogenous modification of the entities' underlying reward function, during continuous learning, without external intervention.
arXiv Detail & Related papers (2024-05-02T13:07:56Z) - HAZARD Challenge: Embodied Decision Making in Dynamically Changing
Environments [93.94020724735199]
HAZARD consists of three unexpected disaster scenarios, including fire, flood, and wind.
This benchmark enables us to evaluate autonomous agents' decision-making capabilities across various pipelines.
arXiv Detail & Related papers (2024-01-23T18:59:43Z) - Environment Design for Inverse Reinforcement Learning [3.085995273374333]
Current inverse reinforcement learning methods that focus on learning from a single environment can fail to handle slight changes in the environment dynamics.
In our framework, the learner repeatedly interacts with the expert, with the former selecting environments to identify the reward function.
This results in improvements in both sample-efficiency and robustness, as we show experimentally, for both exact and approximate inference.
arXiv Detail & Related papers (2022-10-26T18:31:17Z) - Information is Power: Intrinsic Control via Information Capture [110.3143711650806]
We argue that a compact and general learning objective is to minimize the entropy of the agent's state visitation estimated using a latent state-space model.
This objective induces an agent to both gather information about its environment, corresponding to reducing uncertainty, and to gain control over its environment, corresponding to reducing the unpredictability of future world states.
arXiv Detail & Related papers (2021-12-07T18:50:42Z) - Evolutionary Reinforcement Learning Dynamics with Irreducible
Environmental Uncertainty [0.0]
We derive and present evolutionary reinforcement learning dynamics in which the agents are irreducibly uncertain about the current state of the environment.
We find that irreducible environmental uncertainty can lead to better learning outcomes faster, stabilize the learning process and overcome social dilemmas.
However, we do also find that partial observability may cause worse learning outcomes, for example, in the form of a catastrophic limit cycle.
arXiv Detail & Related papers (2021-09-15T12:50:58Z) - Emergent Complexity and Zero-shot Transfer via Unsupervised Environment
Design [121.73425076217471]
We propose Unsupervised Environment Design (UED), where developers provide environments with unknown parameters, and these parameters are used to automatically produce a distribution over valid, solvable environments.
We call our technique Protagonist Antagonist Induced Regret Environment Design (PAIRED).
Our experiments demonstrate that PAIRED produces a natural curriculum of increasingly complex environments, and PAIRED agents achieve higher zero-shot transfer performance when tested in highly novel environments.
arXiv Detail & Related papers (2020-12-03T17:37:01Z) - Environment Shaping in Reinforcement Learning using State Abstraction [63.444831173608605]
We propose a novel framework of environment shaping using state abstraction.
Our key idea is to compress the environment's large state space with noisy signals to an abstracted space.
We show that the agent's policy learnt in the shaped environment preserves near-optimal behavior in the original environment.
arXiv Detail & Related papers (2020-06-23T17:00:22Z) - Deep Reinforcement Learning amidst Lifelong Non-Stationarity [67.24635298387624]
We show that an off-policy RL algorithm can reason about and tackle lifelong non-stationarity.
Our method leverages latent variable models to learn a representation of the environment from current and past experiences.
We also introduce several simulation environments that exhibit lifelong non-stationarity, and empirically find that our approach substantially outperforms approaches that do not reason about environment shift.
arXiv Detail & Related papers (2020-06-18T17:34:50Z)