Relative Variational Intrinsic Control
- URL: http://arxiv.org/abs/2012.07827v1
- Date: Mon, 14 Dec 2020 18:59:23 GMT
- Title: Relative Variational Intrinsic Control
- Authors: Kate Baumli, David Warde-Farley, Steven Hansen, Volodymyr Mnih
- Abstract summary: Relative Variational Intrinsic Control (RVIC) incentivizes learning skills that are distinguishable in how they change the agent's relationship to its environment.
We show how RVIC skills are more useful than skills discovered by existing methods when used in hierarchical reinforcement learning.
- Score: 11.328970848714919
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the absence of external rewards, agents can still learn useful behaviors
by identifying and mastering a set of diverse skills within their environment.
Existing skill learning methods use mutual information objectives to
incentivize each skill to be diverse and distinguishable from the rest.
However, if care is not taken to constrain the ways in which the skills are
diverse, trivially diverse skill sets can arise. To ensure useful skill
diversity, we propose a novel skill learning objective, Relative Variational
Intrinsic Control (RVIC), which incentivizes learning skills that are
distinguishable in how they change the agent's relationship to its environment.
The resulting set of skills tiles the space of affordances available to the
agent. We qualitatively analyze skill behaviors on multiple environments and
show how RVIC skills are more useful than skills discovered by existing methods
when used in hierarchical reinforcement learning.
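The variational recipe the abstract describes can be made concrete with a small sketch. The code below is illustrative only (not the paper's implementation): a discriminator q(z | ·) tries to infer which skill z produced a trajectory, and the intrinsic reward is its log-likelihood minus the log skill prior, a standard variational lower bound on the mutual information between skills and outcomes. RVIC's distinguishing idea, per the abstract, is that skills are discriminated by how they *change* the agent's relationship to its environment, modeled here (as an assumption) by feeding the discriminator the state change sT - s0 rather than the raw state; the linear discriminator W is a toy stand-in.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def intrinsic_reward(W, s0, sT, z, num_skills):
    """Variational MI lower-bound reward: log q(z | sT - s0) - log p(z)."""
    logits = W @ (sT - s0)            # toy linear discriminator over the change
    log_q = np.log(softmax(logits)[z])
    log_p = np.log(1.0 / num_skills)  # uniform skill prior
    return log_q - log_p

rng = np.random.default_rng(0)
num_skills, state_dim = 4, 3
W = rng.normal(size=(num_skills, state_dim))
s0 = rng.normal(size=state_dim)   # state before executing the skill
sT = rng.normal(size=state_dim)   # state after executing the skill
r = intrinsic_reward(W, s0, sT, z=2, num_skills=num_skills)
```

Because log q(z | ·) ≤ 0, this reward is bounded above by log(num_skills), which is reached only when the discriminator identifies the skill perfectly.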
Related papers
- SkiLD: Unsupervised Skill Discovery Guided by Factor Interactions [48.003320766433966]
This work introduces Skill Discovery from Local Dependencies (SkiLD).
SkiLD develops a novel skill learning objective that explicitly encourages the mastery of skills that induce different interactions within an environment.
We evaluate SkiLD in several domains with challenging, long-horizon sparse reward tasks, including a realistic simulated household robot domain.
arXiv Detail & Related papers (2024-10-24T04:01:59Z)
- Disentangled Unsupervised Skill Discovery for Efficient Hierarchical Reinforcement Learning [39.991887534269445]
Disentangled Unsupervised Skill Discovery (DUSDi) is a method for learning disentangled skills that can be efficiently reused to solve downstream tasks.
DUSDi decomposes skills into disentangled components, where each skill component only affects one factor of the state space.
DUSDi successfully learns disentangled skills, and significantly outperforms previous skill discovery methods when it comes to applying the learned skills to solve downstream tasks.
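The decomposition DUSDi describes can be sketched in a few lines. This is an assumed structure, not DUSDi's code: a disentangled skill is a tuple of components, and each component is scored only against its own state factor by a separate discriminator, so component i has no incentive to control factor j.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def disentangled_reward(discriminators, state_factors, skill_components):
    """Sum of per-factor log q_i(z_i | s_i) terms (constant skill prior omitted)."""
    total = 0.0
    for W, s, z in zip(discriminators, state_factors, skill_components):
        total += np.log(softmax(W @ s)[z])  # each component scored on its own factor
    return total

rng = np.random.default_rng(3)
num_factors, factor_dim, skills_per_factor = 2, 4, 3
Ws = [rng.normal(size=(skills_per_factor, factor_dim)) for _ in range(num_factors)]
factors = [rng.normal(size=factor_dim) for _ in range(num_factors)]
r = disentangled_reward(Ws, factors, skill_components=[0, 2])
```

Since each term is a log-probability, the reward is nonpositive and is maximized when every per-factor discriminator recovers its component exactly.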
arXiv Detail & Related papers (2024-10-15T04:13:20Z)
- Language Guided Skill Discovery [56.84356022198222]
We introduce Language Guided Skill Discovery (LGSD) to maximize semantic diversity between skills.
LGSD takes user prompts as input and outputs a set of semantically distinctive skills.
We demonstrate that LGSD enables legged robots to visit different user-intended areas on a plane by simply changing the prompt.
arXiv Detail & Related papers (2024-06-07T04:25:38Z)
- C$\cdot$ASE: Learning Conditional Adversarial Skill Embeddings for Physics-based Characters [49.83342243500835]
We present C$\cdot$ASE, an efficient framework that learns Conditional Adversarial Skill Embeddings for physics-based characters.
C$\cdot$ASE divides the heterogeneous skill motions into distinct subsets containing homogeneous samples for training a low-level conditional model.
The skill-conditioned imitation learning naturally offers explicit control over the character's skills after training.
arXiv Detail & Related papers (2023-09-20T14:34:45Z)
- Unsupervised Discovery of Continuous Skills on a Sphere [15.856188608650228]
We propose a novel method for learning a potentially infinite number of different skills, named discovery of continuous skills on a sphere (DISCS).
In DISCS, skills are learned by maximizing mutual information between skills and states, and each skill corresponds to a continuous value on a sphere.
Because the representations of skills in DISCS are continuous, infinitely diverse skills could be learned.
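One simple way to realize DISCS's "skill as a point on a sphere" parameterization is sketched below (the paper's exact sampling scheme may differ): drawing an isotropic Gaussian vector and normalizing it yields a uniform distribution on the unit sphere, giving a continuum of skills rather than a finite discrete set.

```python
import numpy as np

def sample_skill_on_sphere(dim, rng):
    """Draw a skill as a uniformly distributed point on the unit sphere."""
    z = rng.normal(size=dim)          # isotropic Gaussian sample
    return z / np.linalg.norm(z)      # project onto the unit sphere

rng = np.random.default_rng(42)
skills = [sample_skill_on_sphere(3, rng) for _ in range(5)]
```

Every sampled skill has unit norm, so nearby points on the sphere can parameterize smoothly varying behaviors.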
arXiv Detail & Related papers (2023-05-21T06:29:41Z)
- Behavior Contrastive Learning for Unsupervised Skill Discovery [75.6190748711826]
We propose a novel unsupervised skill discovery method through contrastive learning among behaviors.
Under mild assumptions, our objective maximizes the MI between different behaviors based on the same skill.
Our method implicitly increases the state entropy to obtain better state coverage.
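An InfoNCE-style sketch of the contrastive idea follows (assumed, not the paper's code): two behaviors generated by the same skill form a positive pair, behaviors from other skills serve as negatives, and minimizing this loss lower-bounds the mutual information between behaviors that share a skill. The embeddings here are random toy vectors standing in for learned behavior representations.

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss: pull the same-skill pair together, push others apart."""
    def sim(a, b):  # cosine similarity between behavior embeddings
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([sim(anchor, positive)] +
                      [sim(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()  # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

rng = np.random.default_rng(1)
anchor = rng.normal(size=8)
positive = anchor + 0.05 * rng.normal(size=8)   # behavior from the same skill
negatives = [rng.normal(size=8) for _ in range(4)]
loss = info_nce(anchor, positive, negatives)
```

The loss is always strictly positive because the denominator includes the negatives; it shrinks toward zero as the positive pair becomes much more similar than any negative.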
arXiv Detail & Related papers (2023-05-08T06:02:11Z)
- Controlled Diversity with Preference: Towards Learning a Diverse Set of Desired Skills [15.187171070594935]
We propose Controlled Diversity with Preference (CDP), a collaborative human-guided mechanism for an agent to learn a set of skills that is diverse as well as desirable.
The key principle is to restrict the discovery of skills to those regions that are deemed to be desirable as per a preference model trained using human preference labels on trajectory pairs.
We evaluate our approach on 2D navigation and MuJoCo environments and demonstrate the ability to discover diverse, yet desirable skills.
arXiv Detail & Related papers (2023-03-07T03:37:47Z)
- Rethinking Learning Dynamics in RL using Adversarial Networks [79.56118674435844]
We present a learning mechanism for reinforcement learning of closely related skills parameterized via a skill embedding space.
The main contribution of our work is to formulate an adversarial training regime for reinforcement learning with the help of an entropy-regularized policy gradient formulation.
arXiv Detail & Related papers (2022-01-27T19:51:09Z)
- Discovering Generalizable Skills via Automated Generation of Diverse Tasks [82.16392072211337]
We propose a method to discover generalizable skills via automated generation of a diverse set of tasks.
As opposed to prior work on unsupervised discovery of skills, our method pairs each skill with a unique task produced by a trainable task generator.
A task discriminator defined on the robot behaviors in the generated tasks is jointly trained to estimate the evidence lower bound of the diversity objective.
The learned skills can then be composed in a hierarchical reinforcement learning algorithm to solve unseen target tasks.
arXiv Detail & Related papers (2021-06-26T03:41:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.