Braxlines: Fast and Interactive Toolkit for RL-driven Behavior
Engineering beyond Reward Maximization
- URL: http://arxiv.org/abs/2110.04686v1
- Date: Sun, 10 Oct 2021 02:41:01 GMT
- Title: Braxlines: Fast and Interactive Toolkit for RL-driven Behavior
Engineering beyond Reward Maximization
- Authors: Shixiang Shane Gu, Manfred Diaz, Daniel C. Freeman, Hiroki Furuta,
Seyed Kamyar Seyed Ghasemipour, Anton Raichuk, Byron David, Erik Frey, Erwin
Coumans, Olivier Bachem
- Abstract summary: The goal of continuous control is to synthesize desired behaviors, which reinforcement learning (RL)-driven approaches typically pursue through task reward engineering and off-the-shelf RL algorithms.
In this paper, we introduce Braxlines, a toolkit for fast and interactive RL-driven behavior generation beyond simple reward maximization.
Our implementations build on the hardware-accelerated Brax simulator in Jax with minimal modifications, enabling behavior synthesis within minutes of training.
- Score: 15.215372246434413
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The goal of continuous control is to synthesize desired behaviors. In
reinforcement learning (RL)-driven approaches, this is often accomplished
through careful task reward engineering for efficient exploration and running
an off-the-shelf RL algorithm. While reward maximization is at the core of RL,
reward engineering is not the only -- and sometimes not the easiest -- way of
specifying complex behaviors. In this paper, we introduce Braxlines, a toolkit
for fast and interactive RL-driven behavior generation beyond simple reward
maximization that includes Composer, a programmatic API for generating
continuous control environments, and a set of stable and well-tested baselines
for two families of algorithms -- mutual information maximization (MiMax) and
divergence minimization (DMin) -- supporting unsupervised skill learning and
distribution sketching as other modes of behavior specification. In addition,
we discuss how to standardize metrics for evaluating these algorithms, which
can no longer rely on simple reward maximization. Our implementations build on
a hardware-accelerated Brax simulator in Jax with minimal modifications,
enabling behavior synthesis within minutes of training. We hope Braxlines can
serve as an interactive toolkit for rapid creation and testing of environments
and behaviors, empowering an explosion of future benchmark designs and new modes
of RL-driven behavior generation, along with their algorithmic research.
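To make the two baseline families concrete, here is a minimal, self-contained sketch of the reward signals they optimize. It is illustrative only and does not reproduce the Braxlines API: the classifier outputs (skill_logits, disc_logit) are assumed to come from separately trained discriminators, MiMax is shown in its DIAYN-style form (reward = log q(z|s) - log p(z)), and DMin is shown as a density-ratio reward toward a user-sketched target state distribution.

    # Illustrative sketch only -- not the Braxlines API.
    import numpy as np

    def mimax_reward(skill_logits, skill_id, num_skills):
        """Mutual-information (DIAYN-style) skill reward: log q(z|s) - log p(z).

        skill_logits: outputs of a skill discriminator q(z|s), shape [num_skills]
        skill_id:     index of the skill the policy was conditioned on
        """
        log_q = skill_logits - np.log(np.sum(np.exp(skill_logits)))  # log-softmax
        log_p = -np.log(num_skills)                                  # uniform skill prior
        return log_q[skill_id] - log_p

    def dmin_reward(disc_logit):
        """Divergence-minimization reward from a state classifier.

        disc_logit is the logit of a classifier trained to separate states drawn
        from the target (sketched) distribution (label 1) from policy-visited
        states (label 0); at optimality it estimates log p_target(s) - log p_policy(s).
        """
        return disc_logit

    # Tiny usage example with made-up numbers.
    print(mimax_reward(np.array([2.0, 0.1, -1.0]), skill_id=0, num_skills=3))
    print(dmin_reward(0.7))

In Braxlines, objectives of this shape are optimized with the standard Brax training loops on hardware-accelerated environments; the sketch above only conveys the form of the rewards, not the toolkit's actual interfaces.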
Related papers
- Scaling Offline RL via Efficient and Expressive Shortcut Models [13.050231036248338]
Offline reinforcement learning (RL) with expressive generative policies remains challenging to scale due to the iterative nature of their noise-sampling processes.
We introduce Scalable Offline Reinforcement Learning (SORL), a new offline RL algorithm that leverages shortcut models to scale both training and inference.
We demonstrate that SORL achieves strong performance across a range of offline RL tasks and exhibits positive scaling behavior with increased test-time compute.
arXiv Detail & Related papers (2025-05-28T20:59:22Z)
- To Code or not to Code? Adaptive Tool Integration for Math Language Models via Expectation-Maximization [30.057052324461534]
We propose a novel framework that synergizes structured exploration (E-step) with off-policy optimization (M-step) to create a self-reinforcing cycle between metacognitive tool-use decisions and evolving capabilities.
Our 7B model improves by over 11% on MATH500 and 9.4% on AIME without o1-like CoT.
arXiv Detail & Related papers (2025-02-02T06:32:23Z)
- MaxInfoRL: Boosting exploration in reinforcement learning through information gain maximization [91.80034860399677]
Reinforcement learning algorithms aim to balance exploiting the current best strategy with exploring new options that could lead to higher rewards.
We introduce a framework, MaxInfoRL, for balancing intrinsic and extrinsic exploration.
We show that our approach achieves sublinear regret in the simplified setting of multi-armed bandits.
arXiv Detail & Related papers (2024-12-16T18:59:53Z)
- Adaptive Reward Design for Reinforcement Learning in Complex Robotic Tasks [2.3031174164121127]
We propose a suite of reward functions that incentivize an RL agent to make measurable progress on tasks specified by formulas.
We develop an adaptive reward shaping approach that dynamically updates these reward functions during the learning process.
Experimental results on a range of RL-based robotic tasks demonstrate that the proposed approach is compatible with various RL algorithms.
arXiv Detail & Related papers (2024-12-14T18:04:18Z)
- Continuous Control with Coarse-to-fine Reinforcement Learning [15.585706638252441]
We present a framework that trains RL agents to zoom into a continuous action space in a coarse-to-fine manner.
We introduce a concrete, value-based algorithm within the framework called Coarse-to-fine Q-Network (CQN).
CQN robustly learns to solve real-world manipulation tasks within a few minutes of online training.
arXiv Detail & Related papers (2024-07-10T16:04:08Z)
- SERL: A Software Suite for Sample-Efficient Robotic Reinforcement Learning [85.21378553454672]
We develop a library containing a sample-efficient off-policy deep RL method, together with methods for computing rewards and resetting the environment.
We find that our implementation can achieve very efficient learning, acquiring policies for PCB board assembly, cable routing, and object relocation.
These policies achieve perfect or near-perfect success rates, extreme robustness even under perturbations, and exhibit emergent recovery and correction behaviors.
arXiv Detail & Related papers (2024-01-29T10:01:10Z)
- Action-Quantized Offline Reinforcement Learning for Robotic Skill Learning [68.16998247593209]
The offline reinforcement learning (RL) paradigm provides a recipe for converting static behavior datasets into policies that can perform better than the policy that collected the data.
In this paper, we propose an adaptive scheme for action quantization.
We show that several state-of-the-art offline RL methods such as IQL, CQL, and BRAC improve in performance on benchmarks when combined with our proposed discretization scheme.
arXiv Detail & Related papers (2023-10-18T06:07:10Z)
- Maximize to Explore: One Objective Function Fusing Estimation, Planning, and Exploration [87.53543137162488]
We propose an easy-to-implement online reinforcement learning (online RL) framework called MEX.
MEX integrates estimation and planning components while automatically balancing exploration and exploitation.
It can outperform baselines by a stable margin in various MuJoCo environments with sparse rewards.
arXiv Detail & Related papers (2023-05-29T17:25:26Z)
- Reward-Machine-Guided, Self-Paced Reinforcement Learning [30.42334205249944]
We develop a self-paced reinforcement learning algorithm guided by reward machines.
The proposed algorithm achieves optimal behavior reliably even in cases in which existing baselines cannot make any meaningful progress.
It also decreases the curriculum length and reduces the variance in the curriculum generation process by up to one-fourth and four orders of magnitude, respectively.
arXiv Detail & Related papers (2023-05-25T22:13:37Z)
- Mastering the Unsupervised Reinforcement Learning Benchmark from Pixels [112.63440666617494]
Reinforcement learning algorithms can succeed but require large amounts of interaction between the agent and the environment.
We propose a new method that uses unsupervised model-based RL to pre-train the agent.
We show robust performance on the Real-World RL benchmark, hinting at resiliency to environment perturbations during adaptation.
arXiv Detail & Related papers (2022-09-24T14:22:29Z)
- Reinforcement Learning for Branch-and-Bound Optimisation using Retrospective Trajectories [72.15369769265398]
Machine learning has emerged as a promising paradigm for branching.
We propose retro branching, a simple yet effective approach to RL for branching.
We outperform the current state-of-the-art RL branching algorithm by 3-5x and come within 20% of the best IL method's performance on MILPs with 500 constraints and 1000 variables.
arXiv Detail & Related papers (2022-05-28T06:08:07Z)
- Text Generation with Efficient (Soft) Q-Learning [91.47743595382758]
Reinforcement learning (RL) offers a more flexible solution by allowing users to plug in arbitrary task metrics as reward.
We introduce a new RL formulation for text generation from the soft Q-learning perspective.
We apply the approach to a wide range of tasks, including learning from noisy/negative examples, adversarial attacks, and prompt generation.
arXiv Detail & Related papers (2021-06-14T18:48:40Z)