On Simple Reactive Neural Networks for Behaviour-Based Reinforcement
Learning
- URL: http://arxiv.org/abs/2001.07973v2
- Date: Fri, 29 May 2020 13:33:20 GMT
- Title: On Simple Reactive Neural Networks for Behaviour-Based Reinforcement
Learning
- Authors: Ameya Pore and Gerardo Aragon-Camarasa
- Abstract summary: We present a behaviour-based reinforcement learning approach, inspired by Brooks' subsumption architecture.
Our working assumption is that a pick and place robotic task can be simplified by leveraging domain knowledge of a robotics developer.
Our approach learns the pick and place task in 8,000 episodes, which represents a drastic reduction in the number of training episodes required by an end-to-end approach.
- Score: 5.482532589225552
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a behaviour-based reinforcement learning approach, inspired by
Brooks' subsumption architecture, in which simple fully connected networks are
trained as reactive behaviours. Our working assumption is that a pick and place
robotic task can be simplified by leveraging the domain knowledge of a robotics
developer to decompose and train such reactive behaviours; namely, approach,
grasp, and retract. The robot then autonomously learns how to combine them via
an Actor-Critic architecture. The Actor-Critic policy determines the activation
and inhibition of the reactive behaviours in a particular temporal sequence. We
validate our approach in a simulated robot environment where the task is
picking a block and taking it to a target position while orienting the gripper
from a top grasp. The latter represents an extra degree of freedom to which
current end-to-end reinforcement learning approaches fail to generalise. Our
findings suggest that robotic learning can be more effective if each behaviour
is learnt in isolation and the behaviours are then combined to accomplish the
task. That is, our approach learns the pick and place task in 8,000 episodes,
which represents a drastic reduction in the number of training episodes
required by an end-to-end approach and by existing state-of-the-art
algorithms.
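As a rough illustration of this scheme, the sketch below (all class names, network sizes, and the toy environment are hypothetical, not the authors' code) wires three stand-in reactive networks to a small actor-critic head that learns which behaviour to activate while inhibiting the rest:

    import numpy as np

    rng = np.random.default_rng(0)

    class ReactiveBehaviour:
        """Stand-in for a pre-trained fully connected reactive network
        (approach / grasp / retract). Maps state -> low-level action."""
        def __init__(self, state_dim, action_dim):
            self.W = rng.normal(0, 0.1, (action_dim, state_dim))

        def act(self, state):
            return np.tanh(self.W @ state)

    class BehaviourSelector:
        """Toy actor-critic head: the actor outputs a softmax over the
        reactive behaviours (activation/inhibition), the critic a value."""
        def __init__(self, state_dim, n_behaviours, lr=1e-2):
            self.Wa = np.zeros((n_behaviours, state_dim))  # actor weights
            self.wv = np.zeros(state_dim)                  # critic weights
            self.lr = lr

        def probs(self, s):
            z = self.Wa @ s
            e = np.exp(z - z.max())
            return e / e.sum()

        def update(self, s, b, reward):
            # One-step advantage update (REINFORCE with a critic baseline).
            adv = reward - self.wv @ s
            grad = -self.probs(s) * adv
            grad[b] += adv                       # grad of log pi(b|s) * adv
            self.Wa += self.lr * np.outer(grad, s)
            self.wv += self.lr * adv * s

    behaviours = [ReactiveBehaviour(state_dim=6, action_dim=3) for _ in range(3)]
    selector = BehaviourSelector(state_dim=6, n_behaviours=3)

    for episode in range(10):                    # 8,000 in the paper's runs
        s = rng.normal(size=6)                   # placeholder simulator state
        for t in range(20):
            b = rng.choice(3, p=selector.probs(s))  # activate one behaviour
            action = behaviours[b].act(s)           # others stay inhibited
            s2 = s + 0.1 * rng.normal(size=6)       # fake environment step
            selector.update(s, b, reward=-np.linalg.norm(s2))
            s = s2

Freezing the reactive behaviours means the selector only searches over three discrete activations per step rather than the full continuous action space, which is plausibly where the reported reduction in training episodes comes from.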
Related papers
- Bidirectional Progressive Neural Networks with Episodic Return Progress
for Emergent Task Sequencing and Robotic Skill Transfer [1.7205106391379026]
We introduce a novel multi-task reinforcement learning framework named Episodic Return Progress with Bidirectional Progressive Neural Networks (ERP-BPNN).
The proposed ERP-BPNN model learns in a human-like interleaved manner via autonomous task switching based on a novel intrinsic motivation signal.
We show that ERP-BPNN achieves faster cumulative convergence and improves performance in all metrics considered among morphologically different robots compared to the baselines.
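A plausible toy rendering of that switching signal (our reading, not the paper's exact definition) tracks recent episodic returns per task and always resumes the task whose returns are improving fastest:

    from collections import deque
    import random

    class ReturnProgressSwitcher:
        """Toy intrinsic-motivation signal: per-task 'episodic return
        progress', i.e. the improvement between the older and newer half of
        a sliding window of episode returns."""
        def __init__(self, tasks, window=10):
            self.returns = {t: deque(maxlen=window) for t in tasks}

        def record(self, task, episode_return):
            self.returns[task].append(episode_return)

        def progress(self, task):
            r = list(self.returns[task])
            if len(r) < 2:
                return float("inf")          # unexplored tasks get priority
            half = len(r) // 2
            return sum(r[half:]) / (len(r) - half) - sum(r[:half]) / half

        def next_task(self):
            return max(self.returns, key=self.progress)

    switcher = ReturnProgressSwitcher(["reach", "push", "pick"])
    for episode in range(30):
        task = switcher.next_task()              # interleaved task selection
        ret = random.random() + 0.01 * episode   # placeholder episode return
        switcher.record(task, ret)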
arXiv Detail & Related papers (2024-03-06T19:17:49Z)
- Robot Fine-Tuning Made Easy: Pre-Training Rewards and Policies for
Autonomous Real-World Reinforcement Learning [58.3994826169858]
We introduce RoboFuME, a reset-free fine-tuning system for robotic reinforcement learning.
Our insights are to utilize offline reinforcement learning techniques to ensure efficient online fine-tuning of a pre-trained policy.
Our method can incorporate data from an existing robot dataset and improve on a target task within as little as 3 hours of autonomous real-world experience.
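The summarised recipe might look like the skeleton below (hypothetical names; simple behaviour cloning stands in for whatever offline RL objective the actual system uses):

    import numpy as np

    rng = np.random.default_rng(0)

    # (state, action, reward, next_state) tuples standing in for an existing
    # robot dataset.
    offline_dataset = [(rng.normal(size=4), rng.normal(size=2), rng.normal(),
                        rng.normal(size=4)) for _ in range(256)]

    theta = np.zeros((2, 4))   # linear policy weights, purely illustrative

    def policy(state):
        return theta @ state

    def update(batch, lr=1e-3):
        """Step the policy toward the batch actions; a real system would use
        an offline RL loss here rather than behaviour cloning."""
        global theta
        for s, a, r, s2 in batch:
            theta += lr * np.outer(a - policy(s), s)

    def sample(buffer, n=32):
        return [buffer[i] for i in rng.integers(0, len(buffer), size=n)]

    replay = list(offline_dataset)

    # Phase 1: offline pre-training on the prior robot dataset.
    for _ in range(200):
        update(sample(replay))

    # Phase 2: online fine-tuning; fresh rollouts join the same buffer.
    for _ in range(50):
        s = rng.normal(size=4)                        # placeholder observation
        a = policy(s) + 0.1 * rng.normal(size=2)      # exploration noise
        replay.append((s, a, -np.linalg.norm(a), rng.normal(size=4)))
        update(sample(replay))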
arXiv Detail & Related papers (2023-10-23T17:50:08Z)
- Dexterous Manipulation from Images: Autonomous Real-World RL via Substep
Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We present experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
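One hedged reading of "sub-tasks defined with image examples" is a small success classifier per sub-step, trained on the user's goal images and used as a reward signal; the toy below substitutes random feature vectors for images:

    import numpy as np

    rng = np.random.default_rng(0)

    def train_classifier(positives, negatives):
        """Nearest-mean 'classifier' over image features; returns a scorer
        where higher means closer to the user's goal examples."""
        mu_pos, mu_neg = positives.mean(axis=0), negatives.mean(axis=0)
        return lambda x: np.linalg.norm(x - mu_neg) - np.linalg.norm(x - mu_pos)

    substeps = ["reach", "grasp", "lift"]
    scorers = [train_classifier(rng.normal(loc=i, size=(20, 8)),  # goal images
                                rng.normal(size=(20, 8)))         # non-goal
               for i, _ in enumerate(substeps, start=1)]

    stage, obs = 0, rng.normal(size=8)
    for t in range(100):
        obs = obs + 0.1 * rng.normal(size=8) + 0.05  # fake policy progress
        reward = scorers[stage](obs)                 # classifier score = reward
        if reward > 2.0 and stage < len(substeps) - 1:
            stage += 1                               # advance to next sub-task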
arXiv Detail & Related papers (2022-12-19T22:50:40Z)
- Active Predicting Coding: Brain-Inspired Reinforcement Learning for
Sparse Reward Robotic Control Problems [79.07468367923619]
We propose a backpropagation-free approach to robotic control through the neuro-cognitive computational framework of neural generative coding (NGC).
We design an agent built completely from powerful predictive coding/processing circuits that facilitate dynamic, online learning from sparse rewards.
We show that our proposed ActPC agent performs well in the face of sparse (extrinsic) reward signals and is competitive with or outperforms several powerful backprop-based RL approaches.
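A generic predictive-coding sketch (not the ActPC agent itself) conveys the backprop-free flavour: each layer predicts the activity of the layer below, states relax against local prediction errors, and every weight matrix learns from its own local error only:

    import numpy as np

    rng = np.random.default_rng(0)

    sizes = [8, 16, 4]               # observation, hidden, top layer widths
    W = [rng.normal(0, 0.1, (sizes[i], sizes[i + 1])) for i in range(2)]

    def settle_and_learn(obs, settle_steps=20, lr=1e-2):
        z = [obs] + [rng.normal(0, 0.1, n) for n in sizes[1:]]  # layer states
        for _ in range(settle_steps):
            # Local prediction errors: layer i+1 predicts layer i.
            e = [z[i] - W[i] @ np.tanh(z[i + 1]) for i in range(2)]
            for i in (1, 2):         # relax states against adjacent errors
                dz = W[i - 1].T @ e[i - 1] * (1 - np.tanh(z[i]) ** 2)
                if i < 2:
                    dz = dz - e[i]
                z[i] = z[i] + 0.1 * dz
        e = [z[i] - W[i] @ np.tanh(z[i + 1]) for i in range(2)]
        for i in range(2):           # purely local, Hebbian-like updates
            W[i] += lr * np.outer(e[i], np.tanh(z[i + 1]))
        return sum(float(np.sum(err ** 2)) for err in e)

    obs = rng.normal(size=8)         # a fixed observation to settle on
    for step in range(200):
        loss = settle_and_learn(obs)
    print("final total prediction error:", round(loss, 4))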
arXiv Detail & Related papers (2022-09-19T16:49:32Z)
- Automating Reinforcement Learning with Example-based Resets [19.86233948960312]
Existing reinforcement learning algorithms assume an episodic setting in which the agent resets to a fixed initial state distribution at the end of each episode.
We propose an extension to conventional reinforcement learning towards greater autonomy by introducing an additional agent that learns to reset in a self-supervised manner.
We apply our method to learn from scratch on a suite of simulated and real-world continuous control tasks and demonstrate that the reset agent successfully learns to reduce manual resets.
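The forward/reset interplay can be caricatured in a few lines (toy dynamics, illustrative names): a reset episode follows every forward episode, and a human intervenes only when the learned reset misses the initial-state region:

    import random

    def forward_episode(state):
        return state + random.uniform(0.0, 2.0)   # the task drives state away

    def reset_episode(state):
        return state * random.uniform(0.0, 0.6)   # learned reset pulls it back

    def near_initial(state, tol=0.5):
        # Stand-in for the example-based check of whether we are back in the
        # initial state distribution.
        return abs(state) < tol

    state, manual_resets = 0.0, 0
    for episode in range(100):
        state = forward_episode(state)
        state = reset_episode(state)
        if not near_initial(state):
            state, manual_resets = 0.0, manual_resets + 1  # human reset
    print("manual resets needed:", manual_resets)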
arXiv Detail & Related papers (2022-04-05T08:12:42Z)
- Backprop-Free Reinforcement Learning with Active Neural Generative
Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
arXiv Detail & Related papers (2021-07-10T19:02:27Z)
- Neural Dynamic Policies for End-to-End Sensorimotor Learning [51.24542903398335]
The current dominant paradigm in sensorimotor control, whether imitation or reinforcement learning, is to train policies directly in raw action spaces.
We propose Neural Dynamic Policies (NDPs) that make predictions in trajectory distribution space.
NDPs outperform the prior state-of-the-art in terms of either efficiency or performance across several robotic control tasks.
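A minimal sketch of predicting in trajectory space (a DMP-like second-order attractor; the sizes and the linear "policy" are illustrative) instead of emitting one raw action per step:

    import numpy as np

    rng = np.random.default_rng(0)

    def rollout_dmp(goal, weights, y0=0.0, T=50, dt=0.02, alpha=25.0, beta=6.25):
        """Integrate y'' = alpha*(beta*(goal - y) - y') + forcing into a
        smooth trajectory; 'weights' shape the learned forcing term."""
        y, yd, traj = y0, 0.0, []
        centres = np.linspace(0, 1, len(weights))
        for t in range(T):
            phase = np.exp(-2.0 * t / T)             # decaying phase variable
            basis = np.exp(-((phase - centres) ** 2) / 0.02)
            forcing = phase * (weights @ basis) / basis.sum()
            ydd = alpha * (beta * (goal - y) - yd) + forcing
            yd, y = yd + dt * ydd, y + dt * yd
            traj.append(y)
        return np.array(traj)

    state = rng.normal(size=4)
    W_policy = rng.normal(0, 0.1, (11, 4))    # toy "network": state -> params
    params = W_policy @ state
    goal, weights = params[0], params[1:]     # DMP goal + forcing weights
    trajectory = rollout_dmp(goal, weights)   # whole action sequence at once
    print("endpoint:", round(float(trajectory[-1]), 3),
          "goal:", round(float(goal), 3))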
arXiv Detail & Related papers (2020-12-04T18:59:32Z)
- DREAM Architecture: a Developmental Approach to Open-Ended Learning in
Robotics [44.62475518267084]
We present a developmental cognitive architecture to bootstrap this redescription process stage by stage, build new state representations with appropriate motivations, and transfer the acquired knowledge across domains, tasks, or even robots.
arXiv Detail & Related papers (2020-05-13T09:29:40Z)
- Thinking While Moving: Deep Reinforcement Learning with Concurrent
Control [122.49572467292293]
We study reinforcement learning in settings where sampling an action from the policy must be done concurrently with the time evolution of the controlled system.
Much like a person or an animal, the robot must think and move at the same time, deciding on its next action before the previous one has completed.
arXiv Detail & Related papers (2020-04-13T17:49:29Z)
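The concurrent setting can be mimicked in a toy loop where the next command is chosen while the previous one is still executing, so the policy conditions on the action currently in flight (all dynamics below are invented for illustration):

    import numpy as np

    rng = np.random.default_rng(0)

    def policy(state, action_in_flight):
        # A real agent would use a learned network; this one just steers
        # against where the in-flight action is already taking the system.
        return -0.5 * (state + action_in_flight)

    def env_step(state, action):
        # The world keeps evolving while we compute: the command executed now
        # is the one chosen one decision step earlier.
        return 0.9 * state + 0.1 * action + 0.01 * rng.normal(size=state.shape)

    state = rng.normal(size=3)
    in_flight = np.zeros(3)                    # action currently executing
    for t in range(100):
        next_action = policy(state, in_flight) # decided before in_flight ends
        state = env_step(state, in_flight)     # world moves under old action
        in_flight = next_action                # hand over at the next tick
    print("final |state|:", round(float(np.linalg.norm(state)), 3))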
This list is automatically generated from the titles and abstracts of the papers on this site.