On the Reliability and Generalizability of Brain-inspired Reinforcement
Learning Algorithms
- URL: http://arxiv.org/abs/2007.04578v1
- Date: Thu, 9 Jul 2020 06:32:42 GMT
- Title: On the Reliability and Generalizability of Brain-inspired Reinforcement
Learning Algorithms
- Authors: Dongjae Kim, Jee Hang Lee, Jae Hoon Shin, Minsu Abel Yang, and Sang Wan Lee
- Abstract summary: We show that the computational model combining model-based and model-free control, which we term the prefrontal RL, reliably encodes the high-level policies that humans learned.
This is the first attempt to formally test the possibility that computational models mimicking the way the brain solves general problems can lead to practical solutions.
- Score: 10.09712608508383
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although deep RL models have shown a great potential for solving various
types of tasks with minimal supervision, several key challenges remain in terms
of learning from limited experience, adapting to environmental changes, and
generalizing learning from a single task. Recent evidence in decision
neuroscience has shown that the human brain has an innate capacity to resolve
these issues, leading to optimism regarding the development of
neuroscience-inspired solutions toward sample-efficient and generalizable RL
algorithms. We show that the computational model combining model-based and
model-free control, which we term the prefrontal RL, reliably encodes the
high-level policies that humans learned, and that this model can generalize the
learned policy to a wide range of tasks. First, we trained the prefrontal RL
and deep RL algorithms on data from 82 subjects, collected while human
participants were performing two-stage Markov decision tasks in which we
manipulated the goal, state-transition uncertainty, and state-space complexity.
In the reliability test, which includes the latent behavior profile and the
parameter recoverability test, we showed that the prefrontal RL reliably
learned the latent policies of the humans, while all the other models failed.
Second, to test the ability to generalize what these models learned from the
original task, we situated them in the context of environmental volatility.
Specifically, we ran large-scale simulations with 10 Markov decision tasks, in
which latent context variables change over time. Our information-theoretic
analysis showed that the prefrontal RL achieved the highest level of
adaptability and episodic encoding efficacy. This is the first attempt to formally test the
possibility that computational models mimicking the way the brain solves
general problems can lead to practical solutions to key challenges in machine
learning.
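
To make the combination of controllers concrete, the sketch below is a minimal, illustrative example (not the authors' code) of reliability-weighted arbitration between a model-free and a model-based learner on a simplified two-stage Markov decision task. The 2x2 state/action layout, learning rates, softmax temperature, and the specific reliability update are assumptions made for illustration only; the paper's prefrontal RL model is more elaborate.

```python
# Minimal, illustrative sketch (not the authors' code) of reliability-weighted
# arbitration between model-free (MF) and model-based (MB) control on a
# simplified two-stage Markov decision task. All names and constants here are
# assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

N_S1_ACTIONS = 2            # actions available at stage 1
N_S2_STATES = 2             # possible stage-2 states
N_S2_ACTIONS = 2            # actions available at stage 2
P_COMMON = 0.7              # probability of the "common" stage-1 transition
ALPHA, GAMMA = 0.2, 1.0     # learning rate and discount factor

q_mf1 = np.zeros(N_S1_ACTIONS)                      # MF stage-1 action values
q_mf2 = np.zeros((N_S2_STATES, N_S2_ACTIONS))       # MF stage-2 action values
trans = np.full((N_S1_ACTIONS, N_S2_STATES), 0.5)   # learned transition model
rel_mf, rel_mb = 0.5, 0.5                           # reliability estimates

reward_prob = np.array([[0.8, 0.2],                 # P(reward | stage-2 state, action)
                        [0.2, 0.8]])

def softmax_choice(q, beta=3.0):
    """Sample an action from a softmax over action values."""
    p = np.exp(beta * (q - q.max()))
    p /= p.sum()
    return rng.choice(len(q), p=p)

for trial in range(500):
    # Arbitration: weight MB vs. MF stage-1 values by their relative reliability.
    w_mb = rel_mb / (rel_mb + rel_mf)
    q_mb1 = trans @ q_mf2.max(axis=1)               # MB values via the learned model
    q1 = w_mb * q_mb1 + (1.0 - w_mb) * q_mf1

    a1 = softmax_choice(q1)
    s2 = a1 if rng.random() < P_COMMON else 1 - a1  # stochastic stage-1 transition
    a2 = softmax_choice(q_mf2[s2])
    r = float(rng.random() < reward_prob[s2, a2])   # binary reward at stage 2

    # Model-free TD updates (reward prediction errors).
    delta2 = r - q_mf2[s2, a2]
    q_mf2[s2, a2] += ALPHA * delta2
    delta1 = GAMMA * q_mf2[s2].max() - q_mf1[a1]
    q_mf1[a1] += ALPHA * delta1

    # Model learning (state prediction error) for the MB controller.
    spe = 1.0 - trans[a1, s2]
    trans[a1, s2] += ALPHA * spe
    trans[a1] /= trans[a1].sum()

    # Reliability tracking: low recent prediction errors -> higher reliability.
    rel_mf += ALPHA * ((1.0 - min(abs(delta2), 1.0)) - rel_mf)
    rel_mb += ALPHA * ((1.0 - min(abs(spe), 1.0)) - rel_mb)

print("final arbitration weight toward model-based control:", rel_mb / (rel_mb + rel_mf))
```

The design choice mirrored here is that each controller's weight in the stage-1 value mixture is driven by how small its own recent prediction errors have been, so control shifts toward whichever system is currently more reliable.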
Related papers
- Towards Sample-Efficiency and Generalization of Transfer and Inverse Reinforcement Learning: A Comprehensive Literature Review [50.67937325077047]
This paper is devoted to a comprehensive review of achieving sample efficiency and generalization in RL algorithms through transfer and inverse reinforcement learning (T-IRL).
Our findings indicate that a majority of recent research works have addressed the aforementioned challenges by utilizing human-in-the-loop and sim-to-real strategies.
Under the IRL framework, training schemes that require few experience transitions, as well as extensions of such frameworks to multi-agent and multi-intention problems, have been researchers' priorities in recent years.
arXiv Detail & Related papers (2024-11-15T15:18:57Z)
- Advancing Brain Imaging Analysis Step-by-step via Progressive Self-paced Learning [0.5840945370755134]
We introduce the Progressive Self-Paced Distillation (PSPD) framework, employing an adaptive and progressive pacing and distillation mechanism.
We validate PSPD's efficacy and adaptability across various convolutional neural networks using the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset.
arXiv Detail & Related papers (2024-07-23T02:26:04Z)
- Entropy Regularized Reinforcement Learning with Cascading Networks [9.973226671536041]
Deep RL uses neural networks as function approximators.
One of the major difficulties of RL is the absence of i.i.d. data.
In this work, we challenge the common practice, inherited from the (un)supervised learning community, of using a fixed neural architecture.
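
As a rough illustration of the entropy-regularized objective named in this paper's title, the following minimal sketch (assumed, not taken from the paper; the cascading-architecture component is omitted, and the network shape and entropy coefficient are placeholders) adds an entropy bonus to a plain policy-gradient loss:

```python
# Minimal sketch (assumed, not the paper's code) of an entropy-regularized
# policy-gradient loss; network shape and entropy coefficient are placeholders.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)
ENTROPY_COEF = 0.01  # strength of the entropy regularizer

def update(states, actions, returns):
    """One policy-gradient step with an entropy bonus that discourages
    premature collapse to a deterministic policy.
    states: FloatTensor (batch, 4); actions: LongTensor (batch,);
    returns: FloatTensor (batch,)."""
    dist = torch.distributions.Categorical(logits=policy(states))
    pg_loss = -(dist.log_prob(actions) * returns).mean()   # REINFORCE-style term
    entropy = dist.entropy().mean()                         # H[pi(.|s)]
    loss = pg_loss - ENTROPY_COEF * entropy                 # maximize reward + alpha * entropy
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```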
arXiv Detail & Related papers (2022-10-16T10:28:59Z)
- Anti-Retroactive Interference for Lifelong Learning [65.50683752919089]
We design a paradigm for lifelong learning based on meta-learning and the associative mechanism of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
Theoretical analysis shows that the proposed learning paradigm can make the models of different tasks converge to the same optimum.
arXiv Detail & Related papers (2022-08-27T09:27:36Z)
- What deep reinforcement learning tells us about human motor learning and vice-versa [24.442174952832108]
We show how recent deep RL methods correspond to the dominant motor learning framework in neuroscience, error-based learning.
We introduce a novel deep RL algorithm: model-based deterministic policy gradients (MB-DPG).
MB-DPG draws inspiration from error-based learning by explicitly relying on the observed outcome of actions.
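
The following is a heavily hedged sketch of the error-based idea described above: a deterministic policy is updated by backpropagating the discrepancy between a predicted action outcome and a desired outcome through a differentiable forward model, rather than from a scalar reward. The network shapes, names, and the assumption of a pre-fitted forward model are illustrative only and may differ from the actual MB-DPG algorithm.

```python
# Heavily hedged sketch of error-based learning with a deterministic policy
# and a differentiable forward model, in the spirit of the description above;
# names, shapes, and the pre-fitted forward model are assumptions and may
# differ from the actual MB-DPG algorithm.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 8, 2
policy = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.Tanh(), nn.Linear(64, ACT_DIM))
forward_model = nn.Sequential(nn.Linear(OBS_DIM + ACT_DIM, 64), nn.Tanh(),
                              nn.Linear(64, OBS_DIM))   # predicts the action's outcome
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)  # only the policy is trained here

def error_based_update(state, target_outcome):
    """Update the deterministic policy from an explicit outcome error
    (rather than a scalar reward), backpropagated through the forward model."""
    action = policy(state)
    predicted_outcome = forward_model(torch.cat([state, action], dim=-1))
    error = predicted_outcome - target_outcome   # vector-valued outcome error
    loss = (error ** 2).mean()
    optimizer.zero_grad()
    loss.backward()                              # gradient flows through the model into the action
    optimizer.step()
    return loss.item()
```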
arXiv Detail & Related papers (2022-08-23T11:56:49Z)
- Training and Evaluation of Deep Policies using Reinforcement Learning and Generative Models [67.78935378952146]
GenRL is a framework for solving sequential decision-making problems.
It exploits the combination of reinforcement learning and latent variable generative models.
We experimentally determine the characteristics of generative models that have the most influence on the performance of the final policy training.
arXiv Detail & Related papers (2022-04-18T22:02:32Z)
- REIN-2: Giving Birth to Prepared Reinforcement Learning Agents Using Reinforcement Learning Agents [0.0]
In this paper, we introduce a meta-learning scheme that shifts the objective of learning to solve a task into the objective of learning to learn to solve a task (or a set of tasks).
Our model, named REIN-2, is a meta-learning scheme formulated within the RL framework, the goal of which is to develop a meta-RL agent that learns how to produce other RL agents.
Experimental results show that, compared to traditional state-of-the-art deep RL algorithms, our model achieves remarkable performance in popular OpenAI Gym environments.
arXiv Detail & Related papers (2021-10-11T10:13:49Z)
- Combining Pessimism with Optimism for Robust and Efficient Model-Based Deep Reinforcement Learning [56.17667147101263]
In real-world tasks, reinforcement learning agents encounter situations that are not present during training time.
To ensure reliable performance, the RL agents need to exhibit robustness against worst-case situations.
We propose the Robust Hallucinated Upper-Confidence RL (RH-UCRL) algorithm to provably solve this problem.
arXiv Detail & Related papers (2021-03-18T16:50:17Z)
- Efficient Model-Based Reinforcement Learning through Optimistic Policy Search and Planning [93.1435980666675]
We show how optimistic exploration can be easily combined with state-of-the-art reinforcement learning algorithms.
Our experiments demonstrate that optimistic exploration significantly speeds up learning when there are penalties on actions.
arXiv Detail & Related papers (2020-06-15T18:37:38Z)
- Rethinking Generalization of Neural Models: A Named Entity Recognition Case Study [81.11161697133095]
We take the NER task as a testbed to analyze the generalization behavior of existing models from different perspectives.
Experiments with in-depth analyses diagnose the bottleneck of existing neural NER models.
As a by-product of this paper, we have open-sourced a project that involves a comprehensive summary of recent NER papers.
arXiv Detail & Related papers (2020-01-12T04:33:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.