On the Importance of Critical Period in Multi-stage Reinforcement Learning
- URL: http://arxiv.org/abs/2208.04832v1
- Date: Tue, 9 Aug 2022 15:17:22 GMT
- Title: On the Importance of Critical Period in Multi-stage Reinforcement Learning
- Authors: Junseok Park, Inwoo Hwang, Min Whoo Lee, Hyunseok Oh, Minsu Lee,
Youngki Lee, Byoung-Tak Zhang
- Abstract summary: In recent studies, an AI agent exhibited a learning period similar to the human critical period.
We propose multi-stage reinforcement learning to emphasize finding the appropriate stimulus.
- Score: 18.610737380842494
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The initial years of an infant's life are known as the critical period,
during which the overall development of learning performance is significantly
impacted due to neural plasticity. In recent studies, an AI agent, with a deep
neural network mimicking mechanisms of actual neurons, exhibited a learning
period similar to the human critical period. Especially during this initial
period, appropriate stimuli play a vital role in developing learning
ability. However, transforming human cognitive bias into an appropriate shaping
reward is quite challenging, and prior works on the critical period do not focus
on finding the appropriate stimulus. To take a step further, we propose
multi-stage reinforcement learning to emphasize finding the "appropriate stimulus"
around the critical period. Inspired by humans' early cognitive-developmental
stage, we use multi-stage guidance near the critical period, and demonstrate
the appropriate shaping reward (stage-2 guidance) in terms of the AI agent's
performance, efficiency, and stability.
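The abstract's multi-stage guidance can be read as stage-dependent reward shaping: a shaping bonus guides the agent around the critical period and is withdrawn afterward. The sketch below illustrates this idea only; the stage names, step thresholds, and reward magnitudes are illustrative assumptions, not the paper's actual scheme.

```python
# Hypothetical sketch of multi-stage guidance via reward shaping.
# All stage boundaries and magnitudes below are illustrative
# assumptions, not values from the paper.

def stage_of(step: int) -> int:
    """Map a training step to a guidance stage around the critical period."""
    if step < 10_000:       # stage 1: coarse guidance early in training
        return 1
    elif step < 50_000:     # stage 2: finer guidance near the critical period
        return 2
    return 3                # stage 3: guidance withdrawn

def shaping_reward(stage: int, dist_to_goal: float, prev_dist: float) -> float:
    """Stage-dependent shaping bonus added to the environment reward."""
    progress = prev_dist - dist_to_goal  # > 0 when the agent moved closer
    if stage == 1:
        return 0.1 if progress > 0 else 0.0  # sparse, coarse signal
    if stage == 2:
        return 0.5 * progress                # dense, proportional signal
    return 0.0                               # environment reward only

def total_reward(env_reward: float, step: int,
                 dist_to_goal: float, prev_dist: float) -> float:
    """Reward the agent actually trains on at a given step."""
    return env_reward + shaping_reward(stage_of(step), dist_to_goal, prev_dist)
```

Under this reading, "stage-2 guidance" corresponds to the denser shaping signal applied near the critical period, after which the agent learns from the environment reward alone.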
Related papers
- Critical Learning Periods Emerge Even in Deep Linear Networks [102.89011295243334]
Critical learning periods are periods early in development where temporary sensory deficits can have a permanent effect on behavior and learned representations.
Despite the radical differences between biological and artificial networks, critical learning periods have been empirically observed in both systems.
arXiv Detail & Related papers (2023-08-23T16:01:50Z)
- Critical Learning Periods for Multisensory Integration in Deep Networks [112.40005682521638]
We show that the ability of a neural network to integrate information from diverse sources hinges critically on being exposed to properly correlated signals during the early phases of training.
We show that critical periods arise from complex and unstable early transient dynamics, which are decisive for the final performance of the trained system and its learned representations.
arXiv Detail & Related papers (2022-10-06T23:50:38Z)
- From Psychological Curiosity to Artificial Curiosity: Curiosity-Driven Learning in Artificial Intelligence Tasks [56.20123080771364]
Psychological curiosity plays a significant role in human intelligence to enhance learning through exploration and information acquisition.
In the Artificial Intelligence (AI) community, artificial curiosity provides a natural intrinsic motivation for efficient learning.
Curiosity-driven learning (CDL) has become increasingly popular, in which agents are self-motivated to learn novel knowledge.
arXiv Detail & Related papers (2022-01-20T17:07:03Z)
- Toddler-Guidance Learning: Impacts of Critical Period on Multimodal AI Agents [18.610737380842494]
We adapt the notion of critical periods to learning in AI agents and investigate the critical period in the virtual environment for AI agents.
We build a toddler-like environment with the VECA toolkit to mimic human toddlers' learning characteristics.
We evaluate the impact of critical periods on AI agents from two perspectives: how and when they are guided best in both uni- and multimodal learning.
arXiv Detail & Related papers (2022-01-12T10:57:40Z)
- Persistent Reinforcement Learning via Subgoal Curricula [114.83989499740193]
Value-accelerated Persistent Reinforcement Learning (VaPRL) generates a curriculum of initial states.
VaPRL reduces the interventions required by three orders of magnitude compared to episodic reinforcement learning.
arXiv Detail & Related papers (2021-07-27T16:39:45Z)
- Deep Multi-task Learning for Depression Detection and Prediction in Longitudinal Data [50.02223091927777]
Depression is among the most prevalent mental disorders, affecting millions of people of all ages globally.
Machine learning techniques have proven effective in enabling automated detection and prediction of depression for early intervention and treatment.
We introduce a novel deep multi-task recurrent neural network to tackle this challenge, in which depression classification is jointly optimized with two auxiliary tasks.
arXiv Detail & Related papers (2020-12-05T05:14:14Z)
- Towards Social & Engaging Peer Learning: Predicting Backchanneling and Disengagement in Children [10.312968200748116]
Social robots and interactive computer applications have the potential to foster early language development in young children by acting as peer learning companions.
We develop models to predict whether the listener will lose attention (Listener Disengagement Prediction, LDP) and the extent to which a robot should generate backchanneling responses (Backchanneling Extent Prediction, BEP).
Our experiments revealed the utility of multimodal features such as pupil dilation, blink rate, head movements, and facial action units, which had not been used before.
arXiv Detail & Related papers (2020-07-22T11:16:42Z)
- Understanding the Role of Training Regimes in Continual Learning [51.32945003239048]
Catastrophic forgetting affects the training of neural networks, limiting their ability to learn multiple tasks sequentially.
We study the effect of dropout, learning rate decay, and batch size, on forming training regimes that widen the tasks' local minima.
arXiv Detail & Related papers (2020-06-12T06:00:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.