Curriculum Learning and Imitation Learning for Model-free Control on
Financial Time-series
- URL: http://arxiv.org/abs/2311.13326v4
- Date: Sat, 13 Jan 2024 03:53:24 GMT
- Title: Curriculum Learning and Imitation Learning for Model-free Control on
Financial Time-series
- Authors: Woosung Koh, Insu Choi, Yuntae Jang, Gimin Kang, Woo Chang Kim
- Abstract summary: Curriculum learning and imitation learning have been leveraged extensively in the robotics domain.
We theoretically and empirically explore these approaches in a representative control task over complex time-series data.
Our findings reveal that curriculum learning should be considered a novel direction for improving control-task performance.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Curriculum learning and imitation learning have been leveraged extensively in
the robotics domain. However, little research has explored these ideas for
control tasks over highly stochastic time-series data. Here, we theoretically
and empirically explore both approaches in a representative control task over
complex time-series data. We implement the fundamental ideas of curriculum
learning via data augmentation, while imitation learning is implemented via
policy distillation from an oracle. Our findings reveal that curriculum
learning should be considered a novel direction for improving control-task
performance over complex time-series. Our extensive out-of-sample experiments
across many random seeds, together with ablation studies, are highly
encouraging for curriculum learning on time-series control. These findings are
especially encouraging given that we tune all overlapping hyperparameters on
the baseline, which gives the baseline an advantage. On the other hand, we find
that imitation learning should be used with caution.
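A minimal sketch of the two mechanisms described in the abstract, under assumptions of our own rather than the paper's: a discrete-action trading policy, a synthetic price series, and a hypothetical look-ahead oracle (`oracle_action`). Curriculum learning via data augmentation is illustrated by annealing additive noise from easy to hard stages; because the assumed oracle is deterministic, distillation from it reduces to cross-entropy on the oracle's chosen action.
```python
# Minimal sketch: curriculum via data augmentation + imitation via
# distillation from an oracle. All specifics here (network shape,
# action set, oracle rule) are illustrative assumptions, not the
# authors' implementation.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

WINDOW, N_ACTIONS = 32, 3  # lookback length; actions: short / flat / long

policy = nn.Sequential(nn.Linear(WINDOW, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def augment(series: np.ndarray, noise_scale: float) -> np.ndarray:
    """Curriculum via augmentation: larger noise_scale = harder sample."""
    return series + np.random.normal(0.0, noise_scale, size=series.shape)

def oracle_action(next_return: float) -> int:
    """Hypothetical look-ahead oracle: go long iff the next return is positive."""
    return 2 if next_return > 0 else 0

prices = np.cumsum(np.random.normal(0.0, 1.0, 2000))  # stand-in for market data

for noise in (0.1, 0.5, 1.0):  # easy -> hard curriculum stages
    for _ in range(200):
        t = np.random.randint(WINDOW, len(prices) - 1)
        x = augment(prices[t - WINDOW:t], noise)
        x = torch.tensor((x - x.mean()) / (x.std() + 1e-8), dtype=torch.float32)
        logits = policy(x).unsqueeze(0)
        target = torch.tensor([oracle_action(prices[t + 1] - prices[t])])
        loss = F.cross_entropy(logits, target)  # imitate the oracle's choice
        opt.zero_grad()
        loss.backward()
        opt.step()
```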
Related papers
- Reverse Forward Curriculum Learning for Extreme Sample and Demonstration Efficiency in Reinforcement Learning [17.092640837991883]
Reinforcement learning (RL) presents a promising framework to learn policies through environment interaction.
One direction includes augmenting RL with offline data demonstrating desired tasks, but past work often requires a large amount of high-quality demonstration data.
We show how the combination of a reverse curriculum and forward curriculum in our method, RFCL, enables significant improvements in demonstration and sample efficiency.
arXiv Detail & Related papers (2024-05-06T11:33:12Z)
- Label-efficient Time Series Representation Learning: A Review [19.218833228063392]
Label-efficient time series representation learning is crucial for deploying deep learning models in real-world applications.
To address the scarcity of labeled time series data, various strategies, e.g., transfer learning, self-supervised learning, and semi-supervised learning, have been developed.
We introduce a novel taxonomy, categorizing existing approaches as in-domain or cross-domain based on their reliance on external data sources.
arXiv Detail & Related papers (2023-02-13T15:12:15Z)
- Responsible Active Learning via Human-in-the-loop Peer Study [88.01358655203441]
We propose a responsible active learning method, namely Peer Study Learning (PSL), to simultaneously preserve data privacy and improve model stability.
We first introduce a human-in-the-loop teacher-student architecture to isolate unlabelled data from the task learner (teacher) on the cloud side.
During training, the task learner instructs the lightweight active learner, which then provides feedback on the active sampling criterion.
arXiv Detail & Related papers (2022-11-24T13:18:27Z)
- NEVIS'22: A Stream of 100 Tasks Sampled from 30 Years of Computer Vision Research [96.53307645791179]
We introduce the Never-Ending VIsual-classification Stream (NEVIS'22), a benchmark consisting of a stream of over 100 visual classification tasks.
Despite being limited to classification, the resulting stream has a rich diversity of tasks, ranging from OCR to texture analysis, scene recognition, and so forth.
Overall, NEVIS'22 poses an unprecedented challenge for current sequential learning approaches due to the scale and diversity of tasks.
arXiv Detail & Related papers (2022-11-15T18:57:46Z)
- What Makes Good Contrastive Learning on Small-Scale Wearable-based Tasks? [59.51457877578138]
We study contrastive learning on the wearable-based activity recognition task.
This paper presents an open-source PyTorch library, CL-HAR, which can serve as a practical tool for researchers.
arXiv Detail & Related papers (2022-02-12T06:10:15Z)
- An Analytical Theory of Curriculum Learning in Teacher-Student Networks [10.303947049948107]
In humans and animals, curriculum learning is critical to rapid learning and effective pedagogy.
In machine learning, curricula are not widely used and empirically often yield only moderate benefits.
arXiv Detail & Related papers (2021-06-15T11:48:52Z)
- Curriculum Learning: A Survey [65.31516318260759]
Curriculum learning strategies have been successfully employed in all areas of machine learning.
We construct a taxonomy of curriculum learning approaches by hand, considering various classification criteria.
We build a hierarchical tree of curriculum learning methods using an agglomerative clustering algorithm; a generic sketch of this clustering step appears after this list.
arXiv Detail & Related papers (2021-01-25T20:08:32Z)
- When Do Curricula Work? [26.072472732516335]
Ordered learning has been suggested as an improvement over standard i.i.d. training.
We conduct experiments over thousands of orderings spanning three kinds of learning: curriculum, anti-curriculum, and random-curriculum (the three orderings are sketched after this list).
We find that curricula have only marginal benefits, and that randomly ordered samples perform as well as or better than curricula and anti-curricula.
arXiv Detail & Related papers (2020-12-05T19:41:30Z)
- Parrot: Data-Driven Behavioral Priors for Reinforcement Learning [79.32403825036792]
We propose a method for pre-training behavioral priors that can capture complex input-output relationships observed in successful trials.
We show how this learned prior can be used for rapidly learning new tasks without impeding the RL agent's ability to try out novel behaviors.
arXiv Detail & Related papers (2020-11-19T18:47:40Z)
- Curriculum Learning for Reinforcement Learning Domains: A Framework and Survey [53.73359052511171]
Reinforcement learning (RL) is a popular paradigm for addressing sequential decision tasks in which the agent has only limited environmental feedback.
We present a framework for curriculum learning (CL) in RL, and use it to survey and classify existing CL methods in terms of their assumptions, capabilities, and goals.
arXiv Detail & Related papers (2020-03-10T20:41:24Z)
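Two of the entries above name concrete procedures. First, the hierarchy-building step mentioned in "Curriculum Learning: A Survey" can be sketched generically with SciPy's agglomerative clustering; the per-method feature vectors below are random placeholders, not the survey's actual encoding of methods.
```python
# Generic sketch of building a hierarchical tree with agglomerative
# clustering. Feature vectors (one per surveyed method) are placeholders.
import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage

features = np.random.rand(20, 8)            # one feature vector per method
tree = linkage(features, method="average")  # bottom-up pairwise merging
dendrogram(tree, no_plot=True)              # set no_plot=False to draw the tree
```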
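Second, the three sample orderings compared in "When Do Curricula Work?" can be sketched as follows. The difficulty scores and the linear pacing function are common choices for this kind of experiment, not necessarily the paper's exact setup.
```python
# Sketch of curriculum, anti-curriculum, and random ordering over a
# dataset, with a linear pacing function. Difficulty scores are
# placeholders; in practice they might be per-sample losses from a
# pretrained scoring model.
import numpy as np

rng = np.random.default_rng(0)
n, total_steps, batch_size = 1000, 100, 32
difficulty = rng.random(n)              # stand-in per-sample difficulty scores

curriculum = np.argsort(difficulty)     # easy -> hard
anti_curriculum = curriculum[::-1]      # hard -> easy
random_order = rng.permutation(n)       # i.i.d. baseline

def pacing(step: int) -> int:
    """Number of easiest samples unlocked at a given step (linear schedule)."""
    frac = min(1.0, (step + 1) / (0.5 * total_steps))  # full set by midpoint
    return max(1, int(frac * n))

for step in range(total_steps):
    k = pacing(step)
    batch = rng.choice(curriculum[:k], size=min(batch_size, k), replace=False)
    # ... compute the loss on `batch` and update the model here ...
```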