Selective Replay Enhances Learning in Online Continual Analogical
Reasoning
- URL: http://arxiv.org/abs/2103.03987v1
- Date: Sat, 6 Mar 2021 00:04:10 GMT
- Title: Selective Replay Enhances Learning in Online Continual Analogical
Reasoning
- Authors: Tyler L. Hayes and Christopher Kanan
- Abstract summary: In continual learning, a system learns from non-stationary data streams or batches without catastrophic forgetting.
While this problem has been heavily studied in supervised image classification and reinforcement learning, continual learning in neural networks designed for abstract reasoning has not yet been studied.
- Score: 44.794321821598395
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In continual learning, a system learns from non-stationary data streams or
batches without catastrophic forgetting. While this problem has been heavily
studied in supervised image classification and reinforcement learning,
continual learning in neural networks designed for abstract reasoning has not
yet been studied. Here, we study continual learning of analogical reasoning.
Analogical reasoning tests such as Raven's Progressive Matrices (RPMs) are
commonly used to measure non-verbal abstract reasoning in humans, and recently
offline neural networks for the RPM problem have been proposed. In this paper,
we establish experimental baselines, protocols, and forward and backward
transfer metrics to evaluate continual learners on RPMs. We employ experience
replay to mitigate catastrophic forgetting. Prior work using replay for image
classification tasks has found that selectively choosing the samples to replay
offers little, if any, benefit over random selection. In contrast, we find that
selective replay can significantly outperform random selection for the RPM
task.
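The selective-versus-random comparison in the abstract can be sketched with a toy replay buffer. Replaying the highest-loss stored samples is one plausible selection criterion, used here purely for illustration; the paper's actual selection strategies are not specified in this summary.

```python
import random

class ReplayBuffer:
    """Fixed-size buffer of (sample, loss) pairs.

    `loss` is whatever per-sample score the learner last recorded.
    `sample_selective` replays the currently hardest samples, while
    `sample_random` draws uniformly -- the two strategies the abstract
    compares. The loss-based ranking is an illustrative assumption.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []  # list of (sample, loss)

    def add(self, sample, loss):
        if len(self.items) >= self.capacity:
            self.items.pop(0)  # simple FIFO eviction
        self.items.append((sample, loss))

    def sample_random(self, k, rng=random):
        return [s for s, _ in rng.sample(self.items, min(k, len(self.items)))]

    def sample_selective(self, k):
        # replay the k samples the model currently finds hardest
        ranked = sorted(self.items, key=lambda it: it[1], reverse=True)
        return [s for s, _ in ranked[:k]]
```

In an online continual-learning loop, `add` would be called with each incoming example and its loss after the update step, and one of the two sampling methods would supply the replay mini-batch.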
Related papers
- Replay Can Provably Increase Forgetting [24.538643224479515]
A critical challenge for continual learning is forgetting, where the performance on previously learned tasks decreases as new tasks are introduced.
One of the commonly used techniques to mitigate forgetting, sample replay, has been shown empirically to reduce forgetting.
We show that even in a noiseless setting, forgetting can be non-monotonic with respect to the number of replay samples.
arXiv Detail & Related papers (2025-06-04T18:46:23Z) - Watch Your Step: Optimal Retrieval for Continual Learning at Scale [1.7265013728931]
In continual learning, a model learns incrementally over time while minimizing interference between old and new tasks.
One of the most widely used approaches in continual learning is referred to as replay.
We propose a framework for evaluating selective retrieval strategies, categorized by simple, independent class- and sample-selective primitives.
We propose a set of strategies to prevent duplicate replays and explore whether new samples with low loss values can be learned without replay.
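The duplicate-avoidance idea above can be illustrated with a minimal retrieval primitive that excludes recently replayed samples. The bookkeeping here (a shared `recently_replayed` set) is an assumption for the sketch, not the paper's exact mechanism.

```python
import random

def retrieve_without_duplicates(buffer_ids, k, recently_replayed, rng=random):
    """Draw up to k sample ids from the buffer, skipping any replayed
    recently, and record the chosen ids so later calls avoid them.

    A minimal sketch of duplicate-free retrieval; the paper's strategies
    compose richer class- and sample-selective primitives.
    """
    candidates = [i for i in buffer_ids if i not in recently_replayed]
    chosen = rng.sample(candidates, min(k, len(candidates)))
    recently_replayed.update(chosen)
    return chosen
```

Clearing the set periodically (e.g. once per pass over the buffer) would let samples become eligible for replay again.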
arXiv Detail & Related papers (2024-04-16T17:35:35Z) - Random Representations Outperform Online Continually Learned Representations [68.42776779425978]
We show that existing online continually trained deep networks produce inferior representations compared to simple pre-defined random transforms.
Our method, called RanDumb, significantly outperforms state-of-the-art continually learned representations across all online continual learning benchmarks.
Our study reveals the significant limitations of representation learning, particularly in low-exemplar and online continual learning scenarios.
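One way to realize a "pre-defined random transform plus simple classifier" pipeline like the one summarized above is a fixed random projection followed by a nearest-class-mean classifier updated online. RanDumb's actual embedding and classifier may differ in detail; this is only a sketch of the general recipe.

```python
import numpy as np

class RandomProjectionNCM:
    """Fixed random projection + online nearest-class-mean classifier.

    The projection is never trained, so there is no representation
    learning to forget; only per-class running means are updated.
    """

    def __init__(self, in_dim, proj_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((in_dim, proj_dim)) / np.sqrt(in_dim)
        self.sums, self.counts = {}, {}

    def partial_fit(self, x, y):
        z = x @ self.W
        self.sums[y] = self.sums.get(y, 0) + z
        self.counts[y] = self.counts.get(y, 0) + 1

    def predict(self, x):
        z = x @ self.W
        means = {c: s / self.counts[c] for c, s in self.sums.items()}
        return min(means, key=lambda c: np.linalg.norm(z - means[c]))
```

Because updates are order-independent sums, this baseline is naturally immune to the interference that plagues continually trained features.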
arXiv Detail & Related papers (2024-02-13T22:07:29Z) - Enhancing Consistency and Mitigating Bias: A Data Replay Approach for
Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods proposes replaying the data of experienced tasks when learning new tasks.
However, storing raw data is often impractical due to memory constraints or data privacy issues.
As a replacement, data-free data replay methods are proposed by inverting samples from the classification model.
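Model inversion of the kind described above can be illustrated on a linear classifier, where a pseudo-sample for a class is synthesized by gradient ascent on that class's logit with an L2 penalty. Real data-free replay methods invert deep networks with additional regularizers, so this is only a toy analogue.

```python
import numpy as np

def invert_class(W, target, steps=1000, lr=0.1, weight_decay=0.1):
    """Synthesize a pseudo-sample for class `target` from a linear
    classifier W (one weight row per class) by gradient ascent on the
    class logit minus an L2 penalty.

    Toy stand-in for data-free data replay via model inversion.
    """
    x = np.zeros(W.shape[1])
    for _ in range(steps):
        # d/dx [ w_c . x - 0.5 * wd * ||x||^2 ] = w_c - wd * x
        x += lr * (W[target] - weight_decay * x)
    return x
```

For this linear case the iteration converges to the fixed point `x = W[target] / weight_decay`, i.e. a scaled copy of the class weight vector, which the classifier assigns to the target class with maximal margin.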
arXiv Detail & Related papers (2024-01-12T12:51:12Z) - Detachedly Learn a Classifier for Class-Incremental Learning [11.865788374587734]
We present an analysis showing that the failure of vanilla experience replay (ER) stems from unnecessary re-learning of previous tasks and an inability to distinguish the current task from previous ones.
We propose a novel replay strategy, task-aware experience replay.
Experimental results show our method outperforms current state-of-the-art methods.
arXiv Detail & Related papers (2023-02-23T01:35:44Z) - Multi-Viewpoint and Multi-Evaluation with Felicitous Inductive Bias
Boost Machine Abstract Reasoning Ability [6.33280703577189]
We show that end-to-end neural networks embodying inductive biases, whether intentionally designed or serendipitously matched, can solve RPM problems.
Our work also reveals that multi-viewpoint with multi-evaluation is a key learning strategy for successful reasoning.
We hope that these results will serve as probes of AI's ability beyond perception and toward abstract reasoning.
arXiv Detail & Related papers (2022-10-26T17:15:44Z) - Reward Uncertainty for Exploration in Preference-based Reinforcement
Learning [88.34958680436552]
We present an exploration method specifically for preference-based reinforcement learning algorithms.
Our main idea is to design an intrinsic reward by measuring the novelty based on learned reward.
Our experiments show that exploration bonus from uncertainty in learned reward improves both feedback- and sample-efficiency of preference-based RL algorithms.
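A common way to turn "uncertainty in a learned reward" into an exploration bonus is ensemble disagreement: train several reward models and reward the agent for visiting state-actions where they disagree. The sketch below uses the standard deviation across the ensemble, which is one plausible estimator rather than necessarily the paper's exact one.

```python
import numpy as np

def intrinsic_reward(reward_ensemble, state_action):
    """Novelty bonus as disagreement (standard deviation) across an
    ensemble of learned reward models.

    `reward_ensemble` is any sequence of callables mapping a
    state-action array to a scalar predicted reward; the std is high
    where the models were trained on little preference feedback.
    """
    preds = np.array([r(state_action) for r in reward_ensemble])
    return preds.std()
```

During training, this bonus would be added (with some coefficient) to the extrinsic learned reward, pushing the agent toward regions where more preference feedback is informative.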
arXiv Detail & Related papers (2022-05-24T23:22:10Z) - Learning Bayesian Sparse Networks with Full Experience Replay for
Continual Learning [54.7584721943286]
Continual Learning (CL) methods aim to enable machine learning models to learn new tasks without catastrophic forgetting of those that have been previously mastered.
Existing CL approaches often keep a buffer of previously-seen samples, perform knowledge distillation, or use regularization techniques towards this goal.
We propose to only activate and select sparse neurons for learning current and past tasks at any stage.
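The "activate and select sparse neurons" idea can be illustrated with a simple top-k activation mask that zeroes all but the most active units in a layer; the paper's Bayesian selection mechanism is considerably more involved than this.

```python
import numpy as np

def sparse_forward(activations, k):
    """Keep only the k most active neurons, zeroing the rest.

    A minimal top-k sparsity mask: with gradients flowing only through
    the surviving units, most of the network is left untouched by the
    current task, which limits interference with past tasks.
    """
    mask = np.zeros_like(activations)
    top = np.argsort(activations)[-k:]
    mask[top] = 1.0
    return activations * mask
```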
arXiv Detail & Related papers (2022-02-21T13:25:03Z) - Unsupervised Abstract Reasoning for Raven's Problem Matrices [9.278113063631643]
Performance on Raven's Progressive Matrices (RPM) is highly correlated with human intelligence.
We propose the first unsupervised learning method for solving RPM problems.
Our method even outperforms some of the supervised approaches.
arXiv Detail & Related papers (2021-09-21T07:44:58Z) - An Investigation of Replay-based Approaches for Continual Learning [79.0660895390689]
Continual learning (CL) is a major challenge of machine learning (ML) and describes the ability to learn several tasks sequentially without catastrophic forgetting (CF).
Several solution classes have been proposed, of which so-called replay-based approaches seem very promising due to their simplicity and robustness.
We empirically investigate replay-based approaches of continual learning and assess their potential for applications.
arXiv Detail & Related papers (2021-08-15T15:05:02Z) - Automatic Recall Machines: Internal Replay, Continual Learning and the
Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead, the implicit memory of learned samples within the assessed model itself is exploited.
arXiv Detail & Related papers (2020-06-22T15:07:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.