Lifelong Wandering: A realistic few-shot online continual learning
setting
- URL: http://arxiv.org/abs/2206.07932v1
- Date: Thu, 16 Jun 2022 05:39:08 GMT
- Title: Lifelong Wandering: A realistic few-shot online continual learning
setting
- Authors: Mayank Lunayach, James Smith, Zsolt Kira
- Abstract summary: Online few-shot learning describes a setting where models are trained and evaluated on a stream of data while learning emerging classes.
While prior work in this setting has achieved very promising performance on instance classification when learning from data-streams composed of a single indoor environment, we propose to extend this setting to consider object classification on a series of several indoor environments.
In this work, we benchmark several existing methods and adapted baselines within our setting, and show there exists a trade-off between catastrophic forgetting and online performance.
- Score: 23.134299907227796
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Online few-shot learning describes a setting where models are trained and
evaluated on a stream of data while learning emerging classes. While prior work
in this setting has achieved very promising performance on instance
classification when learning from data-streams composed of a single indoor
environment, we propose to extend this setting to consider object
classification on a series of several indoor environments, which is likely to
occur in applications such as robotics. Importantly, our setting, which we
refer to as online few-shot continual learning, injects the well-studied issue
of catastrophic forgetting into the few-shot online learning paradigm. In this
work, we benchmark several existing methods and adapted baselines within our
setting, and show there exists a trade-off between catastrophic forgetting and
online performance. Our findings motivate the need for future work in this
setting, which can achieve better online performance without catastrophic
forgetting.
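As a rough illustration of the setting the abstract describes, the sketch below runs a model over a sequence of per-environment data streams with a test-then-train loop, then probes retention on held-out data. `model.predict`, `model.update`, and the held-out sets are hypothetical placeholders, not the paper's actual benchmark code.

```python
# Minimal sketch (our illustration, not the paper's code) of online
# few-shot continual learning across several environments: each batch is
# evaluated before the model trains on it, and forgetting is probed at
# the end on held-out data from every environment.

def run_protocol(model, env_streams, held_out_sets):
    online_correct, online_total = 0, 0
    for stream in env_streams:                      # one stream per indoor environment
        for x_batch, y_batch in stream:
            preds = model.predict(x_batch)          # test first (online performance)
            online_correct += sum(int(p == y) for p, y in zip(preds, y_batch))
            online_total += len(y_batch)
            model.update(x_batch, y_batch)          # then learn from the batch

    # Probe catastrophic forgetting: per-environment accuracy after the full stream.
    retained = []
    for xs, ys in held_out_sets:
        preds = model.predict(xs)
        retained.append(sum(int(p == y) for p, y in zip(preds, ys)) / len(ys))
    return online_correct / online_total, retained  # the trade-off the paper measures
```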
Related papers
- Learning Goal-Conditioned Policies Offline with Self-Supervised Reward
Shaping [94.89128390954572]
We propose a novel self-supervised learning phase on the pre-collected dataset to understand the structure and the dynamics of the model.
We evaluate our method on three continuous control tasks, and show that our model significantly outperforms existing approaches.
arXiv Detail & Related papers (2023-01-05T15:07:10Z)
- Bypassing Logits Bias in Online Class-Incremental Learning with a Generative Framework [15.345043222622158]
We focus on online class-incremental learning setting in which new classes emerge over time.
Almost all existing methods are replay-based with a softmax classifier.
We propose a novel generative framework based on the feature space.
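To make the idea concrete, here is a minimal sketch of one common feature-space generative scheme: per-class Gaussian statistics over deep features, pseudo-feature sampling for old classes, and nearest-class-mean prediction, which sidesteps a biased softmax head. This is a generic instantiation under our assumptions, not necessarily the authors' exact framework.

```python
import numpy as np

# Hedged sketch of feature-space generative replay for online
# class-incremental learning; illustrative only.

class FeatureGaussianMemory:
    def __init__(self):
        self.sums, self.sqs, self.counts = {}, {}, {}

    def update(self, feats, labels):
        # Accumulate running per-class first and second moments of features.
        for c in np.unique(labels):
            fc = feats[labels == c]
            self.sums[c] = self.sums.get(c, 0) + fc.sum(axis=0)
            self.sqs[c] = self.sqs.get(c, 0) + (fc ** 2).sum(axis=0)
            self.counts[c] = self.counts.get(c, 0) + len(fc)

    def _mean_std(self, c):
        mu = self.sums[c] / self.counts[c]
        var = np.maximum(self.sqs[c] / self.counts[c] - mu ** 2, 1e-8)
        return mu, np.sqrt(var)

    def sample(self, c, n):
        # Draw pseudo-features for past class c to replay alongside new data.
        mu, sd = self._mean_std(c)
        return np.random.normal(mu, sd, size=(n, mu.shape[0]))

    def predict(self, feats):
        # Nearest-class-mean classification in feature space avoids the
        # logits bias of a softmax head toward recently seen classes.
        classes = sorted(self.counts)
        means = np.stack([self._mean_std(c)[0] for c in classes])
        dists = ((feats[:, None, :] - means[None, :, :]) ** 2).sum(axis=-1)
        return np.asarray(classes)[dists.argmin(axis=1)]
```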
arXiv Detail & Related papers (2022-05-19T06:54:20Z)
- Continual Predictive Learning from Videos [100.27176974654559]
We study a new continual learning problem in the context of video prediction.
We propose the continual predictive learning (CPL) approach, which learns a mixture world model via predictive experience replay.
We construct two new benchmarks based on RoboNet and KTH, in which different tasks correspond to different physical robotic environments or human actions.
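For context, experience-replay methods like the one summarized above are built on a stream buffer. Below is a standard reservoir-sampling buffer, a generic building block rather than the CPL implementation, which additionally replays predictions from a mixture world model.

```python
import random

# Generic reservoir-sampling replay buffer: keeps an unbiased uniform
# sample of an unbounded stream in fixed memory.

class ReservoirBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
        self.seen = 0

    def add(self, item):
        # The i-th stream item survives with probability capacity / i,
        # so the buffer is always a uniform sample of everything seen.
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(item)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = item

    def sample(self, k):
        return random.sample(self.items, min(k, len(self.items)))
```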
arXiv Detail & Related papers (2022-04-12T08:32:26Z)
- Continual learning: a feature extraction formalization, an efficient algorithm, and fundamental obstructions [30.61165302635335]
Continual learning is an emerging paradigm in machine learning.
In this paper, we propose a formalization of continual learning through the lens of feature extraction.
arXiv Detail & Related papers (2022-03-27T20:20:41Z)
- Online Continual Learning with Natural Distribution Shifts: An Empirical Study with Visual Data [101.6195176510611]
"Online" continual learning enables evaluating both information retention and online learning efficacy.
In online continual learning, each incoming small batch of data is first used for testing and then added to the training set, making the problem truly online.
We introduce a new benchmark for online continual visual learning that exhibits large scale and natural distribution shifts.
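One standard formalization of the two quantities this benchmark separates, written in notation we assume here (the paper's own may differ):

```latex
% Online learning efficacy: average test-then-train accuracy over the stream.
% Information retention: the drop from each portion's best accuracy.
\[
  \mathrm{AccOnline} \;=\; \frac{1}{T}\sum_{t=1}^{T}
      \operatorname{acc}\bigl(f_{\theta_{t-1}},\, B_t\bigr),
  \qquad
  \mathrm{Forget}_k \;=\; \max_{t \le T} a_{t,k} \;-\; a_{T,k}
\]
```

Here $B_t$ is the $t$-th incoming batch, $f_{\theta_{t-1}}$ is the model before training on $B_t$, and $a_{t,k}$ is accuracy on earlier data portion $k$ after step $t$.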
arXiv Detail & Related papers (2021-08-20T06:17:20Z)
- Knowledge Consolidation based Class Incremental Online Learning with Limited Data [41.87919913719975]
We propose a novel approach for class incremental online learning in a limited data setting.
We learn robust representations that are generalizable across tasks without suffering from the problems of catastrophic forgetting and overfitting.
arXiv Detail & Related papers (2021-06-12T15:18:29Z)
- Online Coreset Selection for Rehearsal-based Continual Learning [65.85595842458882]
In continual learning, we store a subset of training examples (coreset) to be replayed later to alleviate catastrophic forgetting.
We propose Online Coreset Selection (OCS), a simple yet effective method that selects the most representative and informative coreset at each iteration.
Our proposed method maximizes the model's adaptation to a target dataset while selecting high-affinity samples to past tasks, which directly inhibits catastrophic forgetting.
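A hedged sketch of gradient-based coreset scoring in the spirit of OCS: rank incoming samples by how well their per-sample gradients align with the batch mean gradient (representativeness) and with gradients of stored past samples (affinity). The equal weighting and exact scoring below are our assumptions, not the authors' objective.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two flattened gradient vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def select_coreset(per_sample_grads, past_grads, k):
    """per_sample_grads: (n, d) gradients of the incoming batch.
    past_grads: (m, d) gradients of samples already stored for replay.
    Returns indices of the k samples chosen for the coreset."""
    mean_g = per_sample_grads.mean(axis=0)
    scores = []
    for g in per_sample_grads:
        rep = cosine(g, mean_g)                     # representative of the batch
        aff = (np.mean([cosine(g, p) for p in past_grads])
               if len(past_grads) else 0.0)         # high affinity to past tasks
        scores.append(rep + aff)                    # equal weighting (assumption)
    return np.argsort(scores)[-k:]
```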
arXiv Detail & Related papers (2021-06-02T11:39:25Z)
- Learning to Continuously Optimize Wireless Resource In Episodically Dynamic Environment [55.91291559442884]
This work develops a methodology that enables data-driven methods to continuously learn and optimize in a dynamic environment.
We propose to build the notion of continual learning into the modeling process of learning wireless systems.
Our design is based on a novel min-max formulation which ensures certain "fairness" across different data samples.
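The generic form of such a min-max objective (an assumption about the formulation; the paper's exact version may differ) places adversarial weights on the per-sample losses, so no sample's loss can be quietly sacrificed:

```latex
\[
  \min_{\theta}\; \max_{\lambda \in \Delta_n}\;
  \sum_{i=1}^{n} \lambda_i\, \ell\bigl(f_\theta(x_i),\, y_i\bigr),
  \qquad
  \Delta_n = \Bigl\{ \lambda \in \mathbb{R}^{n}_{\ge 0} \;:\; \sum_{i=1}^{n} \lambda_i = 1 \Bigr\}
\]
```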
arXiv Detail & Related papers (2020-11-16T08:24:34Z)
- Wandering Within a World: Online Contextualized Few-Shot Learning [62.28521610606054]
We aim to bridge the gap between typical human and machine-learning environments by extending the standard framework of few-shot learning to an online setting.
We propose a new prototypical few-shot learning approach built on large-scale indoor imagery that mimics the visual experience of an agent wandering within a world.
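For readers unfamiliar with the prototypical recipe this builds on, here is the standard version (Snell et al., 2017): class prototypes are mean support embeddings, and queries go to the nearest prototype. The paper's online, contextualized variant extends this; the sketch below is only the generic baseline.

```python
import numpy as np

def prototypes(support_feats, support_labels):
    # One prototype per class: the mean embedding of its support examples.
    classes = np.unique(support_labels)
    protos = np.stack(
        [support_feats[support_labels == c].mean(axis=0) for c in classes])
    return classes, protos

def classify(query_feats, classes, protos):
    # Assign each query to the class with the nearest (Euclidean) prototype.
    dists = ((query_feats[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return classes[dists.argmin(axis=1)]
```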
arXiv Detail & Related papers (2020-07-09T04:05:04Z)
- Move-to-Data: A new Continual Learning approach with Deep CNNs, Application for image-class recognition [0.0]
The model must first be pre-trained in a "training recording phase" and then adjusted to newly arriving data.
We propose a fast continual learning layer at the end of the neural network.
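As we read it, such a layer can be updated without gradients by moving class weights toward incoming features. The sketch below shows that idea with a hypothetical step size `alpha` and unit-norm weights; it should not be taken as the authors' exact update rule.

```python
import numpy as np

def move_to_data(W, feat, label, alpha=0.05):
    """Gradient-free last-layer update (our reading of the idea).
    W: (num_classes, d) last-layer weights; feat: (d,) feature of the
    new sample; label: its class index; alpha: step size (assumption)."""
    f = feat / (np.linalg.norm(feat) + 1e-12)
    # Nudge the observed class's weight vector toward the new feature.
    W[label] = (1.0 - alpha) * W[label] + alpha * f
    W[label] /= np.linalg.norm(W[label]) + 1e-12   # keep weights on the unit sphere
    return W
```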
arXiv Detail & Related papers (2020-06-12T13:04:58Z)