Few-Shot Continual Active Learning by a Robot
- URL: http://arxiv.org/abs/2210.04137v2
- Date: Wed, 12 Oct 2022 20:39:14 GMT
- Title: Few-Shot Continual Active Learning by a Robot
- Authors: Ali Ayub and Carter Fendley
- Abstract summary: We develop a framework that allows a CL agent to continually learn new object classes from a few labeled training examples.
We evaluate our approach on the CORe-50 dataset and on a real humanoid robot for the object classification task.
- Score: 11.193504036335503
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we consider a challenging but realistic continual learning
(CL) problem, Few-Shot Continual Active Learning (FoCAL), where a CL agent is
provided with unlabeled data for a new or a previously learned task in each
increment and the agent only has limited labeling budget available. Towards
this, we build on the continual learning and active learning literature and
develop a framework that can allow a CL agent to continually learn new object
classes from a few labeled training examples. Our framework represents each
object class using a uniform Gaussian mixture model (GMM) and uses
pseudo-rehearsal to mitigate catastrophic forgetting. The framework also uses
uncertainty measures on the Gaussian representations of the previously learned
classes to find the most informative samples to be labeled in an increment. We
evaluate our approach on the CORe-50 dataset and on a real humanoid robot for
the object classification task. The results show that our approach not only
achieves state-of-the-art performance on the dataset but also enables a real robot
to continually learn unseen objects in a real environment with limited labeling
supervision provided by its user.
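As an illustrative sketch only (not the authors' released code), the core loop pairs per-class Gaussian representations with likelihood-based uncertainty scoring: samples that the current class models explain poorly are the ones worth spending the labeling budget on, and the stored Gaussians can be sampled from for pseudo-rehearsal. The `GaussianClassModel` class and `select_for_labeling` helper are hypothetical names, and a single diagonal Gaussian per class stands in for the paper's uniform GMM:

```python
import numpy as np

rng = np.random.default_rng(0)

class GaussianClassModel:
    """One diagonal Gaussian per object class (the paper uses a uniform
    GMM per class; a single Gaussian keeps this sketch short)."""

    def __init__(self, features):
        self.mean = features.mean(axis=0)
        self.var = features.var(axis=0) + 1e-6  # avoid zero variance

    def log_likelihood(self, x):
        # Per-sample log N(x | mean, diag(var))
        return -0.5 * np.sum(
            np.log(2 * np.pi * self.var) + (x - self.mean) ** 2 / self.var,
            axis=-1,
        )

    def sample(self, n):
        # Pseudo-rehearsal: replay features drawn from the stored Gaussian
        # instead of storing the original training data.
        return rng.normal(self.mean, np.sqrt(self.var), size=(n, self.mean.size))

def select_for_labeling(class_models, unlabeled, budget):
    """Spend the limited labeling budget on the samples least explained
    by any previously learned class (an uncertainty measure on the
    Gaussian representations)."""
    best_ll = np.max(
        np.stack([m.log_likelihood(unlabeled) for m in class_models]), axis=0
    )
    return np.argsort(best_ll)[:budget]  # lowest likelihood first
```

In each increment, the agent would ask the user to label only the selected samples, fit models for the newly labeled classes, and mix replayed samples from the old models into training to mitigate catastrophic forgetting.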
Related papers
- Adaptive Retention & Correction for Continual Learning [114.5656325514408]
A common problem in continual learning is the classification layer's bias towards the most recent task.
We name our approach Adaptive Retention & Correction (ARC).
ARC achieves average performance increases of 2.7% and 2.6% on the CIFAR-100 and ImageNet-R datasets, respectively.
arXiv Detail & Related papers (2024-05-23T08:43:09Z) - Incremental Object Detection with CLIP [36.478530086163744]
We propose using a vision-language model, such as CLIP, to generate text feature embeddings for different class sets.
We then employ super-classes to replace the unavailable novel classes in the early learning stage to simulate the incremental scenario.
We incorporate the finely recognized detection boxes as pseudo-annotations into the training process, thereby further improving the detection performance.
arXiv Detail & Related papers (2023-10-13T01:59:39Z) - Learning Objective-Specific Active Learning Strategies with Attentive
Neural Processes [72.75421975804132]
Learning Active Learning (LAL) proposes learning the active learning strategy itself, allowing it to adapt to the given setting.
We propose a novel LAL method for classification that exploits symmetry and independence properties of the active learning problem.
Our approach is based on learning from a myopic oracle, which gives our model the ability to adapt to non-standard objectives.
arXiv Detail & Related papers (2023-09-11T14:16:37Z) - CBCL-PR: A Cognitively Inspired Model for Class-Incremental Learning in
Robotics [22.387008072671005]
We present a novel framework inspired by theories of concept learning in the hippocampus and the neocortex.
Our framework represents object classes in the form of sets of clusters and stores them in memory.
Our approach is evaluated on two object classification datasets, achieving state-of-the-art (SOTA) performance in class-incremental learning and few-shot incremental learning (FSIL).
arXiv Detail & Related papers (2023-07-31T23:34:27Z) - Active Class Selection for Few-Shot Class-Incremental Learning [14.386434861320023]
For real-world applications, robots will need to continually learn in their environments through limited interactions with their users.
We develop a novel framework that can allow an autonomous agent to continually learn new objects by asking its users to label only a few of the most informative objects in the environment.
arXiv Detail & Related papers (2023-07-05T20:16:57Z) - ALP: Action-Aware Embodied Learning for Perception [60.64801970249279]
We introduce Action-Aware Embodied Learning for Perception (ALP).
ALP incorporates action information into representation learning through a combination of optimizing a reinforcement learning policy and an inverse dynamics prediction objective.
We show that ALP outperforms existing baselines in several downstream perception tasks.
arXiv Detail & Related papers (2023-06-16T21:51:04Z) - TIDo: Source-free Task Incremental Learning in Non-stationary
Environments [0.0]
Updating a model-based agent to learn new target tasks requires us to store past training data.
Few-shot task incremental learning methods overcome the limitation of labeled target datasets.
We propose a one-shot task incremental learning approach that can adapt to non-stationary source and target tasks.
arXiv Detail & Related papers (2023-01-28T02:19:45Z) - Learning from Temporal Spatial Cubism for Cross-Dataset Skeleton-based
Action Recognition [88.34182299496074]
Action labels are only available on a source dataset, but unavailable on a target dataset in the training stage.
We utilize a self-supervision scheme to reduce the domain shift between two skeleton-based action datasets.
By segmenting and permuting temporal segments or human body parts, we design two self-supervised learning classification tasks.
arXiv Detail & Related papers (2022-07-17T07:05:39Z) - Few-Shot Class-Incremental Learning by Sampling Multi-Phase Tasks [59.12108527904171]
A model should recognize new classes and maintain discriminability over old classes.
The task of recognizing few-shot new classes without forgetting old classes is called few-shot class-incremental learning (FSCIL).
We propose a new paradigm for FSCIL based on meta-learning by LearnIng Multi-phase Incremental Tasks (LIMIT).
arXiv Detail & Related papers (2022-03-31T13:46:41Z) - Continual Learning From Unlabeled Data Via Deep Clustering [7.704949298975352]
Continual learning aims to learn new tasks incrementally using less computation and memory resources instead of retraining the model from scratch whenever a new task arrives.
We introduce a new framework to make continual learning feasible in unsupervised mode by using pseudo label obtained from cluster assignments to update model.
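The pseudo-labeling idea above can be sketched minimally (hypothetical code, not that paper's framework): cluster unlabeled features and treat each sample's cluster id as its label for the model update.

```python
import numpy as np

def kmeans_pseudo_labels(features, k, iters=20, seed=0):
    """Plain k-means; the returned cluster ids serve as pseudo-labels
    for an otherwise supervised continual-learning update."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), size=k, replace=False)].copy()
    labels = np.zeros(len(features), dtype=int)
    for _ in range(iters):
        # Assign each sample to its nearest center.
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned samples.
        for j in range(k):
            members = features[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels, centers
```

The pseudo-labels are noisy stand-ins for ground truth, so such a framework trades some accuracy for the ability to keep learning without any labeling supervision at all.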
arXiv Detail & Related papers (2021-04-14T23:46:17Z) - Incremental Object Detection via Meta-Learning [77.55310507917012]
We propose a meta-learning approach that learns to reshape model gradients, such that information across incremental tasks is optimally shared.
In comparison to existing meta-learning methods, our approach is task-agnostic, allows incremental addition of new classes, and scales to high-capacity models for object detection.
arXiv Detail & Related papers (2020-03-17T13:40:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.