Active Class Selection for Few-Shot Class-Incremental Learning
- URL: http://arxiv.org/abs/2307.02641v1
- Date: Wed, 5 Jul 2023 20:16:57 GMT
- Title: Active Class Selection for Few-Shot Class-Incremental Learning
- Authors: Christopher McClurg, Ali Ayub, Harsh Tyagi, Sarah M. Rajtmajer, and
Alan R. Wagner
- Abstract summary: For real-world applications, robots will need to continually learn in their environments through limited interactions with their users.
We develop a novel framework that allows an autonomous agent to continually learn new objects by asking its users to label only a few of the most informative objects in the environment.
- Score: 14.386434861320023
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: For real-world applications, robots will need to continually learn in their
environments through limited interactions with their users. Toward this,
previous works in few-shot class incremental learning (FSCIL) and active class
selection (ACS) have achieved promising results but were tested in constrained
setups. Therefore, in this paper, we combine ideas from FSCIL and ACS to
develop a novel framework that allows an autonomous agent to continually
learn new objects by asking its users to label only a few of the most
informative objects in the environment. To this end, we build on a
state-of-the-art (SOTA) FSCIL model and extend it with techniques from ACS
literature. We term this model Few-shot Incremental Active class SeleCtiOn
(FIASco). We further integrate a potential field-based navigation technique
with our model to develop a complete framework that allows an agent to
process and reason on its sensory data through the FIASco model, navigate
towards the most informative object in the environment, gather data about the
object through its sensors and incrementally update the FIASco model.
Experimental results on a simulated agent and a real robot show the
significance of our approach for long-term real-world robotics applications.
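The navigation component described above can be sketched in a few lines. The following is a minimal 2-D illustration of potential field-based navigation: the most informative object acts as an attractive goal and nearby obstacles exert repulsive forces. The gains, radius, and force law here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def potential_field_step(agent_pos, goal_pos, obstacles,
                         k_att=1.0, k_rep=0.5, rep_radius=1.0, step=0.1):
    """One gradient step on an attractive/repulsive potential.

    agent_pos, goal_pos: 2-D coordinates; obstacles: list of 2-D coordinates.
    """
    agent_pos = np.asarray(agent_pos, dtype=float)
    goal_pos = np.asarray(goal_pos, dtype=float)

    # Attractive force pulls the agent toward the goal
    # (here, the most informative object selected by the model).
    force = k_att * (goal_pos - agent_pos)

    # Repulsive forces push the agent away from obstacles within rep_radius.
    for obs in obstacles:
        diff = agent_pos - np.asarray(obs, dtype=float)
        d = np.linalg.norm(diff)
        if 1e-9 < d < rep_radius:
            force += k_rep * (1.0 / d - 1.0 / rep_radius) * diff / d**3

    return agent_pos + step * force

# Simple simulation: step until the agent is close to the goal object.
pos = np.array([0.0, 0.0])
goal = np.array([5.0, 5.0])
obstacles = [np.array([3.0, 1.0])]
for _ in range(500):
    pos = potential_field_step(pos, goal, obstacles)
    if np.linalg.norm(goal - pos) < 0.1:
        break
```

In the full framework this step would run in a loop with perception and learning: score candidate objects by informativeness, drive toward the best one, collect labeled data, and update the model incrementally.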
Related papers
- Interactive Planning Using Large Language Models for Partially
Observable Robotics Tasks [54.60571399091711]
Large Language Models (LLMs) have achieved impressive results in creating robotic agents for performing open vocabulary tasks.
We present an interactive planning technique for partially observable tasks using LLMs.
arXiv Detail & Related papers (2023-12-11T22:54:44Z)
- Model Share AI: An Integrated Toolkit for Collaborative Machine Learning Model Development, Provenance Tracking, and Deployment in Python [0.0]
We introduce Model Share AI (AIMS), an easy-to-use MLOps platform designed to streamline collaborative model development, model provenance tracking, and model deployment.
AIMS features collaborative project spaces and a standardized model evaluation process that ranks model submissions based on their performance on unseen evaluation data.
AIMS allows users to deploy ML models built in Scikit-Learn, Keras, PyTorch, and ONNX into live REST APIs and automatically generated web apps.
arXiv Detail & Related papers (2023-09-27T15:24:39Z)
- CBCL-PR: A Cognitively Inspired Model for Class-Incremental Learning in Robotics [22.387008072671005]
We present a novel framework inspired by theories of concept learning in the hippocampus and the neocortex.
Our framework represents object classes in the form of sets of clusters and stores them in memory.
Our approach is evaluated on two object classification datasets resulting in state-of-the-art (SOTA) performance for class-incremental learning and FSIL.
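The cluster-per-class memory described above can be sketched as follows. This is a simplified illustration, not CBCL-PR's exact update rule: a new example either merges into the nearest cluster of its class or starts a new cluster, and prediction returns the class of the nearest stored cluster. The distance threshold and merge rule are assumptions for the sketch.

```python
import numpy as np

class ClusterMemory:
    """Represents each object class as a set of feature clusters in memory."""

    def __init__(self, distance_threshold=1.0):
        self.threshold = distance_threshold
        self.clusters = {}  # label -> list of [centroid, example_count]

    def learn(self, feature, label):
        """Merge the example into its class's nearest cluster, or start a new one."""
        feature = np.asarray(feature, dtype=float)
        clusters = self.clusters.setdefault(label, [])
        if clusters:
            dists = [np.linalg.norm(feature - c) for c, _ in clusters]
            i = int(np.argmin(dists))
            if dists[i] < self.threshold:
                centroid, n = clusters[i]
                clusters[i] = [(centroid * n + feature) / (n + 1), n + 1]
                return
        clusters.append([feature, 1])

    def predict(self, feature):
        """Classify by the nearest cluster centroid across all classes."""
        feature = np.asarray(feature, dtype=float)
        best_label, best_dist = None, float("inf")
        for label, clusters in self.clusters.items():
            for centroid, _ in clusters:
                d = np.linalg.norm(feature - centroid)
                if d < best_dist:
                    best_label, best_dist = label, d
        return best_label

mem = ClusterMemory(distance_threshold=1.0)
mem.learn([0.0, 0.0], "mug")
mem.learn([0.1, 0.1], "mug")   # merges into the existing "mug" cluster
mem.learn([5.0, 5.0], "book")  # a new class is added incrementally
print(mem.predict([0.2, 0.0])) # nearest cluster -> "mug"
```

Because classes live in separate cluster sets, adding a new class never overwrites old ones, which is what makes this kind of memory suited to class-incremental learning.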
arXiv Detail & Related papers (2023-07-31T23:34:27Z)
- Transferring Foundation Models for Generalizable Robotic Manipulation [82.12754319808197]
We propose a novel paradigm that effectively leverages language-reasoning segmentation mask generated by internet-scale foundation models.
Our approach can effectively and robustly perceive object pose and enable sample-efficient generalization learning.
Demos can be found in our submitted video, and more comprehensive ones can be found in link1 or link2.
arXiv Detail & Related papers (2023-06-09T07:22:12Z)
- Few-Shot Continual Active Learning by a Robot [11.193504036335503]
We develop a framework that allows a CL agent to continually learn new object classes from a few labeled training examples.
We evaluate our approach on the CORe-50 dataset and on a real humanoid robot for the object classification task.
arXiv Detail & Related papers (2022-10-09T01:52:19Z)
- Multimodal Generation of Novel Action Appearances for Synthetic-to-Real Recognition of Activities of Daily Living [25.04517296731092]
Domain shifts, such as appearance changes, are a key challenge in real-world applications of activity recognition models.
We introduce an activity domain generation framework which creates novel ADL appearances from different existing activity modalities.
Our framework computes human poses, heatmaps of body joints, and optical flow maps and uses them alongside the original RGB videos to learn the essence of source domains.
arXiv Detail & Related papers (2022-08-03T08:28:33Z)
- ALBench: A Framework for Evaluating Active Learning in Object Detection [102.81795062493536]
This paper contributes an active learning benchmark framework named ALBench for evaluating active learning in object detection.
Developed on an automatic deep model training system, this ALBench framework is easy-to-use, compatible with different active learning algorithms, and ensures the same training and testing protocols.
arXiv Detail & Related papers (2022-07-27T07:46:23Z)
- Few-Shot Class-Incremental Learning by Sampling Multi-Phase Tasks [59.12108527904171]
A model should recognize new classes and maintain discriminability over old classes.
The task of recognizing few-shot new classes without forgetting old classes is called few-shot class-incremental learning (FSCIL).
We propose a new paradigm for FSCIL based on meta-learning by LearnIng Multi-phase Incremental Tasks (LIMIT).
arXiv Detail & Related papers (2022-03-31T13:46:41Z)
- Model-Based Visual Planning with Self-Supervised Functional Distances [104.83979811803466]
We present a self-supervised method for model-based visual goal reaching.
Our approach learns entirely using offline, unlabeled data.
We find that this approach substantially outperforms both model-free and model-based prior methods.
arXiv Detail & Related papers (2020-12-30T23:59:09Z)
- Deep Imitation Learning for Bimanual Robotic Manipulation [70.56142804957187]
We present a deep imitation learning framework for robotic bimanual manipulation.
A core challenge is to generalize the manipulation skills to objects in different locations.
We propose to (i) decompose the multi-modal dynamics into elemental movement primitives, (ii) parameterize each primitive using a recurrent graph neural network to capture interactions, and (iii) integrate a high-level planner that composes primitives sequentially and a low-level controller to combine primitive dynamics and inverse kinematics control.
arXiv Detail & Related papers (2020-10-11T01:40:03Z)
- CAZSL: Zero-Shot Regression for Pushing Models by Generalizing Through Context [13.217582954907234]
We study the problem of designing deep learning agents which can generalize their models of the physical world by building context-aware models.
We present context-aware zero-shot learning (CAZSL, pronounced "casual") models, an approach utilizing a Siamese network, embedding space, and regularization based on context variables.
We test our proposed learning algorithm on the recently released Omnipush dataset, which allows testing of meta-learning capabilities.
arXiv Detail & Related papers (2020-03-26T01:21:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.