Active Class Selection for Few-Shot Class-Incremental Learning
- URL: http://arxiv.org/abs/2307.02641v1
- Date: Wed, 5 Jul 2023 20:16:57 GMT
- Title: Active Class Selection for Few-Shot Class-Incremental Learning
- Authors: Christopher McClurg, Ali Ayub, Harsh Tyagi, Sarah M. Rajtmajer, and
Alan R. Wagner
- Abstract summary: For real-world applications, robots will need to continually learn in their environments through limited interactions with their users.
We develop a novel framework that allows an autonomous agent to continually learn new objects by asking its users to label only a few of the most informative objects in the environment.
- Score: 14.386434861320023
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: For real-world applications, robots will need to continually learn in their
environments through limited interactions with their users. Toward this,
previous works in few-shot class-incremental learning (FSCIL) and active class
selection (ACS) have achieved promising results but were tested in constrained
setups. Therefore, in this paper, we combine ideas from FSCIL and ACS to
develop a novel framework that allows an autonomous agent to continually
learn new objects by asking its users to label only a few of the most
informative objects in the environment. To this end, we build on a
state-of-the-art (SOTA) FSCIL model and extend it with techniques from ACS
literature. We term this model Few-shot Incremental Active class SeleCtiOn
(FIASco). We further integrate a potential field-based navigation technique
with our model to develop a complete framework that allows an agent to
process and reason over its sensory data through the FIASco model, navigate
towards the most informative object in the environment, gather data about the
object through its sensors, and incrementally update the FIASco model.
Experimental results on a simulated agent and a real robot show the
significance of our approach for long-term real-world robotics applications.
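The pipeline the abstract describes is, at its core, a repeated select-navigate-label-update loop. The snippet below is a minimal sketch of that loop, assuming a centroid-based feature model, a distance-based informativeness score, and a standard attractive/repulsive potential field; all names (the model class, ask_user, sense, the object dictionaries) are illustrative placeholders, not the authors' released implementation.

```python
# Minimal sketch of the loop described above: score objects by informativeness,
# navigate to the top-scoring one with a potential field, ask the user for a
# label, and update the model incrementally. All names and the specific
# scoring/update rules are illustrative assumptions, not the authors' code.
import numpy as np


class ToyFewShotModel:
    """Stand-in for the FSCIL model: one running-average centroid per class."""

    def __init__(self):
        self.centroids = {}  # class label -> feature centroid
        self.counts = {}     # class label -> number of samples seen

    def informativeness(self, feature):
        # Features far from every known centroid look novel, hence informative.
        if not self.centroids:
            return float("inf")
        return min(np.linalg.norm(feature - c) for c in self.centroids.values())

    def update(self, label, features):
        # Few-shot incremental update: fold new samples into the class centroid.
        for f in features:
            n = self.counts.get(label, 0)
            c = self.centroids.get(label, np.zeros_like(f))
            self.centroids[label] = (c * n + f) / (n + 1)
            self.counts[label] = n + 1


def potential_field_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5, step=0.1):
    """One attractive/repulsive potential-field step toward the chosen object."""
    force = k_att * (goal - pos)
    for obs in obstacles:
        d = np.linalg.norm(pos - obs)
        if d < 1.0:  # repulsion only inside a unit influence radius
            force += k_rep * (pos - obs) / (d ** 3 + 1e-6)
    return pos + step * force / (np.linalg.norm(force) + 1e-6)


def active_learning_episode(model, robot_pos, objects, obstacles, ask_user, sense):
    """objects: dicts with 'feature' and 'position'; ask_user/sense are callbacks."""
    target = max(objects, key=lambda o: model.informativeness(o["feature"]))
    for _ in range(500):  # bounded number of navigation steps
        if np.linalg.norm(robot_pos - target["position"]) < 0.2:
            break
        robot_pos = potential_field_step(robot_pos, target["position"], obstacles)
    label = ask_user(target)   # the human provides the class label
    features = sense(target)   # a few feature vectors gathered by the sensors
    model.update(label, features)
    return model, robot_pos
```

In the paper's system the placeholder model and score are replaced by the SOTA FSCIL backbone and ACS criteria described above; the sketch only fixes the overall control flow of perception, selection, navigation, and incremental update.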
Related papers
- Keypoint Abstraction using Large Models for Object-Relative Imitation Learning [78.92043196054071]
Generalization to novel object configurations and instances across diverse tasks and environments is a critical challenge in robotics.
Keypoint-based representations have proven effective as a succinct representation for capturing essential object features.
We propose KALM, a framework that leverages large pre-trained vision-language models to automatically generate task-relevant and cross-instance consistent keypoints.
arXiv Detail & Related papers (2024-10-30T17:37:31Z)
- Robot Utility Models: General Policies for Zero-Shot Deployment in New Environments [26.66666135624716]
We present Robot Utility Models (RUMs), a framework for training and deploying zero-shot robot policies.
RUMs can generalize to new environments without any finetuning.
We train five utility models for opening cabinet doors, opening drawers, picking up napkins, picking up paper bags, and reorienting fallen objects.
arXiv Detail & Related papers (2024-09-09T17:59:50Z)
- Cognitive Planning for Object Goal Navigation using Generative AI Models [0.979851640406258]
We present a novel framework for solving the object goal navigation problem that generates efficient exploration strategies.
Our approach enables a robot to navigate unfamiliar environments by leveraging Large Language Models (LLMs) and Large Vision-Language Models (LVLMs).
arXiv Detail & Related papers (2024-03-30T10:54:59Z)
- Model Share AI: An Integrated Toolkit for Collaborative Machine Learning Model Development, Provenance Tracking, and Deployment in Python [0.0]
We introduce Model Share AI (AIMS), an easy-to-use MLOps platform designed to streamline collaborative model development, model provenance tracking, and model deployment.
AIMS features collaborative project spaces and a standardized model evaluation process that ranks model submissions based on their performance on unseen evaluation data.
AIMS allows users to deploy ML models built in Scikit-Learn, Keras, PyTorch, and ONNX into live REST APIs and automatically generated web apps.
arXiv Detail & Related papers (2023-09-27T15:24:39Z)
- CBCL-PR: A Cognitively Inspired Model for Class-Incremental Learning in Robotics [22.387008072671005]
We present a novel framework inspired by theories of concept learning in the hippocampus and the neocortex.
Our framework represents object classes as sets of clusters and stores them in memory (see the sketch after this list).
Our approach is evaluated on two object classification datasets, achieving state-of-the-art (SOTA) performance for class-incremental learning and few-shot incremental learning (FSIL).
arXiv Detail & Related papers (2023-07-31T23:34:27Z)
- Transferring Foundation Models for Generalizable Robotic Manipulation [82.12754319808197]
We propose a novel paradigm that effectively leverages language-reasoning segmentation masks generated by internet-scale foundation models.
Our approach can effectively and robustly perceive object pose and enable sample-efficient generalization learning.
Demos can be found in our submitted video, and more comprehensive ones can be found in link1 or link2.
arXiv Detail & Related papers (2023-06-09T07:22:12Z)
- Few-Shot Continual Active Learning by a Robot [11.193504036335503]
We develop a framework that allows a CL agent to continually learn new object classes from a few labeled training examples.
We evaluate our approach on the CORe-50 dataset and on a real humanoid robot for the object classification task.
arXiv Detail & Related papers (2022-10-09T01:52:19Z)
- ALBench: A Framework for Evaluating Active Learning in Object Detection [102.81795062493536]
This paper contributes an active learning benchmark framework named ALBench for evaluating active learning in object detection.
Developed on an automatic deep model training system, the ALBench framework is easy to use, compatible with different active learning algorithms, and ensures the same training and testing protocols.
arXiv Detail & Related papers (2022-07-27T07:46:23Z)
- Few-Shot Class-Incremental Learning by Sampling Multi-Phase Tasks [59.12108527904171]
A model should recognize new classes and maintain discriminability over old classes.
The task of recognizing new classes from only a few examples without forgetting old classes is called few-shot class-incremental learning (FSCIL).
We propose a new paradigm for FSCIL based on meta-learning by LearnIng Multi-phase Incremental Tasks (LIMIT).
arXiv Detail & Related papers (2022-03-31T13:46:41Z)
- Model-Based Visual Planning with Self-Supervised Functional Distances [104.83979811803466]
We present a self-supervised method for model-based visual goal reaching.
Our approach learns entirely using offline, unlabeled data.
We find that this approach substantially outperforms both model-free and model-based prior methods.
arXiv Detail & Related papers (2020-12-30T23:59:09Z)
- Deep Imitation Learning for Bimanual Robotic Manipulation [70.56142804957187]
We present a deep imitation learning framework for robotic bimanual manipulation.
A core challenge is to generalize the manipulation skills to objects in different locations.
We propose to (i) decompose the multi-modal dynamics into elemental movement primitives, (ii) parameterize each primitive using a recurrent graph neural network to capture interactions, and (iii) integrate a high-level planner that composes primitives sequentially and a low-level controller to combine primitive dynamics and inverse kinematics control.
arXiv Detail & Related papers (2020-10-11T01:40:03Z)
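As referenced in the CBCL-PR entry above, storing each class as a set of feature clusters is one simple way to support class-incremental updates. The sketch below illustrates that general idea under simplifying assumptions (a fixed distance threshold, running-average cluster centers, nearest-cluster prediction); it is not the paper's exact procedure.

```python
# Toy illustration of cluster-based class memory as summarized for CBCL-PR:
# each class is stored as a set of feature clusters, new samples either join
# the nearest cluster or open a new one, and prediction returns the class of
# the nearest cluster center. The threshold and update rule are assumptions
# made for illustration, not the paper's exact algorithm.
import numpy as np


class ClusterMemory:
    def __init__(self, distance_threshold=1.0):
        self.threshold = distance_threshold
        self.clusters = {}  # class label -> list of [center, sample_count]

    def learn(self, label, feature):
        entries = self.clusters.setdefault(label, [])
        if entries:
            dists = [np.linalg.norm(feature - center) for center, _ in entries]
            i = int(np.argmin(dists))
            if dists[i] < self.threshold:
                center, n = entries[i]
                entries[i] = [(center * n + feature) / (n + 1), n + 1]
                return
        entries.append([np.array(feature, dtype=float), 1])  # start a new cluster

    def predict(self, feature):
        best_label, best_dist = None, float("inf")
        for label, entries in self.clusters.items():
            for center, _ in entries:
                d = np.linalg.norm(feature - center)
                if d < best_dist:
                    best_label, best_dist = label, d
        return best_label
```

Adding a new class only touches that class's own cluster list, which is what makes this style of memory convenient for incremental settings.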