Foundation Model-Powered 3D Few-Shot Class Incremental Learning via Training-free Adaptor
- URL: http://arxiv.org/abs/2410.09237v1
- Date: Fri, 11 Oct 2024 20:23:00 GMT
- Title: Foundation Model-Powered 3D Few-Shot Class Incremental Learning via Training-free Adaptor
- Authors: Sahar Ahmadi, Ali Cheraghian, Morteza Saberi, Md. Towsif Abir, Hamidreza Dastmalchi, Farookh Hussain, Shafin Rahman
- Abstract summary: This paper introduces a new method to tackle the Few-Shot Class Incremental Learning (FSCIL) problem in 3D point cloud environments.
We leverage a foundational 3D model trained extensively on point cloud data.
Our approach uses a dual cache system: the first cache stores previous test samples, selected by how confident the model was in its predictions, to prevent forgetting; the second holds a small number of samples from each new task to prevent overfitting.
- Score: 9.54964908165465
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in deep learning for processing point clouds have spurred increased interest in Few-Shot Class Incremental Learning (FSCIL) for 3D computer vision. This paper introduces a new method to tackle the FSCIL problem in 3D point cloud environments. We leverage a foundational 3D model trained extensively on point cloud data. Drawing on recent improvements in foundation models, known for their ability to generalize across tasks, we propose a novel strategy that requires no additional training to adapt to new tasks. Our approach uses a dual cache system: the first cache stores previous test samples, selected by how confident the model was in its predictions, to prevent forgetting, and the second holds a small number of samples from each new task to prevent overfitting. This dynamic adaptation ensures strong performance across different learning tasks without extensive fine-tuning. We evaluated our approach on the ModelNet, ShapeNet, ScanObjectNN, and CO3D datasets, showing that it outperforms other FSCIL methods and demonstrating its effectiveness and versatility. The code is available at \url{https://github.com/ahmadisahar/ACCV_FCIL3D}.
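The dual cache idea lends itself to a compact illustration. The sketch below is not the authors' released implementation (see the linked repository for that); it is a minimal, hypothetical example of a training-free dual-cache classifier in the spirit of the abstract, assuming a frozen 3D foundation encoder supplies L2-normalised embeddings and per-class prototypes supply zero-shot logits. All class, variable, and hyperparameter names are assumptions made for this example.

```python
import torch
import torch.nn.functional as F


class DualCacheAdaptor:
    """Minimal sketch of a training-free dual-cache classifier (illustrative only).

    Assumes a frozen 3D foundation encoder has already mapped each point cloud
    to a feature vector, and that `class_weights` (num_classes x dim) holds
    per-class prototypes that give zero-shot logits.
    """

    def __init__(self, class_weights, conf_threshold=0.9, alpha=1.0, beta=5.0):
        self.class_weights = F.normalize(class_weights, dim=-1)
        self.num_classes = class_weights.shape[0]
        self.conf_threshold = conf_threshold  # gate for caching past test samples
        self.alpha = alpha                    # weight of the cache terms
        self.beta = beta                      # sharpness of cache similarities
        self.test_keys, self.test_vals = [], []  # cache 1: confident past test samples
        self.task_keys, self.task_vals = [], []  # cache 2: few-shot samples of new tasks

    def add_task_samples(self, feats, labels):
        """Store the few labelled samples of a newly arrived task (second cache)."""
        self.task_keys.append(F.normalize(feats, dim=-1))
        self.task_vals.append(F.one_hot(labels, self.num_classes).float())

    def _cache_logits(self, feat, keys, vals):
        if not keys:
            return torch.zeros(self.num_classes)
        K = torch.cat(keys)        # (N, dim) cached embeddings
        V = torch.cat(vals)        # (N, num_classes) cached one-hot labels
        affinity = feat @ K.t()    # cosine similarity to every cached sample
        return torch.exp(-self.beta * (1.0 - affinity)) @ V

    @torch.no_grad()
    def predict(self, feat):
        """Classify one embedded test sample and optionally cache it (first cache)."""
        feat = F.normalize(feat, dim=-1)
        logits = feat @ self.class_weights.t()  # frozen zero-shot term
        logits = logits + self.alpha * self._cache_logits(feat, self.task_keys, self.task_vals)
        logits = logits + self.alpha * self._cache_logits(feat, self.test_keys, self.test_vals)

        conf, pred = F.softmax(logits, dim=-1).max(dim=-1)
        if conf.item() >= self.conf_threshold:  # keep only confident test samples
            self.test_keys.append(feat.unsqueeze(0))
            self.test_vals.append(F.one_hot(pred, self.num_classes).float().unsqueeze(0))
        return pred
```

The point this sketch illustrates is that adaptation happens purely through cache lookups against a frozen backbone, so new tasks can be absorbed without any gradient-based fine-tuning.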
Related papers
- iNeMo: Incremental Neural Mesh Models for Robust Class-Incremental Learning [22.14627083675405]
We propose incremental neural mesh models that can be extended with new meshes over time.
We demonstrate the effectiveness of our method through extensive experiments on the Pascal3D and ObjectNet3D datasets.
Our work also presents the first incremental learning approach for pose estimation.
arXiv Detail & Related papers (2024-07-12T13:57:49Z)
- FILP-3D: Enhancing 3D Few-shot Class-incremental Learning with Pre-trained Vision-Language Models [62.663113296987085]
Few-shot class-incremental learning aims to mitigate the catastrophic forgetting issue when a model is incrementally trained on limited data.
We introduce two novel components: the Redundant Feature Eliminator (RFE) and the Spatial Noise Compensator (SNC).
Considering the imbalance in existing 3D datasets, we also propose new evaluation metrics that offer a more nuanced assessment of a 3D FSCIL model.
arXiv Detail & Related papers (2023-12-28T14:52:07Z)
- Leveraging Large-Scale Pretrained Vision Foundation Models for Label-Efficient 3D Point Cloud Segmentation [67.07112533415116]
We present a novel framework that adapts various foundational models for the 3D point cloud segmentation task.
Our approach involves making initial predictions of 2D semantic masks using different large vision models.
To generate robust 3D semantic pseudo labels, we introduce a semantic label fusion strategy that effectively combines all the results via voting; a minimal sketch of such voting appears after this list.
arXiv Detail & Related papers (2023-11-03T15:41:15Z)
- Conditional Online Learning for Keyword Spotting [0.0]
This work investigates a simple but effective online continual learning method that updates a keyword spotter on-device via SGD as new data becomes available.
Experiments demonstrate that, compared to a naive online learning implementation, conditional model updates based on the model's performance on a small hold-out set drawn from the training distribution mitigate catastrophic forgetting.
arXiv Detail & Related papers (2023-05-19T15:46:31Z)
- What Makes for Effective Few-shot Point Cloud Classification? [18.62689395276194]
We show that 3D few-shot learning is more challenging due to unordered structures, high intra-class variance, and subtle inter-class differences.
We propose a novel plug-and-play component, the Cross-Instance Adaptation (CIA) module, to address the high intra-class variance and subtle inter-class difference issues.
arXiv Detail & Related papers (2023-03-31T15:55:06Z)
- Boosting Low-Data Instance Segmentation by Unsupervised Pre-training with Saliency Prompt [103.58323875748427]
This work offers a novel unsupervised pre-training solution for low-data regimes.
Inspired by the recent success of prompting techniques, we introduce a new pre-training method that boosts QEIS models.
Experimental results show that our method significantly boosts several QEIS models on three datasets.
arXiv Detail & Related papers (2023-02-02T15:49:03Z)
- P2P: Tuning Pre-trained Image Models for Point Cloud Analysis with Point-to-Pixel Prompting [94.11915008006483]
We propose a novel Point-to-Pixel prompting method for point cloud analysis.
Our method attains 89.3% accuracy on the hardest setting of ScanObjectNN.
Our framework also exhibits very competitive performance on ModelNet classification and ShapeNet part segmentation.
arXiv Detail & Related papers (2022-08-04T17:59:03Z)
- Static-Dynamic Co-Teaching for Class-Incremental 3D Object Detection [71.18882803642526]
Deep learning approaches have shown remarkable performance in the 3D object detection task.
They suffer from a catastrophic performance drop when incrementally learning new classes without revisiting the old data.
This "catastrophic forgetting" phenomenon impedes the deployment of 3D object detection approaches in real-world scenarios.
We present the first solution, SDCoT, a novel static-dynamic co-teaching method.
arXiv Detail & Related papers (2021-12-14T09:03:41Z)
- Point Transformer for Shape Classification and Retrieval of 3D and ALS Roof PointClouds [3.3744638598036123]
This paper proposes a fully attentional model, Point Transformer, for deriving a rich point cloud representation.
The model's shape classification and retrieval performance are evaluated on a large-scale urban dataset, RoofN3D, and the standard ModelNet40 benchmark.
The proposed method outperforms other state-of-the-art models in the RoofN3D dataset, gives competitive results in the ModelNet40 benchmark, and showcases high robustness to various unseen point corruptions.
arXiv Detail & Related papers (2020-11-08T08:11:02Z)
- 2nd Place Scheme on Action Recognition Track of ECCV 2020 VIPriors Challenges: An Efficient Optical Flow Stream Guided Framework [57.847010327319964]
We propose a data-efficient framework that can train the model from scratch on small datasets.
Specifically, by introducing a 3D central difference convolution operation, we propose a novel C3D neural network-based two-stream framework.
We show that our method achieves promising results even without a model pre-trained on large-scale datasets.
arXiv Detail & Related papers (2020-08-10T09:50:28Z)
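For the label-efficient 3D point cloud segmentation entry above, the voting-based label fusion can likewise be sketched in a few lines. This is a hypothetical minimal example, not that paper's code: it assumes the class predictions of several 2D vision foundation models have already been projected onto the 3D points, and it simply takes a per-point majority vote to form pseudo labels.

```python
import numpy as np


def fuse_pseudo_labels(per_model_labels, ignore_index=-1):
    """Majority-vote fusion of per-point semantic labels.

    per_model_labels: array of shape (num_models, num_points) holding the class
    id each 2D model assigns to each 3D point after projection, with
    ignore_index where a point was not visible to that model's views.
    Returns one pseudo label per point, or ignore_index if no model labelled it.
    """
    num_models, num_points = per_model_labels.shape
    fused = np.full(num_points, ignore_index, dtype=np.int64)
    for p in range(num_points):
        votes = per_model_labels[:, p]
        votes = votes[votes != ignore_index]   # drop models that did not see this point
        if votes.size == 0:
            continue
        classes, counts = np.unique(votes, return_counts=True)
        fused[p] = classes[np.argmax(counts)]  # most frequent class wins
    return fused


# Example: three models labelling five points (class ids 0..2, -1 = unseen).
labels = np.array([[0, 1, 2, -1, 1],
                   [0, 1, 1, -1, 1],
                   [2, 1, 2, -1, 0]])
print(fuse_pseudo_labels(labels))  # -> [0 1 2 -1 1]
```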