I3DOL: Incremental 3D Object Learning without Catastrophic Forgetting
- URL: http://arxiv.org/abs/2012.09014v1
- Date: Wed, 16 Dec 2020 15:17:51 GMT
- Title: I3DOL: Incremental 3D Object Learning without Catastrophic Forgetting
- Authors: Jiahua Dong, Yang Cong, Gan Sun, Bingtao Ma and Lichen Wang
- Abstract summary: I3DOL is the first exploration to learn new classes of 3D objects continually.
An adaptive-geometric centroid module is designed to construct discriminative local geometric structures.
A geometric-aware attention mechanism is developed to quantify the contributions of local geometric structures.
- Score: 38.7610646073842
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D object classification has attracted considerable attention in academic
research and industrial applications. However, most existing methods need to
access the training data of past 3D object classes when facing the common
real-world scenario: new classes of 3D objects arrive in a sequence. Moreover,
the performance of advanced approaches degrades dramatically on past learned
classes (i.e., catastrophic forgetting), due to the irregular and redundant
geometric structures of 3D point cloud data. To address these challenges, we
propose a new Incremental 3D Object Learning (i.e., I3DOL) model, which is the
first exploration to learn new classes of 3D objects continually. Specifically,
an adaptive-geometric centroid module is designed to construct discriminative
local geometric structures, which better characterize the irregular point
cloud representation of 3D objects. Afterwards, to prevent the catastrophic
forgetting brought by redundant geometric information, a geometric-aware
attention mechanism is developed to quantify the contributions of local
geometric structures and to exploit unique 3D geometric characteristics with high
contributions for class-incremental learning. Meanwhile, a score fairness
compensation strategy is proposed to further alleviate the catastrophic
forgetting caused by imbalanced data between past and new classes of 3D objects,
by compensating the biased prediction for new classes in the validation phase.
Experiments on representative 3D datasets validate the superiority of our I3DOL
framework.
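To illustrate the general idea behind a geometric-aware attention mechanism, the sketch below scores local centroid features and aggregates them with softmax attention weights, so that distinctive local structures contribute more to the global representation. The function name, the feature shapes, and the fixed scoring vector `w` (which a real model would learn) are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def geometric_attention(centroid_features):
    """Attention-weighted aggregation of local geometric features.

    centroid_features: (num_centroids, feature_dim) array of local
    descriptors, one per adaptive-geometric centroid. Returns the
    attention weights and the weighted global feature.
    """
    num_centroids, feature_dim = centroid_features.shape
    # Hypothetical fixed scoring vector; a trained model would learn this.
    w = np.ones(feature_dim) / feature_dim
    scores = centroid_features @ w            # one score per centroid
    # Softmax turns scores into nonnegative weights summing to 1.
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()
    # Centroids with higher scores dominate the global feature.
    global_feature = attn @ centroid_features
    return attn, global_feature

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16))              # 8 centroids, 16-dim features
attn, global_feat = geometric_attention(feats)
```

In an incremental setting, down-weighting redundant local structures in this way is what lets the model retain only the characteristics most useful for distinguishing old and new classes.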
Related papers
- GEAL: Generalizable 3D Affordance Learning with Cross-Modal Consistency [50.11520458252128]
Existing 3D affordance learning methods struggle with generalization and robustness due to limited annotated data.
We propose GEAL, a novel framework designed to enhance the generalization and robustness of 3D affordance learning by leveraging large-scale pre-trained 2D models.
GEAL consistently outperforms existing methods across seen and novel object categories, as well as corrupted data.
arXiv Detail & Related papers (2024-12-12T17:59:03Z) - DIRECT-3D: Learning Direct Text-to-3D Generation on Massive Noisy 3D Data [50.164670363633704]
We present DIRECT-3D, a diffusion-based 3D generative model for creating high-quality 3D assets from text prompts.
Our model is directly trained on extensive noisy and unaligned 'in-the-wild' 3D assets.
We achieve state-of-the-art performance in both single-class generation and text-to-3D generation.
arXiv Detail & Related papers (2024-06-06T17:58:15Z) - FILP-3D: Enhancing 3D Few-shot Class-incremental Learning with Pre-trained Vision-Language Models [59.13757801286343]
Few-shot class-incremental learning aims to mitigate the catastrophic forgetting issue when a model is incrementally trained on limited data.
We introduce the FILP-3D framework with two novel components: the Redundant Feature Eliminator (RFE) for feature space misalignment and the Spatial Noise Compensator (SNC) for significant noise.
arXiv Detail & Related papers (2023-12-28T14:52:07Z) - Self-supervised Learning for Enhancing Geometrical Modeling in 3D-Aware Generative Adversarial Network [42.16520614686877]
3D-GANs exhibit artifacts in their 3D geometrical modeling, such as mesh imperfections and holes.
These shortcomings are primarily attributed to the limited availability of annotated 3D data.
We present a Self-Supervised Learning technique tailored as an auxiliary loss for any 3D-GAN.
arXiv Detail & Related papers (2023-12-19T04:55:33Z) - Open-Pose 3D Zero-Shot Learning: Benchmark and Challenges [23.663199578392447]
We propose a more realistic and challenging scenario named open-pose 3D zero-shot classification.
First, we revisit the current research on 3D zero-shot classification.
We propose two benchmark datasets specifically designed for the open-pose setting.
arXiv Detail & Related papers (2023-12-12T07:52:33Z) - InOR-Net: Incremental 3D Object Recognition Network for Point Cloud Representation [51.121731449575776]
We develop a novel Incremental 3D Object Recognition Network (i.e., InOR-Net) to recognize new classes of 3D objects continuously.
Specifically, a category-guided geometric reasoning is proposed to reason local geometric structures with distinctive 3D characteristics of each class.
We then propose a novel critic-induced geometric attention mechanism to distinguish which 3D geometric characteristics within each class are beneficial to overcome the catastrophic forgetting on old classes of 3D objects.
arXiv Detail & Related papers (2023-02-20T10:30:16Z) - RandomRooms: Unsupervised Pre-training from Synthetic Shapes and Randomized Layouts for 3D Object Detection [138.2892824662943]
A promising solution is to make better use of the synthetic dataset, which consists of CAD object models, to boost the learning on real datasets.
Recent work on 3D pre-training exhibits failure when transferring features learned on synthetic objects to other real-world applications.
In this work, we put forward a new method called RandomRooms to accomplish this objective.
arXiv Detail & Related papers (2021-08-17T17:56:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.