Continual Learning for Pose-Agnostic Object Recognition in 3D Point
Clouds
- URL: http://arxiv.org/abs/2209.04840v1
- Date: Sun, 11 Sep 2022 11:31:39 GMT
- Title: Continual Learning for Pose-Agnostic Object Recognition in 3D Point
Clouds
- Authors: Xihao Wang, Xian Wei
- Abstract summary: This work focuses on pose-agnostic continual learning tasks, where the object's pose changes dynamically and unpredictably.
We propose a novel continual learning model that effectively distills previous tasks' geometric equivariance information.
The experiments show that our method overcomes the challenge of pose-agnostic scenarios on several mainstream point cloud datasets.
- Score: 5.521693536291449
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Continual learning aims to learn multiple incoming new tasks continually
while keeping the performance on previously learned tasks at a consistent level.
However, existing research on continual learning assumes that the pose of the
object is pre-defined and well-aligned. For practical applications, this work
focuses on pose-agnostic continual learning tasks, where the object's pose
changes dynamically and unpredictably. The cost of the point cloud augmentation
adopted by past approaches rises sharply with each task increment in the
continual learning process. To address this problem, we inject equivariance as
additional prior knowledge into the network. We propose a novel continual
learning model that effectively distills previous tasks' geometric equivariance
information. The experiments show that our method overcomes the challenge of
pose-agnostic scenarios on several mainstream point cloud datasets. We further
conduct ablation studies to evaluate the contribution of each component of our
approach.
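The abstract gives no implementation details, so what follows is only a minimal sketch of one plausible reading of "distilling previous tasks' geometric equivariance information": instead of re-augmenting data for every past task, a frozen copy of the previous-task model is queried on randomly rotated copies of the current input, and the new model is regularized toward its pose-conditioned features. Everything in the sketch is an assumption for illustration, not the authors' released code or API: the PyTorch point-cloud encoders old_model and new_model, the random_rotation helper, and the loss weight lambda_kd mentioned afterwards are all hypothetical.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def random_rotation(batch_size: int, device: torch.device) -> torch.Tensor:
    """Sample a batch of proper 3x3 rotation matrices via QR decomposition."""
    a = torch.randn(batch_size, 3, 3, device=device)
    q, r = torch.linalg.qr(a)
    d = torch.sign(torch.diagonal(r, dim1=-2, dim2=-1))  # (B, 3)
    q = q * d.unsqueeze(1)                                # make the QR factor unique
    neg = torch.det(q) < 0
    q[neg, :, 2] *= -1                                    # force det = +1, i.e. SO(3)
    return q


def equivariance_distillation_loss(old_model: nn.Module,
                                   new_model: nn.Module,
                                   points: torch.Tensor,
                                   n_poses: int = 4) -> torch.Tensor:
    """Match the current model to the frozen previous-task model across random poses.

    points: (B, N, 3) point clouds from the current task (no stored old-task data).
    """
    old_model.eval()
    loss = points.new_zeros(())
    for _ in range(n_poses):
        rot = random_rotation(points.shape[0], points.device)   # (B, 3, 3)
        rotated = torch.bmm(points, rot.transpose(1, 2))         # rotate every cloud
        with torch.no_grad():
            target = old_model(rotated)                          # old pose-conditioned features
        loss = loss + F.mse_loss(new_model(rotated), target)
    return loss / n_poses
```
During training on a new task, this term (weighted by a hypothetical coefficient lambda_kd) would be added to the usual classification loss on the current task's data, with the frozen old_model standing in for a buffer of pose-augmented samples from earlier tasks.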
Related papers
- Deep Learning-Based Object Pose Estimation: A Comprehensive Survey [73.74933379151419]
We discuss the recent advances in deep learning-based object pose estimation.
Our survey also covers multiple input data modalities, degrees-of-freedom of output poses, object properties, and downstream tasks.
arXiv Detail & Related papers (2024-05-13T14:44:22Z)
- On the Convergence of Continual Learning with Adaptive Methods [4.351356718501137]
We propose an adaptive sequential method for nonconvex continual learning (NCCL).
We demonstrate that the proposed method improves the performance of existing continual learning methods on several image classification tasks.
arXiv Detail & Related papers (2024-04-08T14:28:27Z)
- Explore In-Context Learning for 3D Point Cloud Understanding [71.20912026561484]
We introduce a novel framework, named Point-In-Context, designed especially for in-context learning in 3D point clouds.
We propose the Joint Sampling module, carefully designed to work in tandem with the general point sampling operator.
We conduct extensive experiments to validate the versatility and adaptability of our proposed methods in handling a wide range of tasks.
arXiv Detail & Related papers (2023-06-14T17:53:21Z)
- Online Continual Learning via the Knowledge Invariant and Spread-out Properties [4.109784267309124]
A key challenge in continual learning is catastrophic forgetting.
We propose a new method, named Online Continual Learning via the Knowledge Invariant and Spread-out Properties (OCLKISP).
We empirically evaluate our proposed method on four popular benchmarks for continual learning: Split CIFAR 100, Split SVHN, Split CUB200 and Split Tiny-Image-Net.
arXiv Detail & Related papers (2023-02-02T04:03:38Z)
- Hierarchically Structured Task-Agnostic Continual Learning [0.0]
We take a task-agnostic view of continual learning and develop a hierarchical information-theoretic optimality principle.
We propose a neural network layer, called the Mixture-of-Variational-Experts layer, that alleviates forgetting by creating a set of information processing paths.
Our approach can operate in a task-agnostic way, i.e., it does not require the task-specific knowledge that many existing continual learning algorithms rely on.
arXiv Detail & Related papers (2022-11-14T19:53:15Z)
- Continual Object Detection: A review of definitions, strategies, and challenges [0.0]
The field of Continual Learning investigates the ability to learn consecutive tasks without losing performance on those previously learned.
We believe that research in continual object detection deserves even more attention due to its vast range of applications in robotics and autonomous vehicles.
arXiv Detail & Related papers (2022-05-30T21:57:48Z)
- Learning-based Point Cloud Registration for 6D Object Pose Estimation in the Real World [55.7340077183072]
We tackle the task of estimating the 6D pose of an object from point cloud data.
Recent learning-based approaches to this task have shown great success on synthetic datasets, but tend to fail on real-world data.
We analyze the causes of these failures, which we trace back to the difference between the feature distributions of the source and target point clouds.
arXiv Detail & Related papers (2022-03-29T07:55:04Z)
- On Generalizing Beyond Domains in Cross-Domain Continual Learning [91.56748415975683]
Deep neural networks often suffer from catastrophic forgetting of previously learned knowledge after learning a new task.
Our proposed approach learns new tasks under domain shift with accuracy boosts up to 10% on challenging datasets such as DomainNet and OfficeHome.
arXiv Detail & Related papers (2022-03-08T09:57:48Z)
- Online Continual Learning with Natural Distribution Shifts: An Empirical Study with Visual Data [101.6195176510611]
"Online" continual learning enables evaluating both information retention and online learning efficacy.
In online continual learning, each incoming small batch of data is first used for testing and then added to the training set, making the problem truly online.
We introduce a new benchmark for online continual visual learning that exhibits large scale and natural distribution shifts.
arXiv Detail & Related papers (2021-08-20T06:17:20Z)
- Continual Learning with Neuron Activation Importance [1.7513645771137178]
Continual learning is a concept of online learning with multiple sequential tasks.
One of the critical barriers to continual learning is that a network should learn a new task while keeping the knowledge of old tasks, without access to any data from the old tasks.
We propose a neuron activation importance-based regularization method for stable continual learning regardless of the order of tasks.
arXiv Detail & Related papers (2021-07-27T08:09:32Z)
- Incremental Object Detection via Meta-Learning [77.55310507917012]
We propose a meta-learning approach that learns to reshape model gradients, such that information across incremental tasks is optimally shared.
In comparison to existing meta-learning methods, our approach is task-agnostic, allows incremental addition of new classes, and scales to high-capacity models for object detection.
arXiv Detail & Related papers (2020-03-17T13:40:00Z)