Improving the Intra-class Long-tail in 3D Detection via Rare Example Mining
- URL: http://arxiv.org/abs/2210.08375v1
- Date: Sat, 15 Oct 2022 20:52:07 GMT
- Title: Improving the Intra-class Long-tail in 3D Detection via Rare Example Mining
- Authors: Chiyu Max Jiang, Mahyar Najibi, Charles R. Qi, Yin Zhou, Dragomir Anguelov
- Abstract summary: Even the best performing models suffer from the most naive mistakes when it comes to rare examples.
We show that rareness is the key to data-centric improvements for 3D detectors, since rareness is the result of a lack of data support.
We propose a general and effective method to identify the rareness of objects based on density estimation in the feature space.
- Score: 29.699694480757472
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Continued improvements in deep learning architectures have steadily advanced
the overall performance of 3D object detectors to levels on par with humans for
certain tasks and datasets, where the overall performance is mostly driven by
common examples. However, even the best performing models suffer from the most
naive mistakes when it comes to rare examples that do not appear frequently in
the training data, such as vehicles with irregular geometries. Most studies in
the long-tail literature focus on class-imbalanced classification problems with
known imbalanced label counts per class, but they are not directly applicable
to the intra-class long-tail examples in problems with large intra-class
variations such as 3D object detection, where instances with the same class
label can have drastically varied properties such as shapes and sizes. Other
works propose to mitigate this problem using active learning based on the
criteria of uncertainty, difficulty, or diversity. In this study, we identify a
new conceptual dimension - rareness - to mine new data for improving the
long-tail performance of models. We show that rareness, as opposed to
difficulty, is the key to data-centric improvements for 3D detectors, since
rareness is the result of a lack of data support while difficulty is related to
the fundamental ambiguity in the problem. We propose a general and effective
method to identify the rareness of objects based on density estimation in the
feature space using flow models, and propose a principled cost-aware
formulation for mining rare object tracks, which improves overall model
performance, but more importantly - significantly improves the performance for
rare objects (by 30.97%).
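The mining criterion described above can be sketched in a few lines. This is a minimal stand-in, not the paper's implementation: it substitutes a Gaussian kernel density estimate for the flow-based density model, and all function names (`kde_log_density`, `mine_rare`) and the bandwidth value are hypothetical. The idea it illustrates is the same: score each object's feature embedding by its log-density under the training feature distribution, and mine the lowest-density (rarest) objects.

```python
import numpy as np

def kde_log_density(query, train, bandwidth=0.5):
    """Log-density of each query feature under a Gaussian KDE fit on
    training features. A simple stand-in for the paper's flow model."""
    n, d = train.shape
    # Pairwise squared distances between query and training features: (m, n).
    d2 = ((query[:, None, :] - train[None, :, :]) ** 2).sum(-1)
    log_kernel = -d2 / (2.0 * bandwidth**2)
    # Normalizer of an isotropic Gaussian kernel, averaged over n points.
    log_norm = -0.5 * d * np.log(2.0 * np.pi * bandwidth**2) - np.log(n)
    # Log-sum-exp over training points for numerical stability.
    m = log_kernel.max(axis=1, keepdims=True)
    return m.squeeze(1) + np.log(np.exp(log_kernel - m).sum(axis=1)) + log_norm

def mine_rare(features, train_features, k):
    """Return indices of the k lowest-density (rarest) objects."""
    scores = kde_log_density(features, train_features)
    return np.argsort(scores)[:k]
```

In this sketch, an object whose embedding falls far from the bulk of the training features receives a low log-density and is ranked first for mining; the paper's cost-aware formulation over object tracks is not modeled here.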
Related papers
- Granularity Matters in Long-Tail Learning [62.30734737735273]
We offer a novel perspective on long-tail learning, inspired by an observation: datasets with finer granularity tend to be less affected by data imbalance.
We introduce open-set auxiliary classes that are visually similar to existing ones, aiming to enhance representation learning for both head and tail classes.
To prevent the overwhelming presence of auxiliary classes from disrupting training, we introduce a neighbor-silencing loss.
arXiv Detail & Related papers (2024-10-21T13:06:21Z)
- Self-supervised Feature Adaptation for 3D Industrial Anomaly Detection [59.41026558455904]
We focus on multi-modal anomaly detection. Specifically, we investigate early multi-modal approaches that attempted to utilize models pre-trained on large-scale visual datasets.
We propose a Local-to-global Self-supervised Feature Adaptation (LSFA) method to finetune the adaptors and learn task-oriented representation toward anomaly detection.
arXiv Detail & Related papers (2024-01-06T07:30:41Z)
- Unraveling the "Anomaly" in Time Series Anomaly Detection: A Self-supervised Tri-domain Solution [89.16750999704969]
The scarcity of anomaly labels hinders traditional supervised models in time series anomaly detection.
Various SOTA deep learning techniques, such as self-supervised learning, have been introduced to tackle this issue.
We propose a novel self-supervised learning based Tri-domain Anomaly Detector (TriAD).
arXiv Detail & Related papers (2023-11-19T05:37:18Z)
- 3D Adversarial Augmentations for Robust Out-of-Domain Predictions [115.74319739738571]
We focus on improving the generalization to out-of-domain data.
We learn a set of vectors that deform the objects in an adversarial fashion.
We perform adversarial augmentation by applying the learned sample-independent vectors to the available objects when training a model.
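The augmentation step described above can be sketched as a sign-gradient perturbation of the point cloud. This is a hedged simplification: the paper learns sample-independent deformation vectors, while for brevity this sketch computes a per-sample deformation; the function name, step size, and budget are all assumptions, and `loss_grad` stands in for the gradient of an unspecified detector loss.

```python
import numpy as np

def adversarial_deform(points, loss_grad, step=0.01, n_iters=10, budget=0.05):
    """Deform a point cloud (N, 3) to increase the detector loss.

    loss_grad(points) must return dLoss/dpoints with the same shape.
    The deformation is built by repeated sign-gradient steps and clipped
    to a per-coordinate budget so the object stays plausible.
    """
    delta = np.zeros_like(points)
    for _ in range(n_iters):
        g = loss_grad(points + delta)
        delta = np.clip(delta + step * np.sign(g), -budget, budget)
    return points + delta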
arXiv Detail & Related papers (2023-08-29T17:58:55Z)
- Generalized Few-Shot 3D Object Detection of LiDAR Point Cloud for Autonomous Driving [91.39625612027386]
We propose a novel task, called generalized few-shot 3D object detection, where we have a large amount of training data for common (base) objects, but only a few data for rare (novel) classes.
Specifically, we analyze in-depth differences between images and point clouds, and then present a practical principle for the few-shot setting in the 3D LiDAR dataset.
To solve this task, we propose an incremental fine-tuning method to extend existing 3D detection models to recognize both common and rare objects.
arXiv Detail & Related papers (2023-02-08T07:11:36Z)
- Anomaly Detection via Multi-Scale Contrasted Memory [3.0170109896527086]
We introduce a new two-stage anomaly detector which memorizes multi-scale normal prototypes during training to compute an anomaly deviation score.
Our model substantially improves state-of-the-art performance on a wide range of object, style and local anomalies, with up to 35% relative error improvement on CIFAR-10.
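The deviation score described above can be sketched minimally: score a test feature by its distance to the nearest memorized normal prototype at each scale, then aggregate across scales. This is only an illustration of the memory-based scoring idea; the averaging aggregation and the function name are assumptions, not the paper's formulation.

```python
import numpy as np

def anomaly_score(feature, prototype_banks):
    """Deviation score for one feature vector.

    prototype_banks: list of (n_i, d) arrays, one bank of memorized
    normal prototypes per scale. For each scale, take the distance to
    the nearest prototype; average over scales (an assumed aggregation).
    """
    per_scale = [np.min(np.linalg.norm(bank - feature, axis=1))
                 for bank in prototype_banks]
    return float(np.mean(per_scale))
```

A feature close to some memorized normal prototype scores near zero, while one far from every prototype at every scale receives a high deviation score and is flagged as anomalous.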
arXiv Detail & Related papers (2022-11-16T16:58:04Z)
- Towards Long-Tailed 3D Detection [56.82185415482943]
We study the problem of Long-Tailed 3D Detection (LT3D), which evaluates on all classes, including those in-the-tail.
Our modifications improve accuracy by 5% AP on average for all classes, and dramatically improve AP for rare classes.
arXiv Detail & Related papers (2022-11-16T06:00:47Z)
- Few-shot Deep Representation Learning based on Information Bottleneck Principle [0.0]
In a standard anomaly detection problem, a detection model is trained in an unsupervised setting, under an assumption that the samples were generated from a single source of normal data.
In practice, normal data often consist of multiple classes. In such settings, learning to differentiate between normal instances and anomalies among discrepancies between normal classes without large-scale labeled data presents a significant challenge.
In this work, we attempt to overcome this challenge by preparing few examples from each normal class, which is not excessively costly.
arXiv Detail & Related papers (2021-11-25T07:15:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.