Robust Outlier Rejection for 3D Registration with Variational Bayes
- URL: http://arxiv.org/abs/2304.01514v1
- Date: Tue, 4 Apr 2023 03:48:56 GMT
- Title: Robust Outlier Rejection for 3D Registration with Variational Bayes
- Authors: Haobo Jiang, Zheng Dang, Zhen Wei, Jin Xie, Jian Yang, Mathieu
Salzmann
- Abstract summary: We develop a novel variational non-local network-based outlier rejection framework for robust alignment.
We propose a voting-based inlier searching strategy to cluster the high-quality hypothetical inliers for transformation estimation.
- Score: 70.98659381852787
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning-based outlier (mismatched correspondence) rejection for robust 3D
registration generally formulates the outlier removal as an inlier/outlier
classification problem. The key to success is learning discriminative
inlier/outlier feature representations. In this paper, we
develop a novel variational non-local network-based outlier rejection framework
for robust alignment. By reformulating the non-local feature learning with
variational Bayesian inference, the Bayesian-driven long-range dependencies can
be modeled to aggregate discriminative geometric context information for
inlier/outlier distinction. Specifically, to achieve such Bayesian-driven
contextual dependencies, each query/key/value component in our non-local
network predicts a prior feature distribution and a posterior one. Embedded
with the inlier/outlier label, the posterior feature distribution is
label-dependent and discriminative. Thus, pushing the prior to be close to the
discriminative posterior in the training step enables the features sampled from
this prior at test time to model high-quality long-range dependencies. Notably,
to achieve effective posterior feature guidance, a specific probabilistic
graphical model is designed over our non-local model, which lets us derive a
variational lower bound as our optimization objective for model training.
Finally, we propose a voting-based inlier searching strategy to cluster the
high-quality hypothetical inliers for transformation estimation. Extensive
experiments on 3DMatch, 3DLoMatch, and KITTI datasets verify the effectiveness
of our method.
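To make the abstract's idea concrete, below is a minimal numpy sketch (not the authors' implementation) of the two ingredients it describes: a non-local (self-attention) pass whose projected features are sampled from a learned Gaussian, and a KL term that would push the prior feature distribution toward the label-dependent posterior during training. All names, shapes, and the tied query/key/value projection are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_diag_gauss(mu_q, var_q, mu_p, var_p):
    # KL(q || p) between diagonal Gaussians; in training, minimizing this
    # pushes the prior toward the discriminative posterior (hypothetical form).
    return 0.5 * np.sum(
        np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    )

def variational_nonlocal(feats, w_mu, w_logvar, sample=True):
    # One non-local pass where the projection is a distribution: each feature
    # gets a predicted mean and log-variance, and we sample before attending.
    mu = feats @ w_mu
    std = np.exp(0.5 * (feats @ w_logvar))
    z = mu + std * rng.standard_normal(mu.shape) if sample else mu
    # Tied query/key/value for brevity; the paper predicts separate ones.
    attn = np.exp(z @ z.T / np.sqrt(z.shape[1]))
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ z

n, d = 64, 16  # e.g. 64 putative correspondences, 16-dim features
feats = rng.standard_normal((n, d))
w_mu = rng.standard_normal((d, d)) * 0.1
w_logvar = rng.standard_normal((d, d)) * 0.01
out = variational_nonlocal(feats, w_mu, w_logvar)
```

At test time, sampling from a prior trained this way is what lets the network model the "Bayesian-driven long-range dependencies" the abstract refers to.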
Related papers
- A Likelihood Ratio-Based Approach to Segmenting Unknown Objects [4.000869978312742]
Outlier supervision is a widely used strategy for improving the OoD detection of existing segmentation networks.
We propose an adaptive, lightweight unknown estimation module (UEM) for outlier supervision.
Our approach achieves a new state-of-the-art across multiple datasets, outperforming the previous best method by 5.74% average precision points.
arXiv Detail & Related papers (2024-09-10T11:10:32Z)
- Rejection via Learning Density Ratios [50.91522897152437]
Classification with rejection emerges as a learning paradigm which allows models to abstain from making predictions.
We propose a different distributional perspective, where we seek to find an idealized data distribution which maximizes a pretrained model's performance.
Our framework is tested empirically over clean and noisy datasets.
arXiv Detail & Related papers (2024-05-29T01:32:17Z)
- Beyond the Known: Adversarial Autoencoders in Novelty Detection [2.7486022583843233]
In novelty detection, the goal is to decide if a new data point should be categorized as an inlier or an outlier.
We use a similar framework but with a lightweight deep network, and we adopt a probabilistic score with reconstruction error.
Our results indicate that our approach is effective at learning the target class, and it outperforms recent state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2024-04-06T00:04:19Z)
- Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple Logits Retargeting Approach [102.0769560460338]
We develop a simple Logits Retargeting approach (LORT) that requires no prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z)
- Leveraging Uncertainty Estimates To Improve Classifier Performance [4.4951754159063295]
Binary classification involves predicting the label of an instance based on whether the model score for the positive class exceeds a threshold chosen based on the application requirements.
However, model scores are often not aligned with the true positivity rate.
This is especially true when the training involves a differential sampling across classes or there is distributional drift between train and test settings.
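The thresholding setup described above can be sketched generically: rather than trusting a fixed cutoff on miscalibrated scores, pick the threshold from held-out data so that an application requirement (here, a target precision) is met. This is a minimal illustrative sketch, not the paper's method; all names and the synthetic score distributions are assumptions.

```python
import numpy as np

def threshold_for_precision(scores, labels, target_precision):
    # Scan prefixes of the scores sorted in descending order and return the
    # lowest threshold whose held-out precision still meets the target.
    order = np.argsort(-scores)
    s, y = scores[order], labels[order]
    tp = np.cumsum(y)
    precision = tp / np.arange(1, len(y) + 1)
    ok = np.where(precision >= target_precision)[0]
    return None if len(ok) == 0 else s[ok[-1]]

rng = np.random.default_rng(1)
# Synthetic held-out scores: positives score higher on average, but the
# score scale itself does not equal the true positivity rate.
pos = rng.normal(0.7, 0.15, 500).clip(0, 1)
neg = rng.normal(0.4, 0.15, 1500).clip(0, 1)  # class imbalance
scores = np.concatenate([pos, neg])
labels = np.concatenate([np.ones(500), np.zeros(1500)])
t = threshold_for_precision(scores, labels, 0.9)
```

Under distributional drift, this calibration step would need to be repeated on data from the test-time distribution, which is the difficulty the summary points at.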
arXiv Detail & Related papers (2023-11-20T12:40:25Z)
- Explaining Cross-Domain Recognition with Interpretable Deep Classifier [100.63114424262234]
The Interpretable Deep Classifier (IDC) learns the nearest source samples of a target sample as evidence upon which the classifier makes its decision.
Our IDC leads to a more explainable model with almost no accuracy degradation and effectively calibrates classification for optimum reject options.
arXiv Detail & Related papers (2022-11-15T15:58:56Z)
- Attentive Prototypes for Source-free Unsupervised Domain Adaptive 3D Object Detection [85.11649974840758]
3D object detection networks tend to be biased towards the data they are trained on.
We propose a single-frame approach for source-free, unsupervised domain adaptation of lidar-based 3D object detectors.
arXiv Detail & Related papers (2021-11-30T18:42:42Z)
- Homophily Outlier Detection in Non-IID Categorical Data [43.51919113927003]
This work introduces a novel outlier detection framework and its two instances to identify outliers in categorical data.
It first defines and incorporates distribution-sensitive outlier factors and their interdependence into a value-value graph-based representation.
The learned value outlierness allows for either direct outlier detection or outlying feature selection.
arXiv Detail & Related papers (2021-03-21T23:29:33Z)
- Learning while Respecting Privacy and Robustness to Distributional Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
Proposed algorithms offer robustness with little overhead.
arXiv Detail & Related papers (2020-07-07T18:25:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.