Outlier-Robust Estimation: Hardness, Minimally Tuned Algorithms, and Applications
- URL: http://arxiv.org/abs/2007.15109v3
- Date: Fri, 2 Jul 2021 16:19:31 GMT
- Title: Outlier-Robust Estimation: Hardness, Minimally Tuned Algorithms, and Applications
- Authors: Pasquale Antonante, Vasileios Tzoumas, Heng Yang, Luca Carlone
- Abstract summary: This paper introduces two unifying formulations for outlier-robust estimation, Generalized Maximum Consensus (G-MC) and Generalized Truncated Least Squares (G-TLS).
Our first contribution is a proof that outlier-robust estimation is inapproximable: in the worst case, it is impossible to (even approximately) find the set of outliers.
We propose the first minimally tuned algorithms for outlier rejection, which dynamically decide how to separate inliers from outliers.
- Score: 25.222024234900445
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nonlinear estimation in robotics and vision is typically plagued with
outliers due to wrong data association, or to incorrect detections from signal
processing and machine learning methods. This paper introduces two unifying
formulations for outlier-robust estimation, Generalized Maximum Consensus
(G-MC) and Generalized Truncated Least Squares (G-TLS), and investigates
fundamental limits, practical algorithms, and applications. Our first
contribution is a proof that outlier-robust estimation is inapproximable: in
the worst case, it is impossible to (even approximately) find the set of
outliers, even with slower-than-polynomial-time algorithms (particularly,
algorithms running in quasi-polynomial time). As a second contribution, we
review and extend two general-purpose algorithms. The first, Adaptive Trimming
(ADAPT), is combinatorial, and is suitable for G-MC; the second, Graduated
Non-Convexity (GNC), is based on homotopy methods, and is suitable for G-TLS.
We extend ADAPT and GNC to the case where the user has no prior
knowledge of the inlier-noise statistics (or the statistics may vary over time)
and cannot guess a reasonable threshold to separate inliers from outliers (such
as the one commonly used in RANSAC). We propose the first minimally tuned
algorithms for outlier rejection, which dynamically decide how to separate
inliers from outliers. Our third contribution is an evaluation of the proposed
algorithms on robot perception problems: mesh registration, image-based object
detection (shape alignment), and pose graph optimization. ADAPT and GNC execute
in real time, are deterministic, outperform RANSAC, and are robust to 80-90%
outliers. Their minimally tuned versions also compare favorably with the state
of the art, even though they do not rely on a noise bound for the inliers.
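To make the two formulations concrete, here is a minimal sketch in simplified, unweighted form, writing r_i(x) for the residual of measurement i at estimate x and c̄ for the largest error expected from an inlier (the paper's general definitions may add per-measurement weights and noise bounds):

```latex
% G-MC: keep the largest set of measurements whose residuals
% are consistent with the inlier noise bound \bar{c}.
\max_{x,\; \mathcal{I} \subseteq \{1,\dots,n\}} |\mathcal{I}|
  \quad \text{s.t.} \quad r_i(x) \le \bar{c} \quad \forall\, i \in \mathcal{I}

% G-TLS: quadratic penalty for small residuals, constant penalty
% (zero influence) once a residual exceeds the bound.
\min_{x} \; \sum_{i=1}^{n} \min\!\bigl( r_i^2(x),\; \bar{c}^{\,2} \bigr)
```

GNC minimizes the G-TLS cost by homotopy: it replaces the truncated cost with a smooth surrogate controlled by a parameter mu, alternates between a weighted least-squares step and a closed-form weight update, and gradually increases mu until the surrogate matches the truncated cost. The Python sketch below follows the GNC-TLS update rules of Yang et al.; `residual_fn` and `solve_weighted_lsq` are hypothetical placeholders for a problem-specific residual function and non-minimal solver, and the schedule constants are illustrative, not the paper's:

```python
import numpy as np

def gnc_tls(residual_fn, solve_weighted_lsq, n, c_bar,
            mu_update=1.4, max_iters=100):
    """Minimal GNC sketch for a truncated-least-squares (G-TLS) cost.

    residual_fn(x) -> length-n array of residuals r_i(x);
    solve_weighted_lsq(w) -> x minimizing sum_i w_i * r_i(x)^2
    (a problem-specific non-minimal solver); c_bar is the maximum
    inlier residual. Names and constants are illustrative placeholders.
    """
    w = np.ones(n)                  # start by trusting every measurement
    x = solve_weighted_lsq(w)
    r2 = residual_fn(x) ** 2
    # Start near the convex end of the homotopy; clamp in case all
    # residuals are already within the inlier bound.
    mu = max(c_bar**2 / max(2.0 * r2.max() - c_bar**2, 1e-9), 1e-9)
    for _ in range(max_iters):
        x = solve_weighted_lsq(w)   # weighted least-squares step
        r2 = residual_fn(x) ** 2
        # Closed-form weight update for the TLS surrogate.
        lo = mu / (mu + 1.0) * c_bar**2
        hi = (mu + 1.0) / mu * c_bar**2
        mid = c_bar * np.sqrt(mu * (mu + 1.0) / np.maximum(r2, 1e-12)) - mu
        w = np.where(r2 <= lo, 1.0, np.where(r2 >= hi, 0.0, mid))
        if np.all((w == 0.0) | (w == 1.0)):
            break                   # weights are binary: TLS cost reached
        mu *= mu_update             # move toward the non-convex cost
    return x, w                     # estimate and soft inlier weights
```

The combinatorial counterpart, ADAPT, admits an equally short loop (not sketched here): solve on the current measurement set and, while the maximum residual exceeds the inlier bound, trim the measurements with the largest residuals and re-solve. Roughly speaking, the minimally tuned variants of both algorithms replace the fixed bound c̄ with an estimate updated at run time.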
Related papers
- Outlier-Robust Training of Machine Learning Models [21.352210662488112]
We propose an Adaptive Alternation Algorithm for training machine learning models with outliers.
The algorithm iteratively trains the model using a weighted version of the non-robust loss, while updating the weights at each iteration.
Considering arbitrary outliers (i.e., with no distributional assumption on the outliers), we show that the use of robust loss kernels σ increases the region of convergence.
arXiv Detail & Related papers (2024-12-31T04:19:53Z)
- Simple and Provable Scaling Laws for the Test-Time Compute of Large Language Models [70.07661254213181]
We propose two principled algorithms for the test-time compute of large language models.
We prove theoretically that the failure probability of one algorithm decays to zero exponentially as its test-time compute grows.
arXiv Detail & Related papers (2024-11-29T05:29:47Z)
- Towards Better Out-of-Distribution Generalization of Neural Algorithmic Reasoning Tasks [51.8723187709964]
We study the OOD generalization of neural algorithmic reasoning tasks.
The goal is to learn an algorithm from input-output pairs using deep neural networks.
arXiv Detail & Related papers (2022-11-01T18:33:20Z)
- Outlier-Robust Geometric Perception: A Novel Thresholding-Based Estimator with Intra-Class Variance Maximization [4.3487328134753795]
We present a novel general-purpose robust estimator, TIVM (Thresholding with Intra-class Variance Maximization).
It can collaborate with standard non-minimal solvers to efficiently reject outliers for geometric perception problems.
Our estimator can retain approximately the same level of robustness even when the inlier-noise statistics of the problem are fully unknown.
arXiv Detail & Related papers (2022-04-04T08:57:34Z)
- Correcting Momentum with Second-order Information [50.992629498861724]
We develop a new algorithm for non-convex stochastic optimization that finds an ε-critical point using an optimal number of stochastic gradient and Hessian-vector product computations.
We validate our results on a variety of large-scale deep learning benchmarks and architectures.
arXiv Detail & Related papers (2021-03-04T19:01:20Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
- Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point per step.
Our results are expressed in the form of simultaneous primal- and dual-side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z)
- Revisiting Robust Model Fitting Using Truncated Loss [19.137291311347788]
The new algorithms are applied to various 2D/3D registration problems.
They outperform RANSAC and approximate MC methods at high outlier ratios.
The new algorithms also compare favorably with state-of-the-art registration methods, especially under high noise and outlier ratios.
arXiv Detail & Related papers (2020-08-04T14:10:41Z)
- Quasi-Newton Solver for Robust Non-Rigid Registration [35.66014845211251]
We propose a formulation for robust non-rigid registration based on a globally smooth robust estimator for data fitting and regularization.
We apply the majorization-minimization algorithm to the problem, which reduces each iteration to solving a simple least-squares problem with L-BFGS.
arXiv Detail & Related papers (2020-04-09T01:45:05Z)
- Lagrangian Decomposition for Neural Network Verification [148.0448557991349]
A fundamental component of neural network verification is the computation of bounds on the values their outputs can take.
We propose a novel approach based on Lagrangian Decomposition.
We show that we obtain bounds comparable with off-the-shelf solvers in a fraction of their running time.
arXiv Detail & Related papers (2020-02-24T17:55:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.