Outlier-Robust Estimation: Hardness, Minimally Tuned Algorithms, and
Applications
- URL: http://arxiv.org/abs/2007.15109v3
- Date: Fri, 2 Jul 2021 16:19:31 GMT
- Title: Outlier-Robust Estimation: Hardness, Minimally Tuned Algorithms, and
Applications
- Authors: Pasquale Antonante, Vasileios Tzoumas, Heng Yang, Luca Carlone
- Abstract summary: This paper introduces two unifying formulations for outlier-robust estimation, Generalized Maximum Consensus (G-MC) and Generalized Truncated Least Squares (G-TLS)
Our first contribution is a proof that outlier-robust estimation is inapproximable: in the worst case, it is impossible to (even approximately) find the set of outliers.
We propose the first minimally tuned algorithms for outlier rejection, which dynamically decide how to separate inliers from outliers.
- Score: 25.222024234900445
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nonlinear estimation in robotics and vision is typically plagued with
outliers due to wrong data association, or to incorrect detections from signal
processing and machine learning methods. This paper introduces two unifying
formulations for outlier-robust estimation, Generalized Maximum Consensus
(G-MC) and Generalized Truncated Least Squares (G-TLS), and investigates
fundamental limits, practical algorithms, and applications. Our first
contribution is a proof that outlier-robust estimation is inapproximable: in
the worst case, it is impossible to (even approximately) find the set of
outliers, even with slower-than-polynomial-time algorithms (particularly,
algorithms running in quasi-polynomial time). As a second contribution, we
review and extend two general-purpose algorithms. The first, Adaptive Trimming
(ADAPT), is combinatorial, and is suitable for G-MC; the second, Graduated
Non-Convexity (GNC), is based on homotopy methods, and is suitable for G-TLS.
We extend ADAPT and GNC to the case where the user does not have prior
knowledge of the inlier-noise statistics (or the statistics may vary over time)
and is unable to guess a reasonable threshold to separate inliers from outliers
(such as the one commonly used in RANSAC). We propose the first minimally
tuned algorithms for outlier rejection, which dynamically decide how to
separate inliers from outliers. Our third contribution is an evaluation of the proposed
algorithms on robot perception problems: mesh registration, image-based object
detection (shape alignment), and pose graph optimization. ADAPT and GNC execute
in real-time, are deterministic, outperform RANSAC, and are robust up to 80-90%
outliers. Their minimally tuned versions also compare favorably with the state
of the art, even though they do not rely on a noise bound for the inliers.
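To make the homotopy idea behind GNC concrete, the following is a minimal sketch of Graduated Non-Convexity applied to a Truncated Least Squares cost, on the toy problem of estimating a single scalar location from corrupted measurements. The function name, the toy problem, the inlier threshold `c`, and the schedule factor `mu_factor` are illustrative assumptions, not the paper's implementation; the weight-update formulas follow the standard GNC-TLS surrogate.

```python
import numpy as np

def gnc_tls_mean(x, c=0.5, mu_factor=1.4, max_iters=100):
    """Sketch of GNC for a Truncated Least Squares (TLS) cost.

    Estimates a scalar location (a weighted mean) from measurements `x`
    while rejecting outliers. `c` is the inlier residual threshold;
    `mu` is the homotopy parameter, starting near zero (a smoothed,
    nearly convex surrogate) and growing toward the true TLS cost.
    """
    x = np.asarray(x, dtype=float)
    theta = x.mean()                        # non-robust initial estimate
    r2 = np.maximum((x - theta) ** 2, 1e-12)
    # Start heavily smoothed: mu chosen so the largest residual is benign.
    mu = max(c**2 / (2.0 * r2.max() - c**2), 1e-6)
    w = np.ones_like(x)
    for _ in range(max_iters):
        # Closed-form weight update for the TLS surrogate at this mu.
        upper = (mu + 1.0) / mu * c**2      # residuals above: reject (w=0)
        lower = mu / (mu + 1.0) * c**2      # residuals below: keep (w=1)
        w = np.where(r2 >= upper, 0.0,
            np.where(r2 <= lower, 1.0,
                     c / np.sqrt(r2) * np.sqrt(mu * (mu + 1.0)) - mu))
        if w.sum() == 0:                    # degenerate: everything rejected
            break
        theta = np.average(x, weights=w)    # weighted least-squares step
        r2 = np.maximum((x - theta) ** 2, 1e-12)
        mu *= mu_factor                     # tighten surrogate toward TLS
        if mu > 1e6:                        # surrogate is essentially TLS
            break
    return theta, w
```

With six inliers near 5.0 and two gross outliers, the weighted estimate converges to the inlier mean while the outlier weights are driven to zero; this deterministic alternation of weight updates and weighted least-squares solves is what replaces RANSAC's random sampling.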
Related papers
- Towards Better Out-of-Distribution Generalization of Neural Algorithmic
Reasoning Tasks [51.8723187709964]
We study the OOD generalization of neural algorithmic reasoning tasks.
The goal is to learn an algorithm from input-output pairs using deep neural networks.
arXiv Detail & Related papers (2022-11-01T18:33:20Z) - Outlier-Robust Geometric Perception: A Novel Thresholding-Based Estimator with Intra-Class Variance Maximization [4.3487328134753795]
We present a novel general-purpose robust estimator TIVM (Thresholding with Intra-class Variance Maximization)
It can collaborate with standard non-minimal solvers to efficiently reject outliers for geometric perception problems.
Our estimator can retain approximately the same level of robustness even when the inlier-noise statistics of the problem are fully unknown.
arXiv Detail & Related papers (2022-04-04T08:57:34Z) - Regret Bounds for Expected Improvement Algorithms in Gaussian Process
Bandit Optimization [63.8557841188626]
The expected improvement (EI) algorithm is one of the most popular strategies for optimization under uncertainty.
We propose a variant of EI with a standard incumbent defined via the GP predictive mean.
We show that our algorithm converges, and achieves a cumulative regret bound of $\mathcal{O}(\gamma_T \sqrt{T})$.
arXiv Detail & Related papers (2022-03-15T13:17:53Z) - Robust Algorithms for GMM Estimation: A Finite Sample Viewpoint [30.839245814393724]
A generic method of solving moment conditions is the Generalized Method of Moments (GMM)
We develop a GMM estimator that can tolerate a constant fraction of outliers, with an $\ell_2$ recovery guarantee of $O(\sqrt{\epsilon})$.
Our algorithm and assumptions apply to instrumental variables linear and logistic regression.
arXiv Detail & Related papers (2021-10-06T21:06:22Z) - Efficient Large Scale Inlier Voting for Geometric Vision Problems [3.1231364554255636]
Outlier rejection and equivalently inlier set optimization is a key ingredient in numerous applications in computer vision.
We present an efficient and general algorithm for outlier rejection based on "intersecting" $k$-dimensional surfaces in $\mathbb{R}^d$.
We demonstrate the versatility of the approach on several camera posing problems with a high number of matches at low inlier ratio.
arXiv Detail & Related papers (2021-07-25T14:13:07Z) - Correcting Momentum with Second-order Information [50.992629498861724]
We develop a new algorithm for non-convex stochastic optimization that finds an $\epsilon$-critical point using an optimal number of stochastic gradient and Hessian-vector product evaluations.
We validate our results on a variety of large-scale deep learning benchmarks and architectures.
arXiv Detail & Related papers (2021-03-04T19:01:20Z) - Adaptive Sampling for Best Policy Identification in Markov Decision
Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z) - Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth
Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point each step.
Our results are expressed in a form of simultaneous primal and dual side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z) - Revisiting Robust Model Fitting Using Truncated Loss [19.137291311347788]
New algorithms are applied to various 2D/3D registration problems.
They outperform RANSAC and approximate MC methods at high outlier ratios.
The new algorithms also compare favorably with state-of-the-art registration methods, especially under high noise and outlier rates.
arXiv Detail & Related papers (2020-08-04T14:10:41Z) - Quasi-Newton Solver for Robust Non-Rigid Registration [35.66014845211251]
We propose a formulation for robust non-rigid registration based on a globally smooth robust estimator for data fitting and regularization.
We apply the majorization-minimization algorithm to the problem, which reduces each iteration to solving a simple least-squares problem with L-BFGS.
arXiv Detail & Related papers (2020-04-09T01:45:05Z) - Lagrangian Decomposition for Neural Network Verification [148.0448557991349]
A fundamental component of neural network verification is the computation of bounds on the values their outputs can take.
We propose a novel approach based on Lagrangian Decomposition.
We show that we obtain bounds comparable with off-the-shelf solvers in a fraction of their running time.
arXiv Detail & Related papers (2020-02-24T17:55:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.