On the Tightness of Semidefinite Relaxations for Rotation Estimation
- URL: http://arxiv.org/abs/2101.02099v1
- Date: Wed, 6 Jan 2021 15:42:02 GMT
- Title: On the Tightness of Semidefinite Relaxations for Rotation Estimation
- Authors: Lucas Brynte, Viktor Larsson, José Pedro Iglesias, Carl Olsson,
Fredrik Kahl
- Abstract summary: Semidefinite relaxations have been successful in numerous applications in computer vision and robotics.
A general framework based on tools from algebraic geometry is introduced for analyzing semidefinite relaxations.
We show that for some problem classes, an appropriate rotation parametrization guarantees tight relaxations.
- Score: 49.49997461835141
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Why is it that semidefinite relaxations have been so successful in numerous
applications in computer vision and robotics for solving non-convex
optimization problems involving rotations? In studying the empirical
performance, we note that there are hardly any failure cases reported in the
literature, motivating us to approach these problems from a theoretical
perspective.
A general framework based on tools from algebraic geometry is introduced for
analyzing the power of semidefinite relaxations of problems with quadratic
objective functions and rotational constraints. Applications include
registration, hand-eye calibration, camera resectioning and rotation averaging.
We characterize the extreme points, and show that there are plenty of failure
cases for which the relaxation is not tight, even in the case of a single
rotation. We also show that for some problem classes, an appropriate rotation
parametrization guarantees tight relaxations. Our theoretical findings are
accompanied with numerical simulations, providing further evidence and
understanding of the results.
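As a concrete illustration of the kind of problem the paper analyzes, the sketch below sets up the semidefinite relaxation of single-rotation point registration, min over R in SO(3) of sum_i ||R a_i - b_i||^2: the vectorized rotation is lifted to a positive semidefinite matrix, and tightness is checked by inspecting the rank of the optimizer. This is a minimal sketch under stated assumptions, not the authors' code: it uses cvxpy on synthetic data, imposes only the column-orthonormality constraints (so it relaxes over O(3)), and omits the cubic determinant/handedness constraint.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

# Synthetic registration instance: b_i = R_gt a_i + noise.
A = rng.standard_normal((3, 10))                    # points a_i as columns
R_gt, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(R_gt) < 0:                         # force det(R_gt) = +1
    R_gt[:, 0] *= -1.0
B = R_gt @ A + 0.01 * rng.standard_normal(A.shape)

# Objective: sum_i ||R a_i - b_i||^2 = const - 2 sum_i b_i^T R a_i, which is
# linear in r = vec(R) (column-major), since b^T R a = kron(a, b)^T r.
# Encode it in the border of a 10x10 matrix Q so that x^T Q x recovers it
# for the homogenized variable x = [r; 1].
c = sum(np.kron(A[:, i], B[:, i]) for i in range(A.shape[1]))
Q = np.zeros((10, 10))
Q[:9, 9] = -c
Q[9, :9] = -c

# Lift: X ~ x x^T, and relax the rank-1 constraint to X >= 0 (PSD).
X = cp.Variable((10, 10), symmetric=True)
constraints = [X >> 0, X[9, 9] == 1]
# Column orthonormality R^T R = I (column j of R occupies r[3j:3j+3]).
# Without det(R) = +1 (a cubic constraint) this relaxes over O(3), not SO(3).
for i in range(3):
    for j in range(i, 3):
        G = np.zeros((10, 10))
        G[3 * i:3 * i + 3, 3 * j:3 * j + 3] = np.eye(3)
        constraints.append(cp.trace(G @ X) == (1.0 if i == j else 0.0))

prob = cp.Problem(cp.Minimize(cp.trace(Q @ X)), constraints)
prob.solve()

# Tightness check: the relaxation is tight iff the optimizer has rank one,
# in which case x (and hence R) is recovered from the leading eigenvector.
w, V = np.linalg.eigh(X.value)
print("top two eigenvalues:", w[-1], w[-2])   # rank one <=> w[-2] ~ 0
x = V[:, -1] / V[9, -1]                       # rescale so the last entry is 1
R_est = x[:9].reshape(3, 3, order="F")
print("rotation error:", np.linalg.norm(R_est - R_gt))
```

If the two printed eigenvalues differ by many orders of magnitude, the relaxation is tight and the rotation is recovered up to noise; when it is not, adding the redundant row-orthonormality constraints (R R^T = I) is a standard way to strengthen the relaxation.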
Related papers
- Near-Optimal Solutions of Constrained Learning Problems [85.48853063302764]
In machine learning systems, the need to curtail their behavior has become increasingly apparent.
This is evidenced by recent advancements towards developing models that satisfy robustness requirements.
Our results show that rich parametrizations effectively mitigate the non-convexity of these constrained learning problems.
arXiv Detail & Related papers (2024-03-18T14:55:45Z) - Curvature-Independent Last-Iterate Convergence for Games on Riemannian
Manifolds [77.4346324549323]
We show that a step size agnostic to the curvature of the manifold achieves a curvature-independent and linear last-iterate convergence rate.
To the best of our knowledge, the possibility of curvature-independent rates and/or last-iterate convergence has not been considered before.
arXiv Detail & Related papers (2023-06-29T01:20:44Z) - Gradient descent provably escapes saddle points in the training of shallow ReLU networks [6.458742319938318]
We prove a variant of the relevant dynamical systems result, a center-stable manifold theorem, in which we relax some of the regularity requirements.
Building on a detailed examination of critical points of the square integral loss function for shallow ReLU and leaky ReLU networks, we show that gradient descent provably escapes most saddle points.
arXiv Detail & Related papers (2022-08-03T14:08:52Z) - Generalization of Neural Combinatorial Solvers Through the Lens of
Adversarial Robustness [68.97830259849086]
Most datasets only capture a simpler subproblem and likely suffer from spurious features.
We study adversarial robustness - a local generalization property - to reveal hard, model-specific instances and spurious features.
Unlike in other applications, where perturbation models are designed around subjective notions of imperceptibility, our perturbation models are efficient and sound.
Surprisingly, with such perturbations, a sufficiently expressive neural solver does not suffer from the limitations of the accuracy-robustness trade-off common in supervised learning.
arXiv Detail & Related papers (2021-10-21T07:28:11Z) - Learning a Single Neuron with Bias Using Gradient Descent [53.15475693468925]
We study the fundamental problem of learning a single neuron with a bias term.
We show that this is a significantly different and more challenging problem than the bias-less case.
arXiv Detail & Related papers (2021-06-02T12:09:55Z) - On the Robustness of Multi-View Rotation Averaging [77.09542018140823]
We introduce the $\epsilon$-cycle consistency term into the solver.
We implicitly constrain the negative effect of erroneous measurements by down-weighting them (see the sketch after this list).
Experiment results demonstrate that our proposed approach outperforms the state of the art on various benchmarks.
arXiv Detail & Related papers (2021-02-09T05:47:37Z) - From Symmetry to Geometry: Tractable Nonconvex Problems [20.051126124841076]
We discuss the role of curvature in the landscape and the different roles of symmetries.
This area is rich with observed phenomena and open problems; we close by highlighting directions for future research.
arXiv Detail & Related papers (2020-07-14T01:19:15Z) - Convergence of a Stochastic Gradient Method with Momentum for Non-Smooth
Non-Convex Optimization [25.680334940504405]
This paper establishes the convergence rate of a stochastic subgradient method with momentum for non-smooth, non-convex constrained problems.
We also show how the unconstrained case can be analyzed under weaker assumptions than the state of the art.
arXiv Detail & Related papers (2020-02-13T12:10:17Z)
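On the rotation-averaging entry above: the sketch below illustrates the generic idea of down-weighting relative rotations by their cycle-consistency error, where the composition of measurements around a 3-cycle should return to the identity. The function names, the IRLS-style weight, and the eps parameter are hypothetical, not taken from that paper.

```python
import numpy as np

def cycle_angle(R_ij, R_jk, R_ki):
    """Angle (radians) by which the composed 3-cycle misses the identity."""
    C = R_ki @ R_jk @ R_ij               # equals I for a consistent cycle
    return np.arccos(np.clip((np.trace(C) - 1.0) / 2.0, -1.0, 1.0))

def robust_weight(angle, eps=0.05):
    """IRLS-style weight in (0, 1]; the hypothetical eps sets the inlier scale."""
    return eps ** 2 / (eps ** 2 + angle ** 2)

# A perfectly consistent cycle keeps full weight:
I = np.eye(3)
assert abs(robust_weight(cycle_angle(I, I, I)) - 1.0) < 1e-12
```

Measurements participating in badly inconsistent cycles thus contribute little to the averaging objective, which is one common way to limit the influence of erroneous relative rotations.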