On the Distribution of Minima in Intrinsic-Metric Rotation Averaging
- URL: http://arxiv.org/abs/2003.08310v1
- Date: Wed, 18 Mar 2020 16:03:22 GMT
- Title: On the Distribution of Minima in Intrinsic-Metric Rotation Averaging
- Authors: Kyle Wilson and David Bindel
- Abstract summary: This paper studies the spatial distribution of local minima in a Rotation Averaging problem.
By recognizing the quotient manifold geometry of the problem we achieve an $n$-fold improvement over prior work and extend the analysis to general $l_p$ costs.
Our results suggest using algebraic connectivity as an indicator of problem difficulty.
- Score: 14.226664427133901
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rotation Averaging is a non-convex optimization problem that determines
orientations of a collection of cameras from their images of a 3D scene. The
problem has been studied using a variety of distances and robustifiers. The
intrinsic (or geodesic) distance on SO(3) is geometrically meaningful; but
while some extrinsic distance-based solvers admit (conditional) guarantees of
correctness, no comparable results have been found under the intrinsic metric.
In this paper, we study the spatial distribution of local minima. First, we
do a novel empirical study to demonstrate sharp transitions in qualitative
behavior: as problems become noisier, they transition from a single
(easy-to-find) dominant minimum to a cost surface filled with minima. In the
second part of this paper we derive a theoretical bound for when this
transition occurs. This is an extension of the results of [24], which used
local convexity as a proxy to study the difficulty of the problem. By recognizing
the underlying quotient manifold geometry of the problem we achieve an $n$-fold
improvement over prior work. Incidentally, our analysis also extends the prior
$l_2$ work to general $l_p$ costs. Our results suggest using algebraic
connectivity as an indicator of problem difficulty.
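As a concrete illustration of the quantities discussed above, the snippet below is a minimal sketch (not the authors' implementation) that computes the intrinsic (geodesic) distance on SO(3), an $l_p$ rotation-averaging cost over a viewing graph, and the graph's algebraic connectivity (the second-smallest Laplacian eigenvalue). The names `rotations`, `edges`, and `rel_rots` are illustrative assumptions, not from the paper.

```python
import numpy as np

def geodesic_distance(Ra, Rb):
    # Intrinsic (geodesic) distance on SO(3): the rotation angle of Ra^T Rb, in [0, pi].
    cos_theta = (np.trace(Ra.T @ Rb) - 1.0) / 2.0
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

def lp_cost(rotations, edges, rel_rots, p=2):
    # Sum over edges (i, j) of d(R_j, R_ij R_i)^p under the intrinsic metric,
    # where rel_rots[(i, j)] is the measured relative rotation R_ij.
    return sum(
        geodesic_distance(rotations[j], rel_rots[(i, j)] @ rotations[i]) ** p
        for (i, j) in edges
    )

def algebraic_connectivity(n_cameras, edges):
    # Second-smallest eigenvalue of the unweighted graph Laplacian (the
    # Fiedler value); larger values indicate a better-connected viewing graph.
    L = np.zeros((n_cameras, n_cameras))
    for i, j in edges:
        L[i, i] += 1.0
        L[j, j] += 1.0
        L[i, j] -= 1.0
        L[j, i] -= 1.0
    return np.linalg.eigvalsh(L)[1]  # eigvalsh returns eigenvalues in ascending order
```

For a connected graph the smallest Laplacian eigenvalue is zero, so the second-smallest one is the algebraic connectivity; per the abstract, larger values would indicate an easier problem instance.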
Related papers
- Disentangled Representation Learning with the Gromov-Monge Gap [65.73194652234848]
Learning disentangled representations from unlabelled data is a fundamental challenge in machine learning.
We introduce a novel approach to disentangled representation learning based on quadratic optimal transport.
We demonstrate the effectiveness of our approach for quantifying disentanglement across four standard benchmarks.
arXiv Detail & Related papers (2024-07-10T16:51:32Z) - Optimal minimax rate of learning interaction kernels [7.329333373512536]
We introduce a tamed least squares estimator (tLSE) that attains the optimal convergence rate for a broad class of exchangeable distributions.
Our findings reveal that, provided the inverse problem satisfies a coercivity condition in the large-sample limit, the left tail probability does not alter the bias-variance tradeoff.
arXiv Detail & Related papers (2023-11-28T15:01:58Z) - Estimation of entropy-regularized optimal transport maps between
non-compactly supported measures [15.857723276537248]
This paper addresses the problem of estimating entropy-regularized optimal transport maps with squared-Euclidean cost between source and target measures that are subGaussian.
arXiv Detail & Related papers (2023-11-20T17:18:21Z) - Extragradient Type Methods for Riemannian Variational Inequality Problems [25.574847201669144]
We show that the average-iterate convergence rate of both REG and RPEG is $O\left(\frac{1}{T}\right)$, aligning with the rate in the Euclidean case.
These results are enabled by judiciously addressing the holonomy effect so that the additional terms it introduces can be controlled.
arXiv Detail & Related papers (2023-09-25T14:08:02Z) - Vanishing Point Estimation in Uncalibrated Images with Prior Gravity
Direction [82.72686460985297]
We tackle the problem of estimating a Manhattan frame.
We derive two new 2-line solvers, one of which does not suffer from singularities affecting existing solvers.
We also design a new non-minimal method, running on an arbitrary number of lines, to boost the performance in local optimization.
arXiv Detail & Related papers (2023-08-21T13:03:25Z) - Decentralized Riemannian Algorithm for Nonconvex Minimax Problems [82.50374560598493]
Minimax algorithms have been developed to solve many machine learning problems.
In this paper, we propose two types of decentralized minimax algorithms.
For the stochastic setting, we propose DRSGDA and establish gradient-complexity guarantees for it.
arXiv Detail & Related papers (2023-02-08T01:42:45Z) - Decentralized Stochastic Gradient Descent Ascent for Finite-Sum Minimax Problems [26.676582181833584]
Minimax problems have attracted significant attention in recent years due to their widespread application in numerous machine learning models.
We develop a novel decentralized stochastic gradient descent ascent method for finite-sum minimax problems.
Our work is the first to achieve such theoretical complexities for this kind of minimax problem.
arXiv Detail & Related papers (2022-12-06T03:25:44Z) - Sion's Minimax Theorem in Geodesic Metric Spaces and a Riemannian
Extragradient Algorithm [46.97925335733651]
This paper takes a step towards understanding minimax problems that do remain tractable.
The first result is a geodesic metric space version of Sion's minimax theorem.
The second result is a Riemannian extragradient algorithm with complexity guarantees on geodesically complete Riemannian manifolds.
arXiv Detail & Related papers (2022-02-13T15:18:07Z) - Minimax Optimization: The Case of Convex-Submodular [50.03984152441271]
Minimax problems extend beyond the continuous domain to mixed continuous-discrete domains or even fully discrete domains.
We introduce the class of convex-submodular minimax problems, where the objective is convex with respect to the continuous variable and submodular with respect to the discrete variable.
Our proposed algorithms are iterative and combine tools from both discrete and continuous optimization.
arXiv Detail & Related papers (2021-11-01T21:06:35Z) - Instance-optimality in optimal value estimation: Adaptivity via
variance-reduced Q-learning [99.34907092347733]
We analyze the problem of estimating optimal $Q$-value functions for a discounted Markov decision process with discrete states and actions.
Using a local minimax framework, we show that this functional arises in lower bounds on the accuracy of any estimation procedure.
In the other direction, we establish the sharpness of our lower bounds, up to factors logarithmic in the state and action spaces, by analyzing a variance-reduced version of $Q$-learning.
arXiv Detail & Related papers (2021-06-28T00:38:54Z) - Deep S$^3$PR: Simultaneous Source Separation and Phase Retrieval Using
Deep Generative Models [61.508068988778476]
This paper introduces and solves the source separation and phase retrieval (S$^3$PR) problem.
S$^3$PR is an important but largely unsolved problem in many application domains, including microscopy, wireless communication, and imaging through scattering media.
arXiv Detail & Related papers (2020-02-14T03:20:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.