Sparse resultant based minimal solvers in computer vision and their
connection with the action matrix
- URL: http://arxiv.org/abs/2301.06443v2
- Date: Fri, 1 Sep 2023 12:38:13 GMT
- Title: Sparse resultant based minimal solvers in computer vision and their
connection with the action matrix
- Authors: Snehal Bhayani, Janne Heikkilä, Zuzana Kukelova
- Abstract summary: We show that for some camera geometry problems our extra polynomial-based method leads to smaller and more stable solvers than the state-of-the-art Gröbner basis-based solvers.
It provides a competitive alternative to popular Gröbner basis-based methods for minimal problems in computer vision.
- Score: 17.31412310131552
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many computer vision applications require robust and efficient estimation of
camera geometry from a minimal number of input data measurements, i.e., solving
minimal problems in a RANSAC framework. Minimal problems are usually formulated
as complex systems of sparse polynomials. These systems are usually
overdetermined and consist of polynomials with algebraically constrained
coefficients. Most state-of-the-art efficient polynomial solvers are based on
the action matrix method that has been automated and highly optimized in recent
years. On the other hand, the alternative theory of sparse resultants and
Newton polytopes has been less successful for generating efficient solvers,
primarily because the polytopes do not respect the constraints on the
coefficients. Therefore, in this paper, we propose a simple iterative scheme to
test various subsets of the Newton polytopes and search for the most efficient
solver. Moreover, we propose to use an extra polynomial with a special form to
further improve the solver efficiency via a Schur complement computation. We
show that for some camera geometry problems our extra polynomial-based method
leads to smaller and more stable solvers than the state-of-the-art Gröbner
basis-based solvers. The proposed method can be fully automated and
incorporated into existing tools for automatic generation of efficient
polynomial solvers. It provides a competitive alternative to popular Gröbner
basis-based methods for minimal problems in computer vision. We also study the
conditions under which the minimal solvers generated by the state-of-the-art
action matrix-based methods and the proposed extra polynomial resultant-based
method are equivalent. Specifically, we consider a step-by-step comparison
between the approaches based on the action matrix and the sparse resultant,
followed by a set of substitutions, which would lead to equivalent minimal
solvers.
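The action matrix at the heart of these solvers can be viewed as a multivariate generalization of the companion matrix: the solutions of the polynomial system appear as eigenvalues of the matrix of multiplication by a chosen monomial on the quotient ring. A minimal univariate sketch using numpy (illustrative only, not the authors' generated solvers):

```python
import numpy as np

def companion_roots(coeffs):
    """Roots of a monic polynomial via its companion matrix.

    coeffs: [c0, c1, ..., c_{n-1}] for p(x) = x^n + c_{n-1} x^{n-1} + ... + c0.
    The companion matrix is the action matrix of multiplication-by-x on the
    quotient ring C[x]/(p); its eigenvalues are exactly the roots of p.
    """
    n = len(coeffs)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)       # subdiagonal: shifts basis 1, x, ..., x^{n-2}
    C[:, -1] = -np.asarray(coeffs)   # last column: reduces x^n modulo p
    return np.linalg.eigvals(C)

# Example: p(x) = x^2 - 3x + 2 = (x - 1)(x - 2), so coeffs = [c0, c1] = [2, -3].
roots = np.sort(companion_roots([2.0, -3.0]).real)
```

In the multivariate case, the action matrix method builds an analogous multiplication matrix from an elimination template and reads solutions off its eigenvectors; the eigenvalue step above carries over unchanged.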
Related papers
- A practical, fast method for solving sum-of-squares problems for very large polynomials [10.645318208507213]
Sum of squares (SOS) optimization is a powerful technique for solving problems where the positivity of polynomials must be enforced.
Our goal is to devise an approach that can handle larger, more complex problems than is currently possible.
arXiv Detail & Related papers (2024-10-21T12:47:42Z) - Automatic Solver Generator for Systems of Laurent Polynomial Equations [1.7188280334580197]
Given a family of (Laurent) polynomial systems with the same monomial structure but varying coefficients, the goal is to find a solver that computes solutions for any member of the family as fast as possible.
We propose an automatic solver generator for systems of Laurent polynomial equations.
arXiv Detail & Related papers (2023-07-01T12:12:52Z) - Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling the differentiable and non-differentiable parts separately, linearizing only the smooth parts.
arXiv Detail & Related papers (2023-02-24T18:41:48Z) - Sparse Polynomial Optimization: Theory and Practice [5.27013884159732]
The book presents several efforts to tackle this challenge, with important scientific implications.
It provides alternative optimization schemes that scale well in terms of computational complexity.
We present sparsity-exploiting hierarchies of relaxations, for either unconstrained or constrained problems.
arXiv Detail & Related papers (2022-08-23T18:56:05Z) - Weighted Low Rank Matrix Approximation and Acceleration [0.5177947445379687]
Low-rank matrix approximation is one of the central concepts in machine learning.
Low-rank matrix completion (LRMC) solves the LRMA problem when some observations are missing.
We propose an algorithm for solving the weighted problem, as well as two acceleration techniques.
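As background to the summary above, the unweighted low-rank approximation problem has a closed-form solution via the truncated SVD (Eckart-Young theorem); the weighted variant studied in the paper lacks such a closed form and requires iterative algorithms. A minimal sketch of the unweighted case:

```python
import numpy as np

def truncated_svd_approx(A, r):
    """Best rank-r approximation of A in the Frobenius norm (Eckart-Young),
    obtained by keeping the r largest singular triplets. The weighted LRMA
    problem from the paper has no such closed form in general."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

# A matrix that is exactly rank 2 is recovered exactly by its rank-2 truncation.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 5))
A_hat = truncated_svd_approx(A, 2)
```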
arXiv Detail & Related papers (2021-09-22T22:03:48Z) - Optimal Gradient-based Algorithms for Non-concave Bandit Optimization [76.57464214864756]
This work considers a large family of bandit problems where the unknown underlying reward function is non-concave.
Our algorithms are based on a unified zeroth-order optimization paradigm that applies in great generality.
We show that the standard optimistic algorithms are sub-optimal by dimension factors.
arXiv Detail & Related papers (2021-07-09T16:04:24Z) - Solving weakly supervised regression problem using low-rank manifold
regularization [77.34726150561087]
We solve a weakly supervised regression problem.
By "weakly" we mean that for some training points the labels are known, for some they are unknown, and for others they are uncertain due to random noise or other causes such as a lack of resources.
In the numerical section, we applied the suggested method to artificial and real datasets using Monte-Carlo modeling.
arXiv Detail & Related papers (2021-04-13T23:21:01Z) - Hybrid Trilinear and Bilinear Programming for Aligning Partially
Overlapping Point Sets [85.71360365315128]
In many applications, we need algorithms which can align partially overlapping point sets and are invariant to the corresponding transformations.
We first show that the objective is a cubic polynomial function. We then utilize the convex envelopes of trilinear and bilinear monomials to derive its lower bound.
We next develop a branch-and-bound (BnB) algorithm which only branches over the transformation variables and runs efficiently.
arXiv Detail & Related papers (2021-01-19T04:24:23Z) - Computational Barriers to Estimation from Low-Degree Polynomials [81.67886161671379]
We study the power of low-degree polynomials for the task of detecting the presence of hidden structures.
For a large class of "signal plus noise" problems, we give a user-friendly lower bound for the best possible mean squared error achievable by any low-degree polynomial.
As applications, we give a tight characterization of the low-degree minimum mean squared error for the planted submatrix and planted dense subgraph problems.
arXiv Detail & Related papers (2020-08-05T17:52:10Z) - Computing stable resultant-based minimal solvers by hiding a variable [20.402488757146692]
Computer vision applications require robust estimation of camera geometry from a minimal number of input data measurements.
In this paper, we study an interesting alternative sparse resultant-based method for solving sparse systems of polynomial equations by hiding one variable.
We show that for the studied problems, our resultant-based approach leads to more stable solvers than the state-of-the-art Gröbner basis-based solvers.
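The hidden-variable trick this summary refers to can be illustrated on a toy system: treat one unknown as a parameter, eliminate the remaining unknowns with a resultant, and recover the hidden unknown as a root of the resulting univariate polynomial. A symbolic sketch with SymPy (the paper's solvers are numerical; this only illustrates the elimination step):

```python
import sympy as sp

x, y = sp.symbols('x y')

# Toy system: x^2 + y^2 - 1 = 0 and x - y = 0.
f = x**2 + y**2 - 1
g = x - y

# "Hide" y: view f and g as polynomials in x with coefficients depending on y,
# and eliminate x with the Sylvester resultant.
res_y = sp.expand(sp.resultant(f, g, x))   # a univariate polynomial in y

# The hidden variable is recovered from the roots of the resultant;
# the remaining unknowns then follow by back-substitution (here x = y).
y_vals = sp.solve(res_y, y)
```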
arXiv Detail & Related papers (2020-07-17T07:40:10Z) - Conditional gradient methods for stochastically constrained convex
minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
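As background, a classical deterministic conditional gradient (Frank-Wolfe) iteration, which the paper's variants extend with stochastic constraint handling, variance reduction, and smoothing, can be sketched as follows for minimization over the probability simplex:

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, steps=200):
    """Classical (deterministic) conditional gradient on the probability
    simplex. Each iteration calls a linear minimization oracle instead of
    a projection; the paper's methods add stochastic constraint processing,
    variance reduction, and smoothing on top of steps like this one."""
    x = x0.copy()
    for t in range(steps):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0            # LMO over the simplex: best vertex
        x += 2.0 / (t + 2.0) * (s - x)   # standard open-loop step size 2/(t+2)
    return x

# Example: minimize ||x - b||^2 over the simplex, with b itself in the simplex.
b = np.array([0.2, 0.5, 0.3])
x_star = frank_wolfe_simplex(lambda x: 2.0 * (x - b), np.array([1.0, 0.0, 0.0]))
```

Because every iterate is a convex combination of simplex vertices, feasibility is maintained for free, which is the main appeal of conditional gradient methods when projections are expensive.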
arXiv Detail & Related papers (2020-07-07T21:26:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.