Learning to Solve Hard Minimal Problems
- URL: http://arxiv.org/abs/2112.03424v1
- Date: Mon, 6 Dec 2021 23:51:20 GMT
- Title: Learning to Solve Hard Minimal Problems
- Authors: Petr Hruby, Timothy Duff, Anton Leykin, Tomas Pajdla
- Abstract summary: We present an approach to solving hard geometric optimization problems in the RANSAC framework.
The hard minimal problems arise from relaxing the original geometric optimization problem into a minimal problem with many spurious solutions.
- Score: 6.117371161379209
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present an approach to solving hard geometric optimization problems in the
RANSAC framework. The hard minimal problems arise from relaxing the original
geometric optimization problem into a minimal problem with many spurious
solutions. Our approach avoids computing large numbers of spurious solutions.
We design a learning strategy for selecting a starting problem-solution pair
that can be numerically continued to the problem and the solution of interest.
We demonstrate our approach by developing a RANSAC solver for the problem of
computing the relative pose of three calibrated cameras, via a minimal
relaxation using four points in each view. On average, we can solve a single
problem in under 70 $\mu s$. We also benchmark and study our engineering
choices on the very familiar problem of computing the relative pose of two
calibrated cameras, via the minimal case of five points in two views.
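The continuation step at the heart of this approach can be illustrated on a toy one-dimensional example. The sketch below tracks a root of a start problem `g` to a root of a target problem `h` along a straight-line homotopy, using an Euler predictor and a Newton corrector (the problem pair and all names are hypothetical illustrations, not the authors' solver):

```python
import numpy as np

# Toy homotopy continuation: track a root of h(x) = x**2 - 4 starting from
# the known root x0 = 1.0 of g(x) = x**2 - 1, via the straight-line
# homotopy H(x, t) = (1 - t) * g(x) + t * h(x).

def g(x): return x**2 - 1.0
def h(x): return x**2 - 4.0

def H(x, t):     return (1.0 - t) * g(x) + t * h(x)
def dH_dx(x, t): return 2.0 * x       # g and h share the same derivative
def dH_dt(x, t): return h(x) - g(x)   # = -3 for this pair

def continue_root(x0, steps=100, newton_iters=5):
    """Predictor-corrector path tracking from t = 0 to t = 1."""
    x, t = x0, 0.0
    dt = 1.0 / steps
    for _ in range(steps):
        # Euler predictor: dx/dt = -H_t / H_x along the path H(x(t), t) = 0
        x = x + dt * (-dH_dt(x, t) / dH_dx(x, t))
        t = t + dt
        # Newton corrector: re-solve H(x, t) = 0 at the new t
        for _ in range(newton_iters):
            x = x - H(x, t) / dH_dx(x, t)
    return x

root = continue_root(1.0)
print(root)  # ≈ 2.0, the root of h connected to the start root of g
```

The learning component of the paper addresses exactly the part this toy skips: choosing a start pair whose continuation path actually reaches the solution of interest, rather than a spurious one.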
Related papers
- BalMCTS: Balancing Objective Function and Search Nodes in MCTS for
Constraint Optimization Problems [7.196057722218442]
Constraint Optimization Problems (COP) pose intricate challenges and are usually addressed through Branch and Bound (B&B) methods.
We propose a novel neural network algorithm based on depth-first search for solving COP.
Our method identifies feasible solutions with a gap of less than 17.63% within the first 5 feasible solutions found.
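For context, the depth-first B&B scheme this entry builds on can be sketched on a toy 0/1 knapsack instance (illustrative only; not the BalMCTS algorithm):

```python
def knapsack_bb(values, weights, cap):
    """Depth-first branch and bound for 0/1 knapsack: a classic B&B scheme,
    pruning with a fractional (LP-relaxation) upper bound."""
    n = len(values)
    # Sort items by value density so the fractional bound is the greedy fill
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    v = [values[i] for i in order]
    w = [weights[i] for i in order]
    best = 0

    def bound(i, val, room):
        # Upper bound: greedily fill remaining capacity, last item fractionally
        for j in range(i, n):
            if w[j] <= room:
                room -= w[j]; val += v[j]
            else:
                return val + v[j] * room / w[j]
        return val

    def dfs(i, val, room):
        nonlocal best
        best = max(best, val)
        if i == n or bound(i, val, room) <= best:
            return  # leaf reached, or subtree cannot beat the incumbent
        if w[i] <= room:
            dfs(i + 1, val + v[i], room - w[i])  # branch: take item i
        dfs(i + 1, val, room)                    # branch: skip item i

    dfs(0, 0, cap)
    return best

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))  # 220
```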
arXiv Detail & Related papers (2023-12-26T03:09:08Z) - Optimizing Solution-Samplers for Combinatorial Problems: The Landscape
of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-Bipartite-Bi, Maximum-Weight-Bipartite-Bi, and Traveling Salesman Problem.
As a byproduct of our analysis we introduce a novel regularization process over vanilla descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z) - Globally solving the Gromov-Wasserstein problem for point clouds in low
dimensional Euclidean spaces [5.534626267734822]
This paper presents a framework for computing the Gromov-Wasserstein problem between two sets of points in low dimensional spaces.
It can be used to quantify the similarity between two formations or shapes, a common problem in AI and machine learning.
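As a rough illustration of the objective being solved (not the paper's global solver), the squared-loss Gromov-Wasserstein cost of a given coupling can be evaluated via the standard expansion into matrix products:

```python
import numpy as np

def gw_objective(C1, C2, T):
    """Squared-loss Gromov-Wasserstein objective of a coupling T:
    sum_{i,j,k,l} (C1[i,k] - C2[j,l])**2 * T[i,j] * T[k,l]."""
    # Expand the square into C1**2, C2**2, and a cross term, which keeps
    # the evaluation at matrix-product cost instead of a 4-index sum.
    p = T.sum(axis=1)                  # marginal on the first point set
    q = T.sum(axis=0)                  # marginal on the second point set
    term1 = (C1**2 @ p) @ p
    term2 = (C2**2 @ q) @ q
    cross = np.sum((C1 @ T @ C2.T) * T)
    return term1 + term2 - 2.0 * cross

# Two congruent 3-point "shapes": identical distance matrices, so the
# identity coupling has objective 0; scaling one shape makes it positive.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
C = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
T_id = np.eye(3) / 3.0
print(gw_objective(C, C, T_id))      # 0.0 up to floating-point error
print(gw_objective(C, 2 * C, T_id))  # strictly positive
```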
arXiv Detail & Related papers (2023-07-18T08:20:56Z) - Relative pose of three calibrated and partially calibrated cameras from
four points using virtual correspondences [68.46101050286173]
We study challenging problems of estimating the relative pose of three cameras.
Our solutions are based on the simple idea of generating one or two additional virtual point correspondences in two views.
Our solvers achieve state-of-the-art results on real data.
arXiv Detail & Related papers (2023-03-28T15:50:48Z) - Learning to repeatedly solve routing problems [5.08128537391027]
We present a learned method for the reoptimization of a problem after a minor change in its data.
Given the edges of an original solution, the goal is to predict and fix the ones that have a high chance of remaining in an optimal solution.
This partial prediction of the solution reduces the complexity of the problem and speeds up its resolution.
arXiv Detail & Related papers (2022-12-15T19:33:54Z) - Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of problems.
We show that for weakly-convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z) - Quadratic Unconstrained Binary Optimisation via Quantum-Inspired
Annealing [58.720142291102135]
We present a classical algorithm to find approximate solutions to instances of quadratic unconstrained binary optimisation.
We benchmark our approach for large scale problem instances with tuneable hardness and planted solutions.
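A plain classical baseline for this setting is single-spin-flip simulated annealing on a QUBO instance with a planted optimum (a generic sketch, not the paper's quantum-inspired algorithm; `qubo_anneal` and its parameters are illustrative):

```python
import numpy as np

def qubo_anneal(Q, steps=20000, T0=2.0, T1=1e-3, rng=None):
    """Single-spin-flip simulated annealing for min x^T Q x, x in {0,1}^n."""
    rng = np.random.default_rng(rng)
    n = Q.shape[0]
    x = rng.integers(0, 2, n)
    S = Q + Q.T                          # symmetrized for the flip delta
    energy = x @ Q @ x
    best_x, best_e = x.copy(), energy
    for k in range(steps):
        T = T0 * (T1 / T0) ** (k / steps)    # geometric cooling schedule
        i = rng.integers(n)
        # Exact energy change of flipping bit i (d = +/-1, d**2 = 1):
        # E(x + d e_i) - E(x) = d * ((Q + Q^T) x)_i + Q_ii
        d = 1 - 2 * x[i]
        delta = d * (S @ x)[i] + Q[i, i]
        if delta <= 0 or rng.random() < np.exp(-delta / T):
            x[i] = 1 - x[i]
            energy += delta
            if energy < best_e:
                best_e, best_x = energy, x.copy()
    return best_x, best_e

# Toy instance with planted optimum x = (1, 1, 0): reward bits 0 and 1,
# penalize bit 2; optimal energy is -2.
Q = np.array([[-1.0, 0.0, 0.0],
              [0.0, -1.0, 0.0],
              [0.0, 0.0, 2.0]])
x, e = qubo_anneal(Q, rng=0)
print(x, e)
```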
arXiv Detail & Related papers (2021-08-18T09:26:17Z) - Follow the bisector: a simple method for multi-objective optimization [65.83318707752385]
We consider optimization problems, where multiple differentiable losses have to be minimized.
The presented method computes a descent direction at every iteration that guarantees an equal relative decrease of the objective functions.
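The equal-relative-decrease idea admits a simple sketch: normalize each gradient and sum, giving a direction whose inner product with every unit gradient is the same, so a step against it decreases each loss at the same first-order rate per unit gradient norm (illustrative; not the paper's exact update):

```python
import numpy as np

def bisector_direction(grads, eps=1e-12):
    """Bisector of the normalized gradients: stepping along -d decreases
    every loss, and at the same first-order rate per unit gradient norm."""
    units = [g / (np.linalg.norm(g) + eps) for g in grads]
    return sum(units)

# Two quadratic losses: f1(x) = x0**2, f2(x) = (x0 - 2)**2 + x1**2
x = np.array([1.5, 1.0])
g1 = np.array([2 * x[0], 0.0])
g2 = np.array([2 * (x[0] - 2), 2 * x[1]])
d = bisector_direction([g1, g2])
# -d is a common descent direction: d has positive inner product with
# both gradients, and equal inner product with both *unit* gradients.
print(g1 @ d, g2 @ d)
```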
arXiv Detail & Related papers (2020-07-14T09:50:33Z) - Taxonomy of Dual Block-Coordinate Ascent Methods for Discrete Energy
Minimization [96.1052289276254]
We consider the maximum-a-posteriori inference problem in discrete graphical models and study solvers based on the dual block-coordinate ascent rule.
We map all existing solvers in a single framework, allowing for a better understanding of their design principles.
arXiv Detail & Related papers (2020-04-16T15:49:13Z) - The Importance of Good Starting Solutions in the Minimum Sum of Squares
Clustering Problem [0.0]
The clustering problem has many applications in Machine Learning, Operations Research, and Statistics.
We propose three algorithms to create starting solutions for improvement algorithms for this problem.
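One classic way to build a good starting solution for minimum-sum-of-squares clustering is k-means++ seeding, shown below for illustration (not necessarily one of the paper's three proposed algorithms):

```python
import numpy as np

def kmeanspp_init(X, k, rng=None):
    """k-means++ seeding: pick the first center uniformly, then each next
    center with probability proportional to squared distance from the
    nearest center already chosen."""
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    centers = [X[rng.integers(n)]]
    for _ in range(k - 1):
        # Squared distance of every point to its nearest chosen center
        d2 = np.min([np.sum((X - c)**2, axis=1) for c in centers], axis=0)
        probs = d2 / d2.sum()
        centers.append(X[rng.choice(n, p=probs)])
    return np.array(centers)

# Two well-separated blobs around 0 and 5: the d**2-weighted sampling makes
# the second seed land in the blob the first seed missed with high probability.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (50, 2)),
               rng.normal(5, 0.1, (50, 2))])
centers = kmeanspp_init(X, 2, rng=1)
print(centers)
```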
arXiv Detail & Related papers (2020-04-06T22:13:41Z) - Optimal least-squares solution to the hand-eye calibration problem [3.6525095710982916]
We propose a least-squares formulation of the noisy hand-eye calibration problem using dual quaternions.
We introduce efficient algorithms to find the exact optimal solution, based on analytic properties of the problem, avoiding non-linear optimization.
arXiv Detail & Related papers (2020-02-25T12:59:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.