Online Stability Improvement of Groebner Basis Solvers using Deep Learning
- URL: http://arxiv.org/abs/2401.09328v1
- Date: Wed, 17 Jan 2024 16:51:28 GMT
- Title: Online Stability Improvement of Groebner Basis Solvers using Deep Learning
- Authors: Wanting Xu, Lan Hu, Manolis C. Tsakiris and Laurent Kneip
- Abstract summary: We show that different variable or monomial orderings lead to different elimination templates.
We then prove that the original set of coefficients may contain sufficient information to train a classifier for online selection of a good solver.
- Score: 29.805408457645886
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over the past decade, Gröbner basis theory and automatic solver generation have led to a large number of solutions to geometric vision problems. In practically all cases, the derived solvers apply a fixed elimination template to calculate the Gröbner basis and thereby identify the zero-dimensional variety of the original polynomial constraints. However, different variable or monomial orderings lead to different elimination templates, and we show that they may present a large variability in accuracy for a certain instance of a problem. The present paper makes two contributions. We first show that for a common class of problems in geometric vision, variable reordering simply translates into a permutation of the columns of the initial coefficient matrix, and that, as a result, one and the same elimination template can be reused in different ways, each one leading to potentially different accuracy. We then prove that the original set of coefficients may contain sufficient information to train a classifier for online selection of a good solver, most notably at the cost of only a small computational overhead. We demonstrate wide applicability on generic dense polynomial problem solvers as well as on a concrete solver from geometric vision.
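The two contributions lend themselves to a compact sketch: a variable reordering acts on the initial coefficient matrix purely as a column permutation, so one fixed elimination template can be evaluated under several permutations, and a lightweight classifier trained on the raw coefficients picks which permutation to use online. A minimal illustration in Python, where solve_with_template and the trained classifier are hypothetical stand-ins, not the paper's actual code:

```python
import numpy as np

def permute_columns(C, perm):
    # A variable reordering acts on the initial coefficient
    # matrix C only as a permutation of its columns.
    return C[:, perm]

def select_and_solve(C, perms, classifier, solve_with_template):
    """Online solver selection (sketch, not the paper's exact method).

    classifier:          trained selector mapping coefficients to a
                         permutation index (hypothetical, e.g. an MLP)
    solve_with_template: the single generated elimination-template
                         solver (hypothetical stand-in)
    """
    # The classifier only sees the original coefficients, so the
    # online overhead is one small forward pass.
    idx = int(classifier.predict(C.reshape(1, -1))[0])
    return solve_with_template(permute_columns(C, perms[idx]))
```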
Related papers
- Disentangled Representation Learning with the Gromov-Monge Gap [65.73194652234848]
Learning disentangled representations from unlabelled data is a fundamental challenge in machine learning.
We introduce a novel approach to disentangled representation learning based on quadratic optimal transport.
We demonstrate the effectiveness of our approach for quantifying disentanglement across four standard benchmarks.
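For readers unfamiliar with quadratic optimal transport: it compares intra-space cost structures rather than individual points, as in the Gromov-Wasserstein coupling below. A minimal sketch using the POT library (our illustration; the paper's training objective built on this is more involved):

```python
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))  # e.g. data-space samples
Y = rng.normal(size=(50, 3))  # e.g. latent-space samples

# Quadratic OT compares the intra-space distance structures.
C1, C2 = ot.dist(X, X), ot.dist(Y, Y)
p, q = ot.unif(50), ot.unif(50)

T, log = ot.gromov.gromov_wasserstein(C1, C2, p, q, 'square_loss', log=True)
print("Gromov-Wasserstein distance:", log['gw_dist'])
```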
arXiv Detail & Related papers (2024-07-10T16:51:32Z) - Covering Number of Real Algebraic Varieties and Beyond: Improved Bounds and Applications [8.438718130535296]
We prove upper bounds on the covering number of real algebraic varieties and semialgebraic sets in Euclidean space.
We show that these bounds improve the best known general bound due to Yomdin and Comte.
We illustrate the power of the result on three computational applications.
arXiv Detail & Related papers (2023-11-09T03:06:59Z) - Automatic Solver Generator for Systems of Laurent Polynomial Equations [1.7188280334580197]
Given a family of (Laurent) polynomial systems with the same monomial structure but varying coefficients, the goal is to find a solver that computes solutions for any member of the family as fast as possible.
We propose an automatic solver generator for systems of Laurent polynomial equations.
arXiv Detail & Related papers (2023-07-01T12:12:52Z) - Sparse resultant based minimal solvers in computer vision and their connection with the action matrix [17.31412310131552]
We show that for some camera geometry problems our sparse resultant-based method leads to smaller and more stable solvers than the state-of-the-art Gröbner basis-based solvers.
It provides a competitive alternative to popular Gröbner basis-based methods for minimal problems in computer vision.
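The action matrix connection referenced here is standard: once a solver (resultant- or Gröbner basis-based) yields the multiplication matrix of a variable in the quotient ring, solutions are read off its eigendecomposition. A generic numpy sketch, not the paper's specific construction:

```python
import numpy as np

def roots_from_action_matrix(M):
    """Eigenvalues of the action (multiplication) matrix of a variable
    are that variable's values at the solutions; the eigenvectors carry
    the remaining coordinates in the monomial basis."""
    w, V = np.linalg.eig(M)
    return w, V / V[-1, :]  # normalize so the constant monomial is 1

# Example: x^2 - 3x + 2 = 0, action of x in the basis {x, 1}.
M = np.array([[3.0, -2.0],   # x * x = 3x - 2
              [1.0,  0.0]])  # x * 1 = x
w, _ = roots_from_action_matrix(M)
print(np.sort(w.real))       # -> [1. 2.]
```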
arXiv Detail & Related papers (2023-01-16T14:25:19Z) - Graph Polynomial Convolution Models for Node Classification of Non-Homophilous Graphs [52.52570805621925]
We investigate efficient learning from higher-order graph convolution and learning directly from the adjacency matrix for node classification.
We show that the resulting models lead to new graph convolution architectures with a residual scaling parameter.
We demonstrate that the proposed methods obtain improved accuracy for node classification of non-homophilous graphs.
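A polynomial graph convolution can be expressed directly with powers of the normalized adjacency matrix; a minimal numpy sketch with an illustrative residual scaling parameter alpha (names and exact form are ours, not the paper's):

```python
import numpy as np

def poly_graph_conv(A, X, weights, alpha=0.5):
    """y = alpha * X + (1 - alpha) * sum_k A^k X W_k  (sketch).

    A:       (n, n) normalized adjacency matrix
    X:       (n, d) node features (float)
    weights: list of (d, d) matrices, one per power of A
    alpha:   illustrative residual scaling parameter
    """
    out = np.zeros_like(X)
    P = np.eye(A.shape[0])  # A^0
    for W in weights:
        out += P @ X @ W
        P = P @ A           # next power of A
    return alpha * X + (1.0 - alpha) * out
```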
arXiv Detail & Related papers (2022-09-12T04:46:55Z) - Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
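One such backprop-optimized message passing step can be sketched in numpy; the MLPs stand in for the learned message and update functions (shapes and names are our assumptions):

```python
import numpy as np

def mlp(x, W1, b1, W2, b2):
    # Tiny two-layer network standing in for a learned component.
    return np.maximum(x @ W1 + b1, 0.0) @ W2 + b2

def message_passing_step(h, edges, theta_msg, theta_upd):
    """h: (n, d) node states (e.g. PDE solution values at grid nodes);
    edges: directed (i, j) pairs; theta_*: MLP parameter tuples with
    input width 2d and output width d."""
    agg = np.zeros_like(h)
    for i, j in edges:
        # Learned message from node j to node i.
        agg[i] += mlp(np.concatenate([h[i], h[j]]), *theta_msg)
    # Learned node update from current state and aggregated messages.
    return np.stack([mlp(np.concatenate([h[i], agg[i]]), *theta_upd)
                     for i in range(len(h))])
```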
arXiv Detail & Related papers (2022-02-07T17:47:46Z) - Matrix-wise $\ell_0$-constrained Sparse Nonnegative Least Squares [22.679160149512377]
Nonnegative least squares problems with multiple right-hand sides (MNNLS) arise in models that rely on additive linear combinations.
We introduce a novel formulation for sparse MNNLS, with a matrix-wise sparsity constraint.
We show that our proposed two-step approach provides more accurate results than state-of-the-art sparse coding methods applied both column-wise and globally.
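A simplified two-step scheme in the spirit described here: per-column NNLS first, then a matrix-wise l0 selection keeping only the k largest entries (our simplification; the paper's second step is a dedicated selection algorithm):

```python
import numpy as np
from scipy.optimize import nnls

def matrix_wise_sparse_nnls(A, B, k):
    """Solve min ||AX - B|| s.t. X >= 0 with at most ~k nonzeros in X.

    Step 1: independent NNLS per right-hand-side column of B.
    Step 2: enforce the matrix-wise l0 budget by keeping the k
            largest entries of X and zeroing the rest.
    """
    X = np.column_stack([nnls(A, B[:, j])[0] for j in range(B.shape[1])])
    if np.count_nonzero(X) > k:
        thresh = np.sort(X, axis=None)[-k]  # k-th largest value
        X[X < thresh] = 0.0
    return X
```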
arXiv Detail & Related papers (2020-11-22T17:21:16Z) - Boosting Data Reduction for the Maximum Weight Independent Set Problem Using Increasing Transformations [59.84561168501493]
We introduce new generalized data reduction and transformation rules for the maximum weight independent set problem.
Surprisingly, these so-called increasing transformations can simplify the problem and also open up the reduction space to yield even smaller irreducible graphs later in the algorithm.
Our algorithm computes significantly smaller irreducible graphs on all except one instance, solves more instances to optimality than previously possible, is up to two orders of magnitude faster than the best state-of-the-art solver, and finds higher-quality solutions than solvers DynWVC and HILS.
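As a flavor of such data reduction (a classical rule, not one of the paper's new increasing transformations): a vertex that outweighs its whole neighborhood can be forced into the solution and removed together with its neighbors.

```python
def neighborhood_weight_reduction(adj, w):
    """Classical MWIS reduction: if w[v] >= sum of w over N(v), some
    maximum weight independent set contains v, so force v into the
    solution and delete v and N(v) from the graph.

    adj: dict vertex -> set of neighbors (mutated in place)
    w:   dict vertex -> nonnegative weight (mutated in place)
    """
    forced = []
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if v in adj and w[v] >= sum(w[u] for u in adj[v]):
                forced.append(v)
                removal = adj[v] | {v}
                for u in removal:             # unlink from survivors
                    for x in adj[u] - removal:
                        adj[x].discard(u)
                for u in removal:             # delete v and N(v)
                    del adj[u], w[u]
                changed = True
                break
    return forced
```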
arXiv Detail & Related papers (2020-08-12T08:52:50Z) - Computing stable resultant-based minimal solvers by hiding a variable [20.402488757146692]
Computer vision applications require robust estimation of camera geometry from a minimal number of input data measurements.
In this paper, we study an interesting alternative sparse resultant-based method for solving sparse systems of polynomial equations by hiding one variable.
We show that for the studied problems, our resultant-based approach leads to more stable solvers than the state-of-the-art Gröbner basis-based solvers.
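Hiding a variable turns the system into a polynomial eigenvalue problem in that variable, solvable by linearization; a generic sketch for a quadratic matrix polynomial C(x) = C0 + x*C1 + x^2*C2 (standard companion linearization, not the paper's tailored solver):

```python
import numpy as np
from scipy.linalg import eig

def hidden_variable_solve(C0, C1, C2):
    """Find x with det(C0 + x*C1 + x^2*C2) = 0 via linearization:
    with z = [v; x v], the pencil (A, B) below satisfies A z = x B z
    exactly when (C0 + x*C1 + x^2*C2) v = 0."""
    n = C0.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    A = np.block([[Z, I], [-C0, -C1]])
    B = np.block([[I, Z], [Z, C2]])
    x, V = eig(A, B)        # generalized eigenvalues of (A, B)
    return x, V[:n, :]      # top block carries the monomial vector v
```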
arXiv Detail & Related papers (2020-07-17T07:40:10Z) - Competitive Mirror Descent [67.31015611281225]
Constrained competitive optimization involves multiple agents trying to minimize conflicting objectives, subject to constraints.
We propose competitive mirror descent (CMD): a general method for solving such problems based on first-order information.
As a special case we obtain a novel competitive multiplicative weights algorithm for problems on the positive cone.
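The positive-cone special case is easy to make concrete: mirror descent with the entropic mirror map yields a multiplicative weights update. A single-player numpy sketch (the competitive setting couples such updates across agents):

```python
import numpy as np

def mw_step(x, grad, eta=0.1):
    # Entropic mirror descent on the positive cone:
    # the multiplicative weights update.
    return x * np.exp(-eta * grad)

# Toy usage: minimize <c, x> + sum(x log x) over x > 0;
# the minimizer is x* = exp(-1 - c).
c = np.array([0.5, -0.2, 0.1])
x = np.ones(3)
for _ in range(100):
    x = mw_step(x, c + np.log(x) + 1.0)  # gradient of the objective
print(x, np.exp(-1.0 - c))               # the two nearly agree
```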
arXiv Detail & Related papers (2020-06-17T22:11:35Z) - Eigendecomposition-Free Training of Deep Networks for Linear Least-Square Problems [107.3868459697569]
We introduce an eigendecomposition-free approach to training a deep network.
We show that our approach is much more robust than explicit differentiation of the eigendecomposition.
Our method has better convergence properties and yields state-of-the-art results.
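The eigendecomposition-free idea can be conveyed as a loss: rather than differentiating through an eigendecomposition, penalize ||A v||^2 for a unit-norm v, which is smooth in the entries of A. A numpy sketch of the loss and its gradient (our simplification of the paper's objective):

```python
import numpy as np

def eigfree_loss(A, v):
    """Encourage unit-norm v to be a null vector of A (equivalently,
    the zero-eigenvalue eigenvector of A^T A) without calling eig."""
    v = v / np.linalg.norm(v)
    r = A @ v
    return float(r @ r), 2.0 * np.outer(r, v)  # loss, d(loss)/dA

# Toy check: gradient steps on A drive ||A v||^2 to zero.
rng = np.random.default_rng(0)
A, v = rng.normal(size=(4, 4)), rng.normal(size=4)
for _ in range(200):
    loss, g = eigfree_loss(A, v)
    A -= 0.1 * g
print(f"{loss:.2e}")  # ~0: v is now numerically a null vector of A
```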
arXiv Detail & Related papers (2020-04-15T04:29:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.