Simulator HC: Regression-based Online Simulation of Starting Problem-Solution Pairs for Homotopy Continuation in Geometric Vision
- URL: http://arxiv.org/abs/2411.03745v2
- Date: Tue, 25 Mar 2025 22:51:23 GMT
- Title: Simulator HC: Regression-based Online Simulation of Starting Problem-Solution Pairs for Homotopy Continuation in Geometric Vision
- Authors: Xinyue Zhang, Zijia Dai, Wanting Xu, Laurent Kneip
- Abstract summary: Homotopy continuation has been introduced as a plausible alternative to elimination templates. We propose a novel approach to finding solution-problem pairs, where we only need to predict a rough initial solution. We apply this elegant combination to generalized camera resectioning, and also introduce a new solution to the challenging generalized relative pose and scale problem.
- Score: 17.543457476766367
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While automatically generated polynomial elimination templates have sparked great progress in the field of 3D computer vision, there remain many problems for which the degree of the constraints or the number of unknowns leads to intractability. In recent years, homotopy continuation has been introduced as a plausible alternative. However, the method currently depends on expensive parallel tracking of all possible solutions in the complex domain, or a classification network for starting problem-solution pairs trained over a limited set of real-world examples. Our innovation lies in a novel approach to finding solution-problem pairs, where we only need to predict a rough initial solution, with the corresponding problem generated by an online simulator. Subsequently, homotopy continuation is applied to track that single solution back to the original problem. We apply this elegant combination to generalized camera resectioning, and also introduce a new solution to the challenging generalized relative pose and scale problem. As demonstrated, the proposed method successfully compensates the raw error committed by the regressor alone, and leads to state-of-the-art efficiency and success rates.
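To make the simulate-then-track recipe concrete, below is a minimal, hypothetical Python sketch on a toy square polynomial system: a regressor supplies a rough solution, an online simulator builds start parameters for which that guess is an exact solution, and a single predictor-corrector homotopy path is tracked to the target parameters. The system F, the regressor output x_hat, and all names are illustrative assumptions, not the paper's actual geometric-vision solver.

```python
import numpy as np

# Toy stand-in for the geometric-vision constraints: a square polynomial
# system F(x; p) = 0 in two unknowns, parameterized by p. The paper's real
# systems (generalized resectioning, relative pose and scale) are far larger;
# this only illustrates the simulate-then-track mechanism.
def F(x, p):
    return np.array([x[0]**2 + x[1] - p[0],
                     x[0] + x[1]**2 - p[1]])

def J_x(x, p):  # Jacobian of F with respect to the unknowns x
    return np.array([[2.0 * x[0], 1.0],
                     [1.0, 2.0 * x[1]]])

def simulate_start_problem(x_pred):
    """Online simulator: build parameters p_start such that the roughly
    regressed solution x_pred solves F(.; p_start) = 0 exactly."""
    return np.array([x_pred[0]**2 + x_pred[1], x_pred[0] + x_pred[1]**2])

def track_single_path(x_pred, p_target, steps=50, newton_iters=5):
    """Track one solution path along the straight-line homotopy
    H(x, t) = F(x; (1 - t) * p_start + t * p_target) with a Newton corrector."""
    p_start = simulate_start_problem(x_pred)
    x = np.asarray(x_pred, dtype=float)
    for k in range(1, steps + 1):
        p_t = (1.0 - k / steps) * p_start + (k / steps) * p_target
        for _ in range(newton_iters):  # Newton corrector at parameter p_t
            x = x - np.linalg.solve(J_x(x, p_t), F(x, p_t))
    return x

x_hat = np.array([1.3, 1.6])        # hypothetical rough regressor output
p_target = np.array([3.0, 5.0])     # target problem; (1, 2) is an exact solution
x_star = track_single_path(x_hat, p_target)
print(x_star, F(x_star, p_target))  # residual close to zero
```

The key point mirrored here is that the start problem is generated from the predicted solution rather than the other way around, so only one path needs to be tracked instead of all solutions in the complex domain.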
Related papers
- The Differentiable Feasibility Pump [49.55771920271201]
This paper shows that the traditional feasibility pump and many of its follow-ups can be seen as gradient-descent algorithms with specific parameters.
A central aspect of this reinterpretation is observing that the traditional algorithm differentiates the solution of the linear relaxation with respect to its cost.
arXiv Detail & Related papers (2024-11-05T22:26:51Z) - Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum [56.37522020675243]
We provide the first proof of convergence for normalized error feedback algorithms across a wide range of machine learning problems.
We show that due to their larger allowable stepsizes, our new normalized error feedback algorithms outperform their non-normalized counterparts on various tasks.
arXiv Detail & Related papers (2024-10-22T10:19:27Z) - Randomized algorithms and PAC bounds for inverse reinforcement learning in continuous spaces [47.907236421762626]
This work studies discrete-time discounted Markov decision processes with continuous state and action spaces.
We first consider the case in which we have access to the entire expert policy and characterize the set of solutions to the inverse problem.
arXiv Detail & Related papers (2024-05-24T12:53:07Z) - Solving Inverse Problems with Model Mismatch using Untrained Neural Networks within Model-based Architectures [14.551812310439004]
We introduce an untrained forward model residual block within the model-based architecture to match the data consistency in the measurement domain for each instance.
Our approach offers a unified solution that is less parameter-sensitive, requires no additional data, and enables simultaneous fitting of the forward model and reconstruction in a single pass.
arXiv Detail & Related papers (2024-03-07T19:02:13Z) - Optimizing Solution-Samplers for Combinatorial Problems: The Landscape
of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-CSP, Maximum-Weight-Bipartite-Matching, and the Traveling Salesman Problem.
As a byproduct of our analysis we introduce a novel regularization process over vanilla descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z) - Shuffled Autoregression For Motion Interpolation [53.61556200049156]
This work aims to provide a deep-learning solution for the motion interpolation task.
We propose a novel framework, referred to as Shuffled AutoRegression, which extends autoregression to generate in an arbitrary (shuffled) order.
We also propose an approach to constructing a particular kind of dependency graph, with three stages assembled into an end-to-end spatial-temporal motion Transformer.
arXiv Detail & Related papers (2023-06-10T07:14:59Z) - A Novel Plug-and-Play Approach for Adversarially Robust Generalization [38.72514422694518]
We propose a robust framework that employs adversarially robust training to safeguard the ML models against perturbed testing data.
Our contributions can be seen from both computational and statistical perspectives.
arXiv Detail & Related papers (2022-08-19T17:02:55Z) - Exploring the solution space of linear inverse problems with GAN latent
geometry [23.779985842891705]
Inverse problems consist in reconstructing signals from incomplete sets of measurements.
We propose a method to generate multiple reconstructions that fit both the measurements and a data-driven prior learned by a generative adversarial network.
arXiv Detail & Related papers (2022-07-01T14:33:44Z) - Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z) - Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of non-convex problems.
We show that for weakly-convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z) - Learning Iterative Robust Transformation Synchronization [71.73273007900717]
In this work, we avoid handcrafting robust loss functions, and propose to use graph neural networks (GNNs) to learn transformation synchronization.
arXiv Detail & Related papers (2021-11-01T07:03:14Z) - Physics-informed neural network simulation of multiphase poroelasticity
using stress-split sequential training [0.0]
We present a framework for solving problems governed by partial differential equations (PDEs) based on physics-informed neural networks (PINNs).
We find that the approach converges and can solve problems of poroelasticity, including Barry-Mercer's injection-production problem and a two-phase drainage problem.
arXiv Detail & Related papers (2021-10-06T20:09:09Z) - The Neural Network shifted-Proper Orthogonal Decomposition: a Machine
Learning Approach for Non-linear Reduction of Hyperbolic Equations [0.0]
In this work we approach the problem of automatically detecting the correct pre-processing transformation in a statistical learning framework.
The purely data-driven method allowed us to generalise the existing approaches of linear subspace manipulation to non-linear hyperbolic problems with unknown advection fields.
The proposed algorithm has been validated against simple test cases to benchmark its performances and later successfully applied to a multiphase simulation.
arXiv Detail & Related papers (2021-08-14T15:13:35Z) - Residual Gaussian Process: A Tractable Nonparametric Bayesian Emulator
for Multi-fidelity Simulations [6.6903363553912305]
A novel additive structure is introduced in which the highest fidelity solution is written as a sum of the lowest fidelity solution and residuals.
The resulting model is equipped with a closed-form solution for the predictive posterior.
It is shown how active learning can be used to enhance the model, especially with a limited computational budget.
arXiv Detail & Related papers (2021-04-08T12:57:46Z) - An efficient nonconvex reformulation of stagewise convex optimization
problems [21.61123565780424]
We develop a nonconvex reformulation designed to exploit the staged structure of these problems.
For neural network verification, the approach avoids spurious local minima and obtains small gaps in only a few steps.
It can solve problems faster than both off-the-shelf and specialized solvers.
arXiv Detail & Related papers (2020-10-27T14:30:32Z) - Conditional gradient methods for stochastically constrained convex
minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
arXiv Detail & Related papers (2020-07-07T21:26:35Z) - Stochastic gradient algorithms from ODE splitting perspective [0.0]
We present a different view on stochastic optimization, which goes back to splitting schemes for the approximate solution of ODEs.
In this work, we provide a connection between the stochastic gradient descent approach and the first-order splitting scheme for ODEs.
We consider a special case of splitting, inspired by machine learning applications, and derive a new upper bound on the global splitting error for it; a minimal worked example of this connection is sketched after this entry.
arXiv Detail & Related papers (2020-04-19T22:45:32Z)
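As a side note on the last entry above, the descent/splitting connection admits a tiny worked illustration: for a finite-sum objective, applying a first-order (Lie) splitting to the gradient-flow ODE and integrating each sub-flow with one explicit Euler step reproduces an incremental, SGD-style pass over the data. The quadratic objective and all names below are illustrative assumptions, not code from that paper.

```python
import numpy as np

# Finite-sum objective f(x) = sum_i f_i(x) with quadratic terms
# f_i(x) = 0.5 * (a_i . x - b_i)^2.  Gradient flow: dx/dt = -sum_i grad f_i(x).
rng = np.random.default_rng(0)
A = rng.normal(size=(8, 3))           # one row a_i per summand
b = rng.normal(size=8)

def grad_i(x, i):
    return (A[i] @ x - b[i]) * A[i]

def splitting_epoch(x, h):
    """First-order (Lie) splitting of the gradient-flow ODE: integrate the
    sub-flows dx/dt = -grad f_i(x) one after another, each with a single
    explicit Euler step of size h.  This is exactly one incremental
    (SGD-style) pass with step size h."""
    for i in range(len(b)):
        x = x - h * grad_i(x, i)
    return x

def full_gradient_step(x, h):
    """Explicit Euler step on the unsplit flow (full-batch gradient descent)."""
    g = sum(grad_i(x, i) for i in range(len(b)))
    return x - h * g

x0 = np.zeros(3)
h = 0.05
print("splitting / SGD epoch :", splitting_epoch(x0, h))
print("full-gradient step    :", full_gradient_step(x0, h))
# The two differ by O(h^2) per pass -- the splitting error whose global
# bound is the kind of quantity studied in the entry above.
```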
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.