Machine learning the real discriminant locus
- URL: http://arxiv.org/abs/2006.14078v2
- Date: Mon, 8 Aug 2022 13:25:42 GMT
- Title: Machine learning the real discriminant locus
- Authors: Edgar A. Bernal, Jonathan D. Hauenstein, Dhagash Mehta, Margaret H.
Regan, Tingting Tang
- Abstract summary: This article views locating the real discriminant locus as a supervised classification problem in machine learning.
At each sample point, homotopy continuation is used to obtain the number of real solutions to the corresponding system.
One application of having learned the real discriminant locus is to develop a real homotopy method that only tracks the real solution paths.
- Score: 13.63199518246153
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Parameterized systems of polynomial equations arise in many applications in
science and engineering with the real solutions describing, for example,
equilibria of a dynamical system, linkages satisfying design constraints, and
scene reconstruction in computer vision. Since different parameter values can
have a different number of real solutions, the parameter space is decomposed
into regions whose boundary forms the real discriminant locus. This article
views locating the real discriminant locus as a supervised classification
problem in machine learning where the goal is to determine classification
boundaries over the parameter space, with the classes being the number of real
solutions. For multidimensional parameter spaces, this article presents a novel
sampling method which carefully samples the parameter space. At each sample
point, homotopy continuation is used to obtain the number of real solutions to
the corresponding polynomial system. Machine learning techniques including
nearest neighbor and deep learning are used to efficiently approximate the real
discriminant locus. One application of having learned the real discriminant
locus is to develop a real homotopy method that only tracks the real solution
paths, unlike traditional methods which track all complex solution paths.
Examples show that the proposed approach can efficiently approximate
complicated solution boundaries such as those arising from the equilibria of
the Kuramoto model.
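To make the classification framing concrete, here is a minimal sketch (not the authors' implementation): parameters of a simple univariate cubic family are sampled, the number of real roots is counted numerically as a stand-in for the homotopy-continuation counts used in the paper, and a nearest-neighbor classifier approximates the regions bounded by the real discriminant locus. The polynomial family, sampling ranges, and tolerance below are illustrative assumptions.

```python
# Minimal sketch: learn the number-of-real-solutions regions of
# x^3 + p*x + q = 0 over the (p, q) parameter plane.
# (Stand-in example: the paper treats parameterized polynomial systems,
# e.g. Kuramoto equilibria, with homotopy continuation supplying the counts.)
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def num_real_roots(p, q, tol=1e-8):
    """Count real roots of x^3 + p*x + q via numpy's companion-matrix solver."""
    roots = np.roots([1.0, 0.0, p, q])
    return int(np.sum(np.abs(roots.imag) < tol))

# Sample the parameter space (uniform here; the paper uses a careful sampling scheme).
params = rng.uniform(-2.0, 2.0, size=(2000, 2))
labels = np.array([num_real_roots(p, q) for p, q in params])

# Nearest-neighbor classifier approximating the classification boundaries,
# i.e. the real discriminant locus (for this cubic: -4p^3 - 27q^2 = 0).
clf = KNeighborsClassifier(n_neighbors=5).fit(params, labels)

test = np.array([[-1.5, 0.1], [1.0, 1.0]])
print(clf.predict(test))                         # predicted number of real solutions
print([num_real_roots(p, q) for p, q in test])   # direct counts for comparison
```

In the paper, the nearest-neighbor and deep-learning models play the role of `clf`, while homotopy continuation replaces the simple root counter used here.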
Related papers
- Learning Controlled Stochastic Differential Equations [61.82896036131116]
This work proposes a novel method for estimating both drift and diffusion coefficients of continuous, multidimensional, nonlinear controlled differential equations with non-uniform diffusion.
We provide strong theoretical guarantees, including finite-sample bounds for $L^2$, $L^\infty$, and risk metrics, with learning rates adaptive to coefficients' regularity.
Our method is available as an open-source Python library.
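The summary does not spell out the estimator, so the following is only a generic sketch of the kind of task addressed: simulate a one-dimensional controlled SDE with Euler–Maruyama and recover constant drift and diffusion coefficients from the increments by least squares and moment matching. The model and all constants are hypothetical and unrelated to the paper's library.

```python
# Generic sketch (not the paper's method): recover drift and diffusion of
# dX_t = (a*X_t + b*u_t) dt + sigma dW_t from sampled increments.
import numpy as np

rng = np.random.default_rng(1)
a_true, b_true, sigma_true, dt, n = -0.5, 1.0, 0.3, 0.01, 20000

# Simulate with Euler-Maruyama under a simple sinusoidal control input.
x = np.zeros(n + 1)
u = np.sin(0.05 * np.arange(n))
for k in range(n):
    x[k + 1] = x[k] + (a_true * x[k] + b_true * u[k]) * dt \
               + sigma_true * np.sqrt(dt) * rng.standard_normal()

# Least-squares fit of the drift from E[dX | X, u] ~ (a X + b u) dt.
dx = np.diff(x)
A = np.column_stack([x[:-1] * dt, u * dt])
a_hat, b_hat = np.linalg.lstsq(A, dx, rcond=None)[0]

# Diffusion from the quadratic variation of the residuals.
resid = dx - A @ np.array([a_hat, b_hat])
sigma_hat = np.sqrt(np.mean(resid**2) / dt)
print(a_hat, b_hat, sigma_hat)   # should approach -0.5, 1.0, 0.3
```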
arXiv Detail & Related papers (2024-11-04T11:09:58Z) - Approximation Theory, Computing, and Deep Learning on the Wasserstein Space [0.5735035463793009]
We address the challenge of approximating functions in infinite-dimensional spaces from finite samples.
Our focus is on the Wasserstein distance function, which serves as a relevant example.
We adopt three machine learning-based approaches to define functional approximants.
arXiv Detail & Related papers (2023-10-30T13:59:47Z) - Optimizing Solution-Samplers for Combinatorial Problems: The Landscape
of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of Deep Matching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-Bipartite-Bi, Maximum-Weight-Bipartite-Bi, and the Traveling Salesman Problem.
As a byproduct of our analysis, we introduce a novel regularization process over vanilla gradient descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
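The specific regularizer is not described in the summary; as a loose illustration of a solution-sampler trained with policy gradients, the sketch below runs REINFORCE for Max-Cut on a small random graph with an entropy bonus added to the vanilla gradient, one common way to counter vanishing gradients and flat regions (not necessarily the paper's construction).

```python
# Illustrative sketch only: a Bernoulli solution-sampler for Max-Cut trained
# with REINFORCE plus an entropy bonus (the paper's regularizer may differ).
import numpy as np

rng = np.random.default_rng(2)
n = 12
W = np.triu(rng.random((n, n)), 1)
W = W + W.T                                    # symmetric weighted graph, zero diagonal

def cut_value(s):
    """Total weight of edges crossing the cut given by the 0/1 assignment s."""
    crossing = (s[:, None] != s[None, :])
    return 0.5 * np.sum(W * crossing)

theta = np.zeros(n)                            # per-node logits of the sampler
lr, ent_coef, baseline = 0.05, 0.01, 0.0
for step in range(3000):
    p = 1.0 / (1.0 + np.exp(-theta))           # inclusion probabilities
    s = (rng.random(n) < p).astype(float)      # sample one candidate cut
    reward = cut_value(s)
    baseline = 0.9 * baseline + 0.1 * reward   # running baseline to reduce variance
    grad_logp = s - p                          # REINFORCE score function
    grad_entropy = -theta * p * (1 - p)        # gradient of the Bernoulli entropy
    theta += lr * ((reward - baseline) / W.sum() * grad_logp + ent_coef * grad_entropy)

best = (1.0 / (1.0 + np.exp(-theta)) > 0.5).astype(float)
print(cut_value(best), 0.5 * W.sum())          # learned cut vs. expected random cut
```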
arXiv Detail & Related papers (2023-10-08T23:39:38Z) - Spectral operator learning for parametric PDEs without data reliance [6.7083321695379885]
We introduce a novel operator learning-based approach for solving parametric partial differential equations (PDEs) without the need for data harnessing.
The proposed framework demonstrates superior performance compared to existing scientific machine learning techniques.
arXiv Detail & Related papers (2023-10-03T12:37:15Z) - High-dimensional manifold of solutions in neural networks: insights from statistical physics [0.0]
I review the statistical mechanics approach to neural networks, focusing on the paradigmatic example of the perceptron architecture with binary and continuous weights.
I discuss some recent works that unveiled how the zero training error configurations are geometrically arranged.
arXiv Detail & Related papers (2023-09-17T11:10:25Z) - Automated differential equation solver based on the parametric
approximation optimization [77.34726150561087]
The article presents a method that uses an optimization algorithm to obtain a solution from a parameterized approximation.
It allows solving a wide class of equations in an automated manner without changing the algorithm's parameters.
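As a toy illustration of solving an equation by optimizing a parameterized approximation (the cited solver's actual parameterization and optimizer are not given here), the sketch below fits a polynomial ansatz to $u' = -u$, $u(0) = 1$ by minimizing the collocation residual.

```python
# Toy sketch of a "parametric approximation + optimizer" DE solver
# (not the cited implementation): fit u(t) ~ sum_k c_k t^k to u' = -u, u(0) = 1.
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 2.0, 50)                        # collocation points
deg = 6

def residual(c):
    u = np.polyval(c[::-1], t)                       # u(t), coefficients c_0..c_deg
    du = np.polyval(np.polyder(c[::-1]), t)          # u'(t)
    ode = du + u                                     # residual of u' = -u
    bc = np.array([np.polyval(c[::-1], 0.0) - 1.0])  # boundary condition u(0) = 1
    return np.concatenate([ode, bc])

sol = least_squares(residual, x0=np.zeros(deg + 1))
u_approx = np.polyval(sol.x[::-1], t)
print(np.max(np.abs(u_approx - np.exp(-t))))         # max error vs. exact e^{-t}
```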
arXiv Detail & Related papers (2022-05-11T10:06:47Z) - Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
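To make the "representationally contains finite differences" point concrete, here is a small sketch (my own, not from the paper): one round of message passing on a 1D chain graph with hand-picked messages reproduces the centered second-difference stencil used in an explicit heat-equation step; a learned solver would replace these fixed messages with neural networks.

```python
# Sketch: one message-passing round on a 1D chain reproduces the centered
# finite-difference Laplacian (a neural solver learns the message/update
# functions instead of hard-coding them like this).
import numpy as np

n, dx = 64, 1.0 / 64
u = np.sin(2 * np.pi * np.linspace(0.0, 1.0, n, endpoint=False))

def message_passing_laplacian(u, dx):
    lap = np.zeros_like(u)
    for i in range(len(u)):
        for j in (i - 1, i + 1):                 # neighbors on the periodic chain
            lap[i] += u[j % len(u)] - u[i]       # message from node j to node i
        lap[i] /= dx**2                          # node update: aggregate messages
    return lap

# Check against the usual vectorized finite-difference stencil.
fd = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
print(np.max(np.abs(message_passing_laplacian(u, dx) - fd)))   # ~0

# One explicit Euler step of the heat equation u_t = u_xx using the "messages".
dt = 0.2 * dx**2
u_next = u + dt * message_passing_laplacian(u, dx)
```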
arXiv Detail & Related papers (2022-02-07T17:47:46Z) - Deep Learning Approximation of Diffeomorphisms via Linear-Control
Systems [91.3755431537592]
We consider a control system of the form $\dot{x} = \sum_{i=1}^{l} F_i(x)\, u_i$, with linear dependence in the controls.
We use the corresponding flow to approximate the action of a diffeomorphism on a compact ensemble of points.
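A minimal sketch of this setting (with assumed vector fields and controls, not the paper's construction): integrate the flow of $\dot{x} = \sum_i F_i(x)\, u_i$ under piecewise-constant controls by explicit Euler, which yields the flow map one would then fit to a target diffeomorphism on an ensemble of points.

```python
# Sketch: flow of dx/dt = F1(x) u1 + F2(x) u2 under piecewise-constant controls.
# The vector fields and control schedule below are illustrative assumptions.
import numpy as np

F = [lambda x: np.array([1.0, 0.0]),             # constant translation field
     lambda x: np.array([-x[1], x[0]])]          # rotation field

def flow(x0, controls, dt=0.01, steps_per_piece=100):
    """Explicit-Euler flow map applied to a single point x0."""
    x = np.array(x0, dtype=float)
    for u in controls:                           # u = (u1, u2) on each time piece
        for _ in range(steps_per_piece):
            x = x + dt * sum(ui * Fi(x) for ui, Fi in zip(u, F))
    return x

# Apply the flow map to a compact ensemble of points.
ensemble = np.random.default_rng(3).normal(size=(200, 2))
controls = [(0.5, 1.0), (-0.2, 0.8)]             # hypothetical control schedule
images = np.array([flow(p, controls) for p in ensemble])
print(images.shape)
```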
arXiv Detail & Related papers (2021-10-24T08:57:46Z) - Optimal oracle inequalities for solving projected fixed-point equations [53.31620399640334]
We study methods that use a collection of random observations to compute approximate solutions by searching over a known low-dimensional subspace of the Hilbert space.
We show how our results precisely characterize the error of a class of temporal difference learning methods for the policy evaluation problem with linear function approximation.
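Since the summary points to temporal-difference learning with linear function approximation as the instantiation, here is a standard TD(0) sketch on a small random Markov reward process (the problem data are made up; the paper's contribution is the oracle-inequality analysis, not this algorithm).

```python
# Standard TD(0) with linear function approximation on a random Markov reward
# process (illustrative data; the cited paper analyzes such projected
# fixed-point methods, it does not introduce this algorithm).
import numpy as np

rng = np.random.default_rng(4)
n_states, d, gamma = 20, 5, 0.9
P = rng.random((n_states, n_states))
P /= P.sum(axis=1, keepdims=True)             # row-stochastic transition matrix
r = rng.random(n_states)                      # expected reward per state
Phi = rng.standard_normal((n_states, d))      # feature matrix, rows = phi(s)

theta = np.zeros(d)
s = 0
for t in range(100000):
    s_next = rng.choice(n_states, p=P[s])
    td_error = r[s] + gamma * Phi[s_next] @ theta - Phi[s] @ theta
    theta += (1.0 / (1 + t)) ** 0.6 * td_error * Phi[s]   # stochastic TD(0) update
    s = s_next

# Relative error against the exact value function V = (I - gamma P)^{-1} r.
V = np.linalg.solve(np.eye(n_states) - gamma * P, r)
print(np.linalg.norm(Phi @ theta - V) / np.linalg.norm(V))
```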
arXiv Detail & Related papers (2020-12-09T20:19:32Z) - Computational characteristics of feedforward neural networks for solving
a stiff differential equation [0.0]
We study the solution of a simple but fundamental stiff ordinary differential equation modelling a damped system.
We show that it is possible to identify preferable choices to be made for parameters and methods.
Overall we extend the current literature in the field by showing what can be done in order to obtain reliable and accurate results by the neural network approach.
arXiv Detail & Related papers (2020-12-03T12:22:24Z) - Resampling with neural networks for stochastic parameterization in
multiscale systems [0.0]
We present a machine-learning method, used for the conditional resampling of observations or reference data from a fully resolved simulation.
It is based on the probabilistic classification of subsets of reference data, conditioned on macroscopic variables.
We validate our approach on the Lorenz 96 system, using two different parameter settings.
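The Lorenz 96 system used as the testbed is standard; a minimal integration sketch is below. The forcing values stand in for the "two different parameter settings" and, along with the system size, are illustrative assumptions; the paper's resampling and classification machinery is not reproduced here.

```python
# Minimal Lorenz 96 integration (the paper's testbed); the conditional
# resampling / probabilistic classification itself is not sketched here.
import numpy as np

def lorenz96_rhs(x, forcing):
    """dx_k/dt = (x_{k+1} - x_{k-2}) x_{k-1} - x_k + F, with cyclic indices."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def integrate(forcing, n_vars=36, dt=0.01, steps=5000, seed=5):
    x = 8.0 + 0.01 * np.random.default_rng(seed).standard_normal(n_vars)
    traj = np.empty((steps, n_vars))
    for k in range(steps):                    # classical fourth-order Runge-Kutta
        k1 = lorenz96_rhs(x, forcing)
        k2 = lorenz96_rhs(x + 0.5 * dt * k1, forcing)
        k3 = lorenz96_rhs(x + 0.5 * dt * k2, forcing)
        k4 = lorenz96_rhs(x + dt * k3, forcing)
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[k] = x
    return traj

# Two different parameter settings (forcing values are illustrative).
traj_a, traj_b = integrate(forcing=8.0), integrate(forcing=10.0)
print(traj_a.shape, traj_b.shape)
```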
arXiv Detail & Related papers (2020-04-03T10:09:18Z)