Application of machine learning regression models to inverse eigenvalue problems
- URL: http://arxiv.org/abs/2212.04279v1
- Date: Thu, 8 Dec 2022 14:15:01 GMT
- Title: Application of machine learning regression models to inverse eigenvalue problems
- Authors: Nikolaos Pallikarakis and Andreas Ntargaras
- Abstract summary: We study the numerical solution of inverse eigenvalue problems from a machine learning perspective.
Two different problems are considered: the inverse Sturm-Liouville eigenvalue problem for symmetric potentials and the inverse transmission eigenvalue problem for spherically symmetric refractive indices.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we study the numerical solution of inverse eigenvalue problems
from a machine learning perspective. Two different problems are considered: the
inverse Sturm-Liouville eigenvalue problem for symmetric potentials and the
inverse transmission eigenvalue problem for spherically symmetric refractive
indices. Firstly, we solve the corresponding direct problems to produce the
required eigenvalue datasets used to train the machine learning
algorithms. Next, we consider several examples of inverse problems and compare
the performance of each model in predicting the unknown potentials and refractive
indices, respectively, from a given small set of the lowest eigenvalues. The
supervised regression models we use are k-Nearest Neighbours, Random Forests
and Multi-Layer Perceptron. Our experiments show that these machine learning
methods, with appropriate tuning of their parameters, can numerically solve
the examined inverse eigenvalue problems.
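To make the pipeline concrete, here is a minimal sketch of the workflow the abstract describes. Every modelling choice here (the two-coefficient cosine family of symmetric potentials, the interval [0, pi], the finite-difference discretisation, and all hyperparameters) is an illustrative assumption of this sketch, not a detail taken from the paper:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor

N, K = 200, 5                                  # grid size, eigenvalues kept
x = np.linspace(0, np.pi, N + 2)[1:-1]         # interior nodes on [0, pi]
h = x[1] - x[0]

def lowest_eigenvalues(q_vals):
    """Lowest K eigenvalues of -y'' + q(x) y = lambda y, y(0)=y(pi)=0."""
    diag = 2.0 / h**2 + q_vals                 # finite-difference diagonal
    off = -np.ones(N - 1) / h**2               # off-diagonal entries
    return eigh_tridiagonal(diag, off, select="i", select_range=(0, K - 1))[0]

# Assumed two-parameter family of symmetric potentials (illustrative only)
rng = np.random.default_rng(0)
C = rng.uniform(-5, 5, size=(2000, 2))
Lam = np.array([lowest_eigenvalues(c[0] * np.cos(2 * x) + c[1] * np.cos(4 * x))
                for c in C])

# Inverse problem as supervised regression: eigenvalues -> coefficients
X_tr, X_te, y_tr, y_te = train_test_split(Lam, C, random_state=0)
models = {
    "kNN": KNeighborsRegressor(n_neighbors=5),
    "Random Forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "MLP": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                        random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    err = np.abs(model.predict(X_te) - y_te).mean()
    print(f"{name}: mean absolute error on potential coefficients = {err:.3f}")
```

The inverse map is learned directly: the K lowest eigenvalues are the input features and the potential coefficients are the regression targets, mirroring the comparison of k-Nearest Neighbours, Random Forests and Multi-Layer Perceptron described above.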
Related papers
- A Guide to Stochastic Optimisation for Large-Scale Inverse Problems [4.926711494319977]
Stochastic optimisation algorithms are the de facto standard for machine learning with large amounts of data.
We provide a comprehensive account of the state-of-the-art in optimisation from the viewpoint of inverse problems.
We focus on the challenges for optimisation that are unique and are not commonly encountered in machine learning.
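As a rough illustration of what stochastic optimisation looks like on an inverse problem (a toy example of mine, not taken from the guide), each gradient step touches only a random mini-batch of measurement rows:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, batch = 10_000, 100, 64
A = rng.standard_normal((m, n))                 # toy forward operator
x_true = rng.standard_normal(n)
b = A @ x_true + 0.01 * rng.standard_normal(m)  # noisy measurements

x = np.zeros(n)
step = 0.05                                     # constant step; decayed in practice
for _ in range(2_000):
    idx = rng.integers(0, m, size=batch)        # random mini-batch of rows
    grad = 2.0 * A[idx].T @ (A[idx] @ x - b[idx]) / batch
    x -= step * grad
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```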
arXiv Detail & Related papers (2024-06-10T15:02:30Z)
- Probabilistic Unrolling: Scalable, Inverse-Free Maximum Likelihood Estimation for Latent Gaussian Models [69.22568644711113]
We introduce probabilistic unrolling, a method that combines Monte Carlo sampling with iterative linear solvers to circumvent matrix inversions.
Our theoretical analyses reveal that unrolling and backpropagation through the iterations of the solver can accelerate gradient estimation for maximum likelihood estimation.
In experiments on simulated and real data, we demonstrate that probabilistic unrolling learns latent Gaussian models up to an order of magnitude faster than gradient EM, with minimal losses in model performance.
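A sketch of the inverse-free ingredient (my own illustration; the paper's actual estimator may differ): trace terms of the form tr(A^{-1} B), which arise in Gaussian likelihood gradients, can be estimated with Monte Carlo probe vectors and conjugate-gradient solves, so no matrix is ever explicitly inverted:

```python
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
n = 500
M = rng.standard_normal((n, n))
A = M @ M.T / n + np.eye(n)                    # SPD, covariance-like
B = rng.standard_normal((n, n))
B = B @ B.T / n

def trace_Ainv_B(num_probes=50):
    """Hutchinson estimate of tr(A^{-1} B) using CG solves, never inv(A)."""
    est = 0.0
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)    # Rademacher probe vector
        y, _ = cg(A, B @ z)                    # iterative linear solve
        est += z @ y
    return est / num_probes

print("Monte Carlo estimate:", trace_Ainv_B())
print("exact value         :", np.trace(np.linalg.solve(A, B)))
```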
arXiv Detail & Related papers (2023-06-05T21:08:34Z)
- Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness [68.97830259849086]
Most datasets only capture a simpler subproblem and likely suffer from spurious features.
We study adversarial robustness - a local generalization property - to reveal hard, model-specific instances and spurious features.
Unlike in other applications, where perturbation models are designed around subjective notions of imperceptibility, our perturbation models are efficient and sound.
Surprisingly, with such perturbations, a sufficiently expressive neural solver does not suffer from the limitations of the accuracy-robustness trade-off common in supervised learning.
arXiv Detail & Related papers (2021-10-21T07:28:11Z)
- QUBO transformation using Eigenvalue Decomposition [0.5439020425819]
This paper utilizes the eigenvalue decomposition of the underlying Q matrix to alter and improve the search process.
We show significant performance improvements on problems with dominant eigenvalues.
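For intuition, a minimal sketch of how the eigenvalue decomposition of Q can inform the search when one eigenvalue dominates. The rounding heuristic below is my own illustration, not the paper's transformation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
Q = rng.standard_normal((n, n))
Q = (Q + Q.T) / 2                       # symmetrise the QUBO matrix
u = rng.standard_normal(n)
Q -= 2.0 * np.outer(u, u)               # plant a dominant negative eigenvalue

eigvals, eigvecs = np.linalg.eigh(Q)    # eigenvalue decomposition of Q
v = eigvecs[:, 0]                       # eigenvector of most negative eigenvalue
x = (v > 0).astype(int)                 # round to a binary candidate ...
if (1 - x) @ Q @ (1 - x) < x @ Q @ x:   # ... trying both sign patterns
    x = 1 - x

rand = rng.integers(0, 2, size=n)
print("eigen-seeded objective:", x @ Q @ x)
print("random-guess objective:", rand @ Q @ rand)
```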
arXiv Detail & Related papers (2021-06-19T16:58:15Z)
- Analysis of Truncated Orthogonal Iteration for Sparse Eigenvector Problems [78.95866278697777]
We propose two variants of the Truncated Orthogonal Iteration to compute multiple leading eigenvectors with sparsity constraints simultaneously.
We then apply our algorithms to solve the sparse principal component analysis problem for a wide range of test datasets.
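A schematic reconstruction of a truncated orthogonal iteration, inferred from the abstract rather than copied from the paper: each sweep applies a power step, hard-thresholds each column to its s largest-magnitude entries, and re-orthogonalises with a QR factorisation:

```python
import numpy as np

def truncated_orthogonal_iteration(A, r=2, s=10, iters=100, seed=0):
    """Leading r eigenvectors of symmetric A with per-column sparsity s."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Q, _ = np.linalg.qr(rng.standard_normal((n, r)))
    for _ in range(iters):
        Z = A @ Q                                   # power step
        keep = np.argsort(np.abs(Z), axis=0)[-s:]   # s largest entries per column
        T = np.zeros_like(Z)
        for j in range(Z.shape[1]):
            T[keep[:, j], j] = Z[keep[:, j], j]     # hard truncation
        Q, _ = np.linalg.qr(T)                      # re-orthogonalise
    return Q

# Toy test: covariance with two planted sparse leading eigenvectors
n = 100
v1 = np.zeros(n); v1[:5] = 1 / np.sqrt(5)
v2 = np.zeros(n); v2[5:10] = 1 / np.sqrt(5)
A = 10 * np.outer(v1, v1) + 5 * np.outer(v2, v2) + 0.1 * np.eye(n)
Q = truncated_orthogonal_iteration(A, r=2, s=5)
print("support found:", np.nonzero(np.abs(Q).sum(axis=1) > 1e-8)[0])
```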
arXiv Detail & Related papers (2021-03-24T23:11:32Z)
- Machine Learning for Initial Value Problems of Parameter-Dependent Dynamical Systems [0.0]
We consider initial value problems of nonlinear dynamical systems, which include physical parameters.
We examine the mapping from the set of parameters to the discrete values of the trajectories.
We employ feedforward neural networks, which are fitted to data from samples of the trajectories.
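A minimal sketch of this surrogate-modelling setup, using a toy damped oscillator of my own choosing rather than the paper's benchmark systems: sample a physical parameter, integrate the initial value problem numerically, and fit a feedforward network mapping the parameter to the discrete trajectory values:

```python
import numpy as np
from scipy.integrate import solve_ivp
from sklearn.neural_network import MLPRegressor

t_eval = np.linspace(0, 5, 50)

def trajectory(mu):
    """Damped oscillator x'' + mu x' + x = 0, x(0)=1, x'(0)=0."""
    rhs = lambda t, y: [y[1], -mu * y[1] - y[0]]
    sol = solve_ivp(rhs, (0, 5), [1.0, 0.0], t_eval=t_eval)
    return sol.y[0]                        # positions at the discrete times

rng = np.random.default_rng(0)
mus = rng.uniform(0.1, 2.0, size=500)      # sampled physical parameters
Y = np.array([trajectory(m) for m in mus])

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
net.fit(mus.reshape(-1, 1), Y)             # parameter -> whole trajectory

mu_test = 1.3
err = np.abs(net.predict([[mu_test]])[0] - trajectory(mu_test)).max()
print(f"max surrogate error at mu={mu_test}: {err:.3e}")
```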
arXiv Detail & Related papers (2021-01-12T16:50:58Z)
- Consistency analysis of bilevel data-driven learning in inverse problems [1.0705399532413618]
We consider the adaptive learning of the regularization parameter from data by means of optimization.
We demonstrate how to implement our framework on linear inverse problems.
Online numerical schemes are derived using the gradient descent method.
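A hedged sketch of the bilevel idea on a linear inverse problem (my own minimal example; the paper's online schemes are more general): the lower level is Tikhonov regularisation and the upper level adjusts the regularisation parameter by gradient descent on a supervised loss, with the gradient obtained by implicit differentiation:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 30
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
y = A @ x_true + 0.5 * rng.standard_normal(m)

def lower_level(lam):
    """Tikhonov solution x(lam) = (A^T A + lam I)^{-1} A^T y."""
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

lam = 1.0
for _ in range(200):
    x = lower_level(lam)
    # Implicit differentiation: dx/dlam = -(A^T A + lam I)^{-1} x
    dx = -np.linalg.solve(A.T @ A + lam * np.eye(n), x)
    grad = 2 * (x - x_true) @ dx            # chain rule on ||x(lam) - x_true||^2
    lam = max(lam - 0.1 * grad, 1e-8)       # keep the parameter positive
print("learned regularisation parameter:", lam)
```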
arXiv Detail & Related papers (2020-07-06T12:23:29Z)
- Total Deep Variation: A Stable Regularizer for Inverse Problems [71.90933869570914]
We introduce the data-driven general-purpose total deep variation regularizer.
In its core, a convolutional neural network extracts local features on multiple scales and in successive blocks.
We achieve state-of-the-art results for numerous imaging tasks.
arXiv Detail & Related papers (2020-06-15T21:54:15Z)
- Joint learning of variational representations and solvers for inverse problems with partially-observed data [13.984814587222811]
In this paper, we design an end-to-end framework for learning variational formulations for inverse problems in a supervised setting.
The variational cost and the gradient-based solver are both stated as neural networks using automatic differentiation for the latter.
This leads to a data-driven discovery of variational models.
arXiv Detail & Related papers (2020-06-05T19:53:34Z)
- Eigendecomposition-Free Training of Deep Networks for Linear Least-Square Problems [107.3868459697569]
We introduce an eigendecomposition-free approach to training a deep network.
We show that our approach is much more robust than explicit differentiation of the eigendecomposition.
Our method has better convergence properties and yields state-of-the-art results.
arXiv Detail & Related papers (2020-04-15T04:29:34Z)
- Total Deep Variation for Linear Inverse Problems [71.90933869570914]
We propose a novel learnable general-purpose regularizer exploiting recent architectural design patterns from deep learning.
We show state-of-the-art performance for classical image restoration and medical image reconstruction problems.
arXiv Detail & Related papers (2020-01-14T19:01:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site. This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.