Application of machine learning regression models to inverse eigenvalue
problems
- URL: http://arxiv.org/abs/2212.04279v1
- Date: Thu, 8 Dec 2022 14:15:01 GMT
- Title: Application of machine learning regression models to inverse eigenvalue
problems
- Authors: Nikolaos Pallikarakis and Andreas Ntargaras
- Abstract summary: We study the numerical solution of inverse eigenvalue problems from a machine learning perspective.
Two different problems are considered: the inverse Sturm-Liouville eigenvalue problem for symmetric potentials and the inverse transmission eigenvalue problem for spherically symmetric refractive indices.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we study the numerical solution of inverse eigenvalue problems
from a machine learning perspective. Two different problems are considered: the
inverse Sturm-Liouville eigenvalue problem for symmetric potentials and the
inverse transmission eigenvalue problem for spherically symmetric refractive
indices. First, we solve the corresponding direct problems to produce the
eigenvalue datasets required to train the machine learning algorithms. Next, we
consider several examples of inverse problems and compare how well each model
predicts the unknown potentials and refractive indices, respectively, from a
given small set of the lowest eigenvalues. The supervised regression models we
use are k-Nearest Neighbours, Random Forests and Multi-Layer Perceptron. Our
experiments show that these machine learning methods, with appropriate
parameter tuning, can numerically solve the examined inverse eigenvalue
problems.
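The pipeline described in the abstract (solve the direct problem to build an eigenvalue dataset, then fit a regressor mapping eigenvalues back to the potential) can be sketched as follows. This is an illustrative toy, not the authors' code: the two-coefficient potential family q(x) = a + b*cos(2x), the grid size, and the choice of scikit-learn's KNeighborsRegressor are all assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def lowest_eigenvalues(q_vals, n_eigs=5):
    """Direct problem: lowest Dirichlet eigenvalues of -u'' + q(x)u = lambda*u
    on (0, pi), discretised by second-order finite differences."""
    n = len(q_vals)
    h = np.pi / (n + 1)
    lap = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    return np.linalg.eigh(lap + np.diag(q_vals))[0][:n_eigs]

rng = np.random.default_rng(0)
n_grid, n_samples = 64, 400
x = np.linspace(0.0, np.pi, n_grid + 2)[1:-1]       # interior grid points

# Hypothetical family of symmetric potentials q(x) = a + b*cos(2x)
coeffs = rng.uniform(-2.0, 2.0, size=(n_samples, 2))
Q = coeffs[:, [0]] + coeffs[:, [1]] * np.cos(2.0 * x)
eigs = np.array([lowest_eigenvalues(q) for q in Q])  # eigenvalue dataset

# Inverse problem as regression: lowest eigenvalues -> potential coefficients
model = KNeighborsRegressor(n_neighbors=5).fit(eigs[:300], coeffs[:300])
pred = model.predict(eigs[300:])
print(np.mean(np.abs(pred - coeffs[300:])))          # mean coefficient error
```

The other two models from the paper plug into the same interface: swap in RandomForestRegressor or MLPRegressor and keep the fit/predict calls unchanged.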
Related papers
- Bayesian Model Parameter Learning in Linear Inverse Problems with Application in EEG Focal Source Imaging [49.1574468325115]
Inverse problems can be described as limited-data problems in which the signal of interest cannot be observed directly.
We studied a linear inverse problem that included an unknown non-linear model parameter.
We utilized a Bayesian model-based learning approach that allowed signal recovery and subsequently estimation of the model parameter.
arXiv Detail & Related papers (2025-01-07T18:14:24Z)
- TAEN: A Model-Constrained Tikhonov Autoencoder Network for Forward and Inverse Problems [0.6144680854063939]
Real-time solvers for forward and inverse problems are essential in engineering and science applications.
Machine learning surrogate models have emerged as promising alternatives to traditional methods, offering substantially reduced computational time.
These models typically demand extensive training datasets to achieve robust generalization across diverse scenarios.
We propose a novel Tikhonov autoencoder model-constrained framework, called TAEN, capable of learning both forward and inverse surrogate models using a single arbitrary observation sample.
arXiv Detail & Related papers (2024-12-09T21:36:42Z)
- A Guide to Stochastic Optimisation for Large-Scale Inverse Problems [4.926711494319977]
Stochastic optimisation algorithms are the de facto standard for machine learning with large amounts of data.
Handling only a subset of available data in each optimisation step dramatically reduces the per-iteration computational costs.
We focus on the potential and the challenges for optimisation that are unique to variational regularisation for inverse imaging problems.
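As a minimal illustration of the subset-per-step idea, here is stochastic gradient descent on a toy linear least-squares inverse problem, where each iteration touches only a random minibatch of the data; the forward operator, step size and batch size are arbitrary choices for the sketch, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 2000, 20
A = rng.normal(size=(m, n))                  # toy forward operator
x_true = rng.normal(size=n)
b = A @ x_true + 0.01 * rng.normal(size=m)   # noisy measurements

x = np.zeros(n)
step, batch = 0.01, 64
for _ in range(2000):
    idx = rng.choice(m, size=batch, replace=False)   # random data subset
    grad = A[idx].T @ (A[idx] @ x - b[idx]) / batch  # minibatch gradient
    x -= step * grad                                 # per-iteration cost O(batch*n)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(rel_err)
```

Each step costs O(batch*n) instead of O(m*n), which is the per-iteration saving the survey refers to.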
arXiv Detail & Related papers (2024-06-10T15:02:30Z)
- Probabilistic Unrolling: Scalable, Inverse-Free Maximum Likelihood Estimation for Latent Gaussian Models [69.22568644711113]
We introduce probabilistic unrolling, a method that combines Monte Carlo sampling with iterative linear solvers to circumvent matrix inversions.
Our theoretical analyses reveal that unrolling and backpropagation through the iterations of the solver can accelerate gradient estimation for maximum likelihood estimation.
In experiments on simulated and real data, we demonstrate that probabilistic unrolling learns latent Gaussian models up to an order of magnitude faster than gradient EM, with minimal losses in model performance.
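The "inverse-free" ingredient, replacing an explicit matrix inversion A^{-1}b with an iterative linear solver, can be illustrated with a hand-rolled conjugate gradient solve. This sketch shows only that building block under assumed toy dimensions, not the paper's full Monte Carlo unrolling scheme.

```python
import numpy as np

def conjugate_gradient(A, b, iters=50, tol=1e-10):
    """Solve A x = b for symmetric positive-definite A without forming A^{-1}."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(iters):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(2)
M = rng.normal(size=(30, 30))
A = M @ M.T + 30.0 * np.eye(30)   # SPD covariance-like matrix
b = rng.normal(size=30)
x = conjugate_gradient(A, b)
res = np.linalg.norm(A @ x - b)
print(res)
```

Only matrix-vector products with A are needed, which is what makes the approach scale to large latent Gaussian models.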
arXiv Detail & Related papers (2023-06-05T21:08:34Z)
- Analysis of Truncated Orthogonal Iteration for Sparse Eigenvector Problems [78.95866278697777]
We propose two variants of the Truncated Orthogonal Iteration to compute multiple leading eigenvectors with sparsity constraints simultaneously.
We then apply our algorithms to solve the sparse principal component analysis problem for a wide range of test datasets.
arXiv Detail & Related papers (2021-03-24T23:11:32Z)
- Machine Learning for Initial Value Problems of Parameter-Dependent Dynamical Systems [0.0]
We consider initial value problems of nonlinear dynamical systems, which include physical parameters.
We examine the mapping from the set of parameters to the discrete values of the trajectories.
We employ feedforward neural networks, which are fitted to data from samples of the trajectories.
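The parameter-to-trajectory map can be sketched with a scalar toy IVP x'(t) = -a*x(t), x(0) = 1, whose discrete trajectory samples are fitted by a feedforward network. The ODE, the sampling scheme and the use of scikit-learn's MLPRegressor are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy parameter-dependent IVP: x'(t) = -a x(t), x(0) = 1, so x(t) = exp(-a t).
rng = np.random.default_rng(3)
t = np.linspace(0.0, 2.0, 10)                    # discrete trajectory times
a_train = rng.uniform(0.5, 3.0, size=(500, 1))   # sampled physical parameter
X_train = np.exp(-a_train * t)                   # trajectories, shape (500, 10)

# Fit the map a -> (x(t_1), ..., x(t_10)) with a feedforward network
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
net.fit(a_train, X_train)

a_test = np.array([[1.7]])
err = float(np.max(np.abs(net.predict(a_test) - np.exp(-1.7 * t))))
print(err)
```

For a genuinely nonlinear system one would replace the analytic solution with a numerical ODE integrator when generating the training trajectories.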
arXiv Detail & Related papers (2021-01-12T16:50:58Z)
- Consistency analysis of bilevel data-driven learning in inverse problems [1.0705399532413618]
We consider the adaptive learning of the regularization parameter from data by means of optimization.
We demonstrate how to implement our framework on linear inverse problems.
Online numerical schemes are derived using the gradient descent method.
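A minimal sketch of the bilevel idea for a linear inverse problem: the lower level computes the Tikhonov solution x(alpha), and gradient descent adapts the regularization parameter alpha to reduce an upper-level loss against known training ground truth. Problem sizes, noise level and step size are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 100, 30
A = rng.normal(size=(m, n))
x_star = rng.normal(size=n)                  # training ground truth (known)
b = A @ x_star + 0.5 * rng.normal(size=m)    # noisy data

def solve_lower(alpha):
    """Lower level: x(alpha) = argmin ||Ax - b||^2 + alpha*||x||^2."""
    H = A.T @ A + alpha * np.eye(n)
    return np.linalg.solve(H, A.T @ b), H

# Upper level: minimise L(alpha) = ||x(alpha) - x_star||^2 by gradient descent,
# using the closed-form sensitivity dx/dalpha = -H^{-1} x(alpha).
alpha, lr = 1.0, 0.05
for _ in range(200):
    x, H = solve_lower(alpha)
    dx_dalpha = -np.linalg.solve(H, x)
    grad = 2.0 * dx_dalpha @ (x - x_star)
    alpha = max(alpha - lr * grad, 1e-6)     # keep the parameter positive

x_final, _ = solve_lower(alpha)
print(alpha, np.linalg.norm(x_final - x_star))
```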
arXiv Detail & Related papers (2020-07-06T12:23:29Z)
- Total Deep Variation: A Stable Regularizer for Inverse Problems [71.90933869570914]
We introduce the data-driven general-purpose total deep variation regularizer.
In its core, a convolutional neural network extracts local features on multiple scales and in successive blocks.
We achieve state-of-the-art results for numerous imaging tasks.
arXiv Detail & Related papers (2020-06-15T21:54:15Z)
- Joint learning of variational representations and solvers for inverse problems with partially-observed data [13.984814587222811]
In this paper, we design an end-to-end framework for learning actual variational frameworks for inverse problems in a supervised setting.
The variational cost and the gradient-based solver are both stated as neural networks using automatic differentiation for the latter.
This leads to a data-driven discovery of variational models.
arXiv Detail & Related papers (2020-06-05T19:53:34Z)
- Eigendecomposition-Free Training of Deep Networks for Linear Least-Square Problems [107.3868459697569]
We introduce an eigendecomposition-free approach to training a deep network.
We show that our approach is much more robust than explicit differentiation of the eigendecomposition.
Our method has better convergence properties and yields state-of-the-art results.
arXiv Detail & Related papers (2020-04-15T04:29:34Z)
- Total Deep Variation for Linear Inverse Problems [71.90933869570914]
We propose a novel learnable general-purpose regularizer exploiting recent architectural design patterns from deep learning.
We show state-of-the-art performance for classical image restoration and medical image reconstruction problems.
arXiv Detail & Related papers (2020-01-14T19:01:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.