Solving engineering eigenvalue problems with neural networks using the Rayleigh quotient
- URL: http://arxiv.org/abs/2506.04375v1
- Date: Wed, 04 Jun 2025 18:45:27 GMT
- Title: Solving engineering eigenvalue problems with neural networks using the Rayleigh quotient
- Authors: Conor Rowan, John Evans, Kurt Maute, Alireza Doostan
- Abstract summary: We show that a neural network discretization of the eigenfunction offers unique advantages for handling continuous eigenvalue problems. We also discuss the utility of harmonic functions as a spectral basis for approximating solutions to partial differential equations.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: From characterizing the speed of a thermal system's response to computing natural modes of vibration, eigenvalue analysis is ubiquitous in engineering. In spite of this, eigenvalue problems have received relatively little treatment compared to standard forward and inverse problems in the physics-informed machine learning literature. In particular, neural network discretizations of solutions to eigenvalue problems have seen only a handful of studies. Owing to their nonlinearity, neural network discretizations prevent the conversion of the continuous eigenvalue differential equation into a standard discrete eigenvalue problem. In this setting, eigenvalue analysis requires more specialized techniques. Using a neural network discretization of the eigenfunction, we show that a variational form of the eigenvalue problem called the "Rayleigh quotient" in tandem with a Gram-Schmidt orthogonalization procedure is a particularly simple and robust approach to find the eigenvalues and their corresponding eigenfunctions. This method is shown to be useful for finding sets of harmonic functions on irregular domains, parametric and nonlinear eigenproblems, and high-dimensional eigenanalysis. We also discuss the utility of harmonic functions as a spectral basis for approximating solutions to partial differential equations. Through various examples from engineering mechanics, the combination of the Rayleigh quotient objective, Gram-Schmidt procedure, and the neural network discretization of the eigenfunction is shown to offer unique advantages for handling continuous eigenvalue problems.
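The core idea described in the abstract can be illustrated on a discrete analogue: minimize the Rayleigh quotient by gradient descent to find the smallest eigenvalue, then use Gram-Schmidt projection against previously found eigenvectors to deflate known modes and recover the next one. The sketch below substitutes a plain coefficient vector and a finite-difference matrix for the paper's neural-network discretization; it is an illustrative stand-in, not the authors' implementation, and all parameter choices (grid size, step size, iteration count) are arbitrary.

```python
import numpy as np

# Illustrative sketch only: a discrete stand-in for the paper's method, with a
# plain coefficient vector in place of the neural-network discretization.
# Model problem: -u'' = lambda * u on (0, pi), u(0) = u(pi) = 0,
# whose exact eigenvalues are 1, 4, 9, ...

n = 50                                   # interior grid points (arbitrary choice)
h = np.pi / (n + 1)
# Second-order finite-difference Laplacian (symmetric tridiagonal matrix)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def rayleigh(A, v):
    """Rayleigh quotient R(v) = (v^T A v) / (v^T v)."""
    return (v @ A @ v) / (v @ v)

def minimize_rayleigh(A, deflate=(), steps=20000, seed=0):
    """Gradient descent on the Rayleigh quotient; Gram-Schmidt projection
    against previously found eigenvectors deflates known modes."""
    rng = np.random.default_rng(seed)
    eta = 0.9 * h**2 / 4.0               # ~0.9 / lambda_max keeps the iteration stable
    v = rng.standard_normal(A.shape[0])
    for _ in range(steps):
        for w in deflate:                # Gram-Schmidt: stay orthogonal to known modes
            v -= (w @ v) * w
        v /= np.linalg.norm(v)
        r = rayleigh(A, v)
        v -= eta * (A @ v - r * v)       # gradient of R at a unit vector, up to a factor of 2
    for w in deflate:                    # final projection and normalization
        v -= (w @ v) * w
    v /= np.linalg.norm(v)
    return rayleigh(A, v), v

lam1, v1 = minimize_rayleigh(A)                 # smallest eigenvalue, close to 1
lam2, v2 = minimize_rayleigh(A, deflate=[v1])   # next eigenvalue, close to 4
print(lam1, lam2)
```

In the paper's setting, the coefficient vector is replaced by a neural network evaluated at quadrature points, so the quotient and orthogonality constraints become integrals estimated numerically; the optimization loop is otherwise analogous.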
Related papers
- Annealing-based approach to solving partial differential equations [0.0]
Discretizing a PDE yields a system of linear equations.
A general eigenvalue problem can be transformed into an optimization problem.
The proposed algorithm requires iterative computations.
arXiv Detail & Related papers (2024-06-25T08:30:00Z)
- Application of machine learning regression models to inverse eigenvalue problems [0.0]
We study the numerical solution of inverse eigenvalue problems from a machine learning perspective.
Two different problems are considered: the inverse Sturm-Liouville eigenvalue problem for symmetric potentials and the inverse transmission eigenvalue problem for spherically symmetric refractive indices.
arXiv Detail & Related papers (2022-12-08T14:15:01Z)
- Identifiability and Asymptotics in Learning Homogeneous Linear ODE Systems from Discrete Observations [114.17826109037048]
Ordinary Differential Equations (ODEs) have recently gained a lot of attention in machine learning.
However, theoretical aspects, e.g., identifiability and properties of statistical estimation, are still obscure.
This paper derives a sufficient condition for the identifiability of homogeneous linear ODE systems from a sequence of equally-spaced error-free observations sampled from a single trajectory.
arXiv Detail & Related papers (2022-10-12T06:46:38Z)
- Physics-Informed Neural Networks for Quantum Eigenvalue Problems [1.2891210250935146]
Eigenvalue problems are critical to several fields of science and engineering.
We use unsupervised neural networks for discovering eigenfunctions and eigenvalues for differential eigenvalue problems.
The network optimization is data-free and depends solely on the predictions of the neural network.
arXiv Detail & Related papers (2022-02-24T18:29:39Z)
- Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
- Convolutional Filtering and Neural Networks with Non Commutative Algebras [153.20329791008095]
We study the generalization of non commutative convolutional neural networks.
We show that non commutative convolutional architectures can be stable to deformations on the space of operators.
arXiv Detail & Related papers (2021-08-23T04:22:58Z)
- Minimax Estimation of Linear Functions of Eigenvectors in the Face of Small Eigen-Gaps [95.62172085878132]
Eigenvector perturbation analysis plays a vital role in various statistical data science applications.
We develop a suite of statistical theory that characterizes the perturbation of arbitrary linear functions of an unknown eigenvector.
In order to mitigate a non-negligible bias issue inherent to the natural "plug-in" estimator, we develop de-biased estimators.
arXiv Detail & Related papers (2021-04-07T17:55:10Z)
- Gross misinterpretation of a conditionally solvable eigenvalue equation [0.0]
We solve an eigenvalue equation that appears in several papers about a wide range of physical problems.
We compare the resulting eigenvalues with those provided by the truncation condition.
In this way we prove that those physical predictions are merely artifacts of the truncation condition.
arXiv Detail & Related papers (2020-11-12T15:08:11Z)
- Unsupervised Neural Networks for Quantum Eigenvalue Problems [1.2891210250935146]
We present a novel unsupervised neural network for discovering eigenfunctions and eigenvalues for differential eigenvalue problems.
A scanning mechanism is embedded allowing the method to find an arbitrary number of solutions.
arXiv Detail & Related papers (2020-10-10T19:34:37Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in a structure suitable for neural networks.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Eigendecomposition-Free Training of Deep Networks for Linear Least-Square Problems [107.3868459697569]
We introduce an eigendecomposition-free approach to training a deep network.
We show that our approach is much more robust than explicit differentiation of the eigendecomposition.
Our method has better convergence properties and yields state-of-the-art results.
arXiv Detail & Related papers (2020-04-15T04:29:34Z)
- Solving high-dimensional eigenvalue problems using deep neural networks: A diffusion Monte Carlo like approach [14.558626910178127]
The eigenvalue problem is reformulated as a fixed point problem of the semigroup flow induced by the operator.
The method shares a similar spirit with diffusion Monte Carlo but augments a direct approximation to the eigenfunction through a neural-network ansatz.
Our approach is able to provide accurate eigenvalue and eigenfunction approximations in several numerical examples.
arXiv Detail & Related papers (2020-02-07T03:08:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.