Neural Networks Based on Power Method and Inverse Power Method for
Solving Linear Eigenvalue Problems
- URL: http://arxiv.org/abs/2209.11134v5
- Date: Sun, 16 Jul 2023 01:46:47 GMT
- Title: Neural Networks Based on Power Method and Inverse Power Method for
Solving Linear Eigenvalue Problems
- Authors: Qihong Yang, Yangtao Deng, Yu Yang, Qiaolin He, Shiquan Zhang
- Abstract summary: We propose two kinds of neural networks, inspired by the power method and the inverse power method, to solve linear eigenvalue problems.
The eigenfunction of the eigenvalue problem is learned by the neural network.
We show that accurate eigenvalue and eigenfunction approximations can be obtained by our methods.
- Score: 4.3209899858935366
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this article, we propose two kinds of neural networks, inspired
by the power method and the inverse power method, to solve linear eigenvalue
problems. These networks share the ideas of the traditional iterative methods:
the differential operator is realized by automatic differentiation, the
eigenfunction of the eigenvalue problem is learned by the neural network, and
the iterative algorithms are implemented by optimizing specially defined loss
functions. The largest positive eigenvalue, the smallest eigenvalue, and
interior eigenvalues (given suitable prior knowledge) can all be computed
efficiently. We examine the applicability and accuracy of our methods through
numerical experiments in one, two, and higher dimensions. Numerical results
show that accurate eigenvalue and eigenfunction approximations are obtained by
our methods.
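The abstract describes the pattern concretely enough to sketch: a neural network represents the eigenfunction, automatic differentiation applies the differential operator, and an outer loop mimics the classical (inverse) power iteration v_{k+1} = L^{-1} v_k / ||L^{-1} v_k||. Below is a minimal PyTorch sketch of an inverse-power-method-style solver for the 1-D model problem -u'' = lambda * u on (0,1) with u(0) = u(1) = 0 (exact smallest eigenvalue pi^2 ~ 9.87); the architecture, boundary-condition trick, loss, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of an inverse-power-method-style
# neural eigensolver for -u'' = lambda * u on (0,1), u(0) = u(1) = 0.
import torch

torch.manual_seed(0)

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def u(x):
    # Hard-enforce the Dirichlet boundary conditions u(0) = u(1) = 0
    # (a common trick; an assumption here, not necessarily the paper's).
    return x * (1.0 - x) * net(x)

def Lu(x):
    # Apply the operator -d^2/dx^2 via automatic differentiation.
    x = x.clone().requires_grad_(True)
    ux = u(x)
    du = torch.autograd.grad(ux, x, torch.ones_like(ux), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    return -d2u

xs = torch.rand(256, 1)                       # collocation points in (0,1)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
v = torch.ones(256, 1)                        # initial iterate v_0

for k in range(20):                           # outer inverse-power iterations
    for _ in range(500):                      # inner solve of  -w'' = v_k
        opt.zero_grad()
        loss = torch.mean((Lu(xs) - v) ** 2)
        loss.backward()
        opt.step()
    with torch.no_grad():
        w = u(xs)
        lam = (w * v).sum() / (w * w).sum()   # lambda_min estimate: <w,v>/<w,w>
        v = w / torch.sqrt((w * w).mean())    # normalized next iterate
    print(f"outer step {k}: lambda ~ {lam.item():.4f}")
```

Each outer pass performs one inverse-power step: the inner loop "solves" -w'' = v_k by minimizing the collocation residual, after which w is normalized into the next iterate. The power-method variant for the largest eigenvalue would instead apply the operator directly to the frozen iterate and fit the network to the normalized image.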
Related papers
- Distributed Cooperative AI for Large-Scale Eigenvalue Computations Using Neural Networks [0.0]
This paper presents a novel method for eigenvalue computation using a distributed cooperative neural network framework.
Our algorithm enables multiple autonomous agents to collaboratively estimate the smallest eigenvalue of large matrices.
arXiv Detail & Related papers (2024-09-10T09:26:55Z)
- Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z)
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- Physics-Informed Neural Networks for Quantum Eigenvalue Problems [1.2891210250935146]
Eigenvalue problems are critical to several fields of science and engineering.
We use unsupervised neural networks for discovering eigenfunctions and eigenvalues for differential eigenvalue problems.
The network optimization is data-free and depends solely on the predictions of the neural network.
arXiv Detail & Related papers (2022-02-24T18:29:39Z)
- Going Beyond Linear RL: Sample Efficient Neural Function Approximation [76.57464214864756]
We study function approximation with two-layer neural networks.
Our results significantly improve upon what can be attained with linear (or eluder dimension) methods.
arXiv Detail & Related papers (2021-07-14T03:03:56Z)
- Meta-Solver for Neural Ordinary Differential Equations [77.8918415523446]
We investigate how variability in the solver space can improve the performance of neural ODEs.
We show that the right choice of solver parameterization can significantly affect the robustness of neural ODE models to adversarial attacks.
arXiv Detail & Related papers (2021-03-15T17:26:34Z)
- Supervised learning in Hamiltonian reconstruction from local measurements on eigenstates [0.45880283710344055]
Reconstructing a system Hamiltonian through measurements on its eigenstates is an important inverse problem in quantum physics.
In this work, we discuss this problem in more depth and apply supervised learning with neural networks to solve it.
arXiv Detail & Related papers (2020-07-12T11:37:17Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
We provide the first tractable estimation procedure for SEMs based on NNs, with provable convergence and without the need for sample splitting.
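To make that min-max formulation concrete, here is a minimal gradient-descent-ascent sketch using a standard adversarial moment-matching objective, L(f, g) = E[(y - f(x)) g(x)] - 0.5 E[g(x)^2], where the adversary g ascends and the primal network f descends; the toy data and all hyperparameters are assumptions, and this shows the generic pattern rather than the paper's exact objective.

```python
# Generic min-max (gradient-descent-ascent) training pattern; the objective
# below is a standard adversarial moment-matching loss, used here only as an
# illustration of the two-player setup the summary describes.
import torch

torch.manual_seed(0)
x = torch.rand(512, 1)
y = 2.0 * x + 0.1 * torch.randn(512, 1)        # toy data with E[y|x] = 2x

def mlp():
    return torch.nn.Sequential(
        torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

f, g = mlp(), mlp()                            # primal player f, adversary g
opt_f = torch.optim.Adam(f.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(g.parameters(), lr=1e-3)

for step in range(2000):
    # Adversary ascends L(f,g) = E[(y - f(x)) g(x)] - 0.5 E[g(x)^2].
    gx = g(x)
    loss_g = -(((y - f(x).detach()) * gx).mean() - 0.5 * (gx ** 2).mean())
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    # Primal player descends (the g-only regularizer drops out of its loss).
    loss_f = ((y - f(x)) * g(x).detach()).mean()
    opt_f.zero_grad(); loss_f.backward(); opt_f.step()
```

At equilibrium, the adversary's best response is the conditional residual E[y - f(x) | x], so driving the game value down pushes f toward satisfying the moment condition.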
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Eigendecomposition-Free Training of Deep Networks for Linear Least-Square Problems [107.3868459697569]
We introduce an eigendecomposition-free approach to training a deep network.
We show that our approach is much more robust than explicit differentiation of the eigendecomposition.
Our method has better convergence properties and yields state-of-the-art results.
arXiv Detail & Related papers (2020-04-15T04:29:34Z)
- Solving high-dimensional eigenvalue problems using deep neural networks: A diffusion Monte Carlo like approach [14.558626910178127]
The eigenvalue problem is reformulated as a fixed point problem of the semigroup flow induced by the operator.
The method shares a similar spirit with diffusion Monte Carlo but adds a direct approximation of the eigenfunction through a neural-network ansatz.
Our approach is able to provide accurate eigenvalue and eigenfunction approximations in several numerical examples.
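The fixed-point reformulation is easy to illustrate on a small matrix discretization (the paper itself targets high dimensions with a neural-network ansatz; the 1-D finite-difference Laplacian below is only an illustrative stand-in): the map u -> e^{-tL} u / ||e^{-tL} u|| has the ground state as a fixed point, and the eigenvalue is recovered from the decay rate of one semigroup step.

```python
# Fixed-point iteration of the semigroup flow on a finite-difference
# discretization of -d^2/dx^2 on (0,1) with Dirichlet boundary conditions.
# Illustrative stand-in only; the paper uses a neural-network ansatz instead.
import numpy as np
from scipy.linalg import expm

n, t = 100, 0.01
h = 1.0 / (n + 1)
L = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
P = expm(-t * L)                          # semigroup operator e^{-tL}

u = np.ones(n)
u /= np.linalg.norm(u)
for k in range(200):
    w = P @ u                             # one semigroup step
    lam = -np.log(np.linalg.norm(w)) / t  # eigenvalue from the decay rate
    u = w / np.linalg.norm(w)             # fixed point of u -> Pu / ||Pu||

print(lam)   # ~ pi^2 ~ 9.87, the smallest eigenvalue of -d^2/dx^2
```

Since e^{-tL} damps each mode by e^{-t*lambda}, repeated normalized application converges to the eigenvector of the smallest eigenvalue, exactly like the power method applied to the semigroup.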
arXiv Detail & Related papers (2020-02-07T03:08:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.