Physics-Guided Problem Decomposition for Scaling Deep Learning of
High-dimensional Eigen-Solvers: The Case of Schr\"{o}dinger's Equation
- URL: http://arxiv.org/abs/2202.05994v2
- Date: Tue, 15 Feb 2022 15:49:21 GMT
- Title: Physics-Guided Problem Decomposition for Scaling Deep Learning of
High-dimensional Eigen-Solvers: The Case of Schr\"{o}dinger's Equation
- Authors: Sangeeta Srivastava, Samuel Olin, Viktor Podolskiy, Anuj Karpatne,
Wei-Cheng Lee, Anish Arora
- Abstract summary: Deep neural networks (NNs) have been proposed as a viable alternative to traditional simulation-driven approaches for solving high-dimensional eigenvalue equations.
In this paper, we use physics knowledge to decompose the complex regression task of predicting the high-dimensional eigenvectors into simpler sub-tasks.
We demonstrate the efficacy of such physics-guided problem decomposition for the case of the Schr\"{o}dinger's Equation in Quantum Mechanics.
- Score: 8.80823317679047
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given their ability to effectively learn non-linear mappings and perform fast
inference, deep neural networks (NNs) have been proposed as a viable
alternative to traditional simulation-driven approaches for solving
high-dimensional eigenvalue equations (HDEs), which are the foundation for many
scientific applications. Unfortunately, for the learned models in these
scientific applications to achieve generalization, a large, diverse, and
preferably annotated dataset is typically needed and is computationally
expensive to obtain. Furthermore, the learned models tend to be memory- and
compute-intensive primarily due to the size of the output layer. While
generalization, especially extrapolation, with scarce data has been attempted
by imposing physical constraints in the form of physics loss, the problem of
model scalability has remained.
In this paper, we alleviate the compute bottleneck in the output layer by
using physics knowledge to decompose the complex regression task of predicting
the high-dimensional eigenvectors into multiple simpler sub-tasks, each of
which is learned by a simple "expert" network. We call the resulting
architecture of specialized experts Physics-Guided Mixture-of-Experts (PG-MoE).
We demonstrate the efficacy of such physics-guided problem decomposition for
the case of the Schr\"{o}dinger's Equation in Quantum Mechanics. Our proposed
PG-MoE model predicts the ground-state solution, i.e., the eigenvector that
corresponds to the smallest possible eigenvalue. The model is 150x smaller than
the network trained to learn the complex task while being competitive in
generalization. To improve the generalization of the PG-MoE, we also employ a
physics-guided loss function based on variational energy, which by quantum
mechanics principles is minimized iff the output is the ground-state solution.
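As a rough illustration of the two ideas above, the sketch below pairs a mixture of small expert networks, each predicting one segment of the discretized eigenvector, with a Rayleigh-quotient energy loss. The layer sizes, the even expert partitioning, and the toy Hamiltonian are illustrative assumptions, not the paper's exact architecture.

```python
# Hypothetical PG-MoE-style sketch in PyTorch: several small experts
# replace one huge output layer; a Rayleigh-quotient loss supplies the
# variational-energy physics guidance. All sizes are assumptions.
import torch
import torch.nn as nn

class Expert(nn.Module):
    """Small MLP predicting one segment of the eigenvector."""
    def __init__(self, in_dim, seg_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, seg_dim),
        )
    def forward(self, x):
        return self.net(x)

class PGMoE(nn.Module):
    """Concatenates segments produced by specialized experts."""
    def __init__(self, in_dim, out_dim, n_experts=8):
        super().__init__()
        assert out_dim % n_experts == 0
        seg = out_dim // n_experts
        self.experts = nn.ModuleList(
            Expert(in_dim, seg) for _ in range(n_experts))
    def forward(self, x):
        return torch.cat([e(x) for e in self.experts], dim=-1)

def variational_energy(psi, H):
    """Rayleigh quotient <psi|H|psi>/<psi|psi>; by the variational
    principle it is minimized iff psi is the ground state of H."""
    num = torch.einsum('bi,ij,bj->b', psi, H, psi)
    den = torch.einsum('bi,bi->b', psi, psi)
    return (num / den).mean()

# Usage with a toy discretized Hamiltonian (1D kinetic-energy stencil,
# potential omitted for brevity):
n = 256
H = (torch.diag(torch.full((n,), 2.0))
     - torch.diag(torch.ones(n - 1), 1)
     - torch.diag(torch.ones(n - 1), -1))   # -d^2/dx^2 stencil
model = PGMoE(in_dim=16, out_dim=n)
inputs = torch.randn(4, 16)                 # stand-in potential parameters
psi = model(inputs)
loss = variational_energy(psi, H)           # physics-guided loss term
loss.backward()
```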
Related papers
- DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [63.5925701087252]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis.
To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers.
Empirically, DimOL models achieve up to a 48% performance gain on PDE datasets.
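A purely illustrative reading of such a product-term layer (the paper's actual ProdLayer may differ): augment a pointwise linear map with learned multiplicative channel interactions, echoing how physical quantities combine multiplicatively in dimensional analysis.

```python
# Illustrative sketch only; the names and design are assumptions, not
# DimOL's published layer.
import torch
import torch.nn as nn

class ProdLayerSketch(nn.Module):
    def __init__(self, channels, n_pairs=4):
        super().__init__()
        self.linear = nn.Linear(channels, channels)
        # Two learned projections whose elementwise product forms
        # multiplicative feature interactions.
        self.a = nn.Linear(channels, n_pairs)
        self.b = nn.Linear(channels, n_pairs)
        self.mix = nn.Linear(n_pairs, channels)

    def forward(self, x):                 # x: (..., channels)
        return self.linear(x) + self.mix(self.a(x) * self.b(x))

layer = ProdLayerSketch(channels=32)
out = layer(torch.randn(8, 64, 32))       # e.g., tokens from a PDE solver
```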
arXiv Detail & Related papers (2024-10-08T10:48:50Z)
- Physics-Informed Graph-Mesh Networks for PDEs: A hybrid approach for complex problems [0.24578723416255746]
We introduce a hybrid approach combining physics-informed graph neural networks with numerical kernels from finite elements.
After studying the theoretical properties of our model, we apply it to complex geometries in two and three dimensions.
Our choices are supported by an ablation study, and we evaluate the generalisation capacity of the proposed approach.
arXiv Detail & Related papers (2024-09-25T07:52:29Z)
- Discovering Interpretable Physical Models using Symbolic Regression and Discrete Exterior Calculus [55.2480439325792]
We propose a framework that combines Symbolic Regression (SR) and Discrete Exterior Calculus (DEC) for the automated discovery of physical models.
DEC provides building blocks for the discrete analogue of field theories, which are beyond the state-of-the-art applications of SR to physical problems.
We prove the effectiveness of our methodology by re-discovering three models of Continuum Physics from synthetic experimental data.
arXiv Detail & Related papers (2023-10-10T13:23:05Z)
- NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning task into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
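One way to picture such a spatial decomposition (a simplification, not the paper's full method): split a fine grid into staggered coarse sub-grids so each can be handled by a smaller network, then interleave the results back.

```python
# Minimal staggered-decomposition sketch; the stride s and shapes are
# illustrative assumptions.
import torch

def stagger(field, s=2):
    """(B, H, W) -> (B, s*s, H//s, W//s) staggered coarse views."""
    views = [field[:, i::s, j::s] for i in range(s) for j in range(s)]
    return torch.stack(views, dim=1)

def unstagger(views, s=2):
    """Inverse of stagger: interleave coarse fields into a fine one."""
    B, _, h, w = views.shape
    out = torch.zeros(B, h * s, w * s, dtype=views.dtype)
    k = 0
    for i in range(s):
        for j in range(s):
            out[:, i::s, j::s] = views[:, k]
            k += 1
    return out

x = torch.randn(4, 64, 64)
assert torch.equal(unstagger(stagger(x)), x)  # lossless round trip
```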
arXiv Detail & Related papers (2023-02-20T19:36:52Z)
- Equivariant Graph Mechanics Networks with Constraints [83.38709956935095]
We propose the Graph Mechanics Network (GMN), which is efficient, equivariant, and constraint-aware.
GMN represents, by generalized coordinates, the forward kinematics information (positions and velocities) of a structural object.
Extensive experiments support the advantages of GMN compared to the state-of-the-art GNNs in terms of prediction accuracy, constraint satisfaction and data efficiency.
arXiv Detail & Related papers (2022-03-12T14:22:14Z)
- Physics Informed RNN-DCT Networks for Time-Dependent Partial Differential Equations [62.81701992551728]
We present a physics-informed framework for solving time-dependent partial differential equations.
Our model utilizes discrete cosine transforms to encode spatial frequencies and recurrent neural networks to process the temporal evolution.
We show experimental results on the Taylor-Green vortex solution to the Navier-Stokes equations.
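A hedged sketch of this encode-then-evolve pattern; the truncation size, the GRU, and the single-step head are assumptions rather than the paper's architecture.

```python
# Compress each spatial frame with a truncated 2D DCT, then advance the
# coefficients with a recurrent network. Illustrative sketch only.
import numpy as np
import torch
import torch.nn as nn
from scipy.fft import dctn   # type-II DCT over both spatial axes

def encode_frames(frames, k=8):
    """frames: (T, H, W) ndarray -> (T, k*k) low-frequency DCT coeffs."""
    coeffs = np.stack([dctn(f, norm='ortho')[:k, :k].ravel()
                       for f in frames])
    return torch.from_numpy(coeffs).float()

class CoeffRNN(nn.Module):
    """Predicts the next step's DCT coefficients from past ones."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, dim)
    def forward(self, seq):               # seq: (B, T, dim)
        h, _ = self.rnn(seq)
        return self.head(h[:, -1])        # next-step coefficients

frames = np.random.randn(16, 64, 64)      # stand-in for a flow field
z = encode_frames(frames)                 # (16, 64)
model = CoeffRNN(dim=z.shape[-1])
pred = model(z.unsqueeze(0))              # forecast next coefficients
```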
arXiv Detail & Related papers (2022-02-24T20:46:52Z)
- Equivariant vector field network for many-body system modeling [65.22203086172019]
The Equivariant Vector Field Network (EVFN) is built on a novel equivariant basis and the associated scalarization and vectorization layers.
We evaluate our method on predicting trajectories of simulated Newton mechanics systems with both full and partially observed data.
arXiv Detail & Related papers (2021-10-26T14:26:25Z)
- Applying physics-based loss functions to neural networks for improved generalizability in mechanics problems [3.655021726150368]
Physics-Informed Machine Learning (PIML) has gained momentum over the last five years as scientists and researchers utilize the benefits afforded by advances in machine learning.
In this work, a new approach to utilizing PIML is discussed that deals with the use of physics-based loss functions.
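The general pattern here is to add a penalty on the residual of a governing equation to the ordinary data-fit loss. A generic sketch follows, with a stand-in 1D residual u'' + f = 0 in place of whatever mechanics PDE applies.

```python
# Generic physics-based loss sketch: data-fit MSE plus a PDE-residual
# penalty at collocation points. The residual shown is an assumption
# for illustration, not this paper's specific mechanics problem.
import torch

def physics_informed_loss(model, x_data, u_data, x_phys, f, w=1.0):
    # Data-fit term on labeled points.
    data_loss = torch.mean((model(x_data) - u_data) ** 2)
    # Physics term: penalize the residual u'' + f = 0 via autograd.
    x = x_phys.requires_grad_(True)
    u = model(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    residual = d2u + f(x)
    return data_loss + w * torch.mean(residual ** 2)
```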
arXiv Detail & Related papers (2021-04-30T20:31:09Z)
- Kohn-Sham equations as regularizer: building prior knowledge into machine-learned physics [13.572347341147282]
We show that solving the Kohn-Sham equations when training neural networks for the exchange-correlation functional provides an implicit regularization that greatly improves generalization.
Our models also generalize to unseen types of molecules and overcome self-interaction error.
arXiv Detail & Related papers (2020-09-17T23:06:39Z)
- Physics Informed Deep Learning for Transport in Porous Media. Buckley Leverett Problem [0.0]
We present a new hybrid physics-based machine-learning approach to reservoir modeling.
The methodology relies on a series of deep adversarial neural network architectures with physics-based regularization.
The proposed methodology is a simple and elegant way to instill physical knowledge into machine-learning algorithms.
arXiv Detail & Related papers (2020-01-15T08:20:11Z)