Deep Learning and Symbolic Regression for Discovering Parametric
Equations
- URL: http://arxiv.org/abs/2207.00529v2
- Date: Sun, 28 May 2023 15:10:44 GMT
- Title: Deep Learning and Symbolic Regression for Discovering Parametric
Equations
- Authors: Michael Zhang, Samuel Kim, Peter Y. Lu, Marin Soljačić
- Abstract summary: We propose a neural network architecture to extend symbolic regression to parametric systems.
We demonstrate our method on various analytic expressions, ODEs, and PDEs with varying coefficients.
We integrate our architecture with convolutional neural networks to analyze 1D images of varying spring systems.
- Score: 5.103519975854401
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Symbolic regression is a machine learning technique that can learn the
governing formulas of data and thus has the potential to transform scientific
discovery. However, symbolic regression is still limited in the complexity and
dimensionality of the systems that it can analyze. Deep learning, on the other
hand, has transformed machine learning through its ability to analyze extremely
complex and high-dimensional datasets. We propose a neural network architecture
that extends symbolic regression to parametric systems, where some coefficients
may vary but the structure of the underlying governing equation remains constant.
We demonstrate our method on various analytic expressions, ODEs, and PDEs with
varying coefficients and show that it extrapolates well outside of the training
domain. The neural network-based architecture can also integrate with other
deep learning architectures so that it can analyze high-dimensional data while
being trained end-to-end. As a demonstration, we integrate our architecture with
convolutional neural networks to analyze 1D images of varying spring systems.
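As a rough illustration of the core idea, here is a minimal sketch (with assumed architecture details, not the authors' implementation): a small hypernetwork maps a varying parameter p to the coefficients of a fixed, hypothetical symbolic basis, so the equation's structure is shared while its coefficients vary.

```python
import torch
import torch.nn as nn

class ParametricSymbolicModel(nn.Module):
    """Sketch: fixed symbolic structure, parameter-dependent coefficients."""
    def __init__(self):
        super().__init__()
        # Hypothetical fixed basis [x, x^2, sin(x)]; in symbolic regression
        # the structure itself would also be discovered.
        self.basis = lambda x: torch.stack([x, x**2, torch.sin(x)], dim=-1)
        # Hypernetwork: varying parameter p -> 3 basis coefficients.
        self.coeff_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                                       nn.Linear(32, 3))

    def forward(self, x, p):
        coeffs = self.coeff_net(p)               # (batch, 3), varies with p
        return (coeffs * self.basis(x)).sum(-1)  # a(p)*x + b(p)*x^2 + c(p)*sin(x)

model = ParametricSymbolicModel()
x = torch.randn(64)
p = torch.rand(64, 1)                            # the varying coefficient driver
y = 2.0 * p.squeeze(-1) * x + torch.sin(x)       # toy target: a(p)=2p, c=1
loss = ((model(x, p) - y) ** 2).mean()
loss.backward()                                  # differentiable end-to-end
```

Because every piece is differentiable, such a model can be composed with, e.g., a convolutional encoder that produces p from images, matching the end-to-end training described in the abstract.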
Related papers
- Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
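As a hedged sketch of the reduction the summary alludes to (illustrative only; NeuRLP itself is a relaxed, learnable solver), discretizing a linear ODE with forward Euler yields linear equality constraints that an off-the-shelf linear program can satisfy:

```python
import numpy as np
from scipy.optimize import linprog

n, dt, a = 50, 0.05, -2.0             # grid size, step, ODE: dy/dt = a*y
A_eq = np.zeros((n, n))
b_eq = np.zeros(n)
A_eq[0, 0] = 1.0; b_eq[0] = 1.0       # initial condition y(0) = 1
for i in range(1, n):                 # Euler: y[i] - (1 + a*dt)*y[i-1] = 0
    A_eq[i, i] = 1.0
    A_eq[i, i - 1] = -(1.0 + a * dt)
res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * n)
print(res.x[:5])                      # approximates exp(a*t) on the grid
```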
arXiv Detail & Related papers (2024-02-20T15:23:24Z)
- Unsupervised Learning of Invariance Transformations [105.54048699217668]
We develop an algorithmic framework for finding approximate graph automorphisms.
We discuss how this framework can be used to find approximate automorphisms in weighted graphs in general.
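For concreteness, a minimal sketch of the exact notion being relaxed: a permutation P is an automorphism of a graph with adjacency matrix A when P A Pᵀ = A; an approximate automorphism makes the residual norm small rather than zero.

```python
import numpy as np

A = np.array([[0, 1, 0, 1],           # 4-cycle: 0-1-2-3-0
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
perm = [1, 2, 3, 0]                   # rotate the cycle by one vertex
P = np.eye(4)[perm]                   # permutation matrix
residual = np.linalg.norm(P @ A @ P.T - A)
print(residual)                       # 0.0: an exact automorphism
```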
arXiv Detail & Related papers (2023-07-24T17:03:28Z)
- Persistence-based operators in machine learning [62.997667081978825]
We introduce a class of persistence-based neural network layers.
Persistence-based layers allow the users to easily inject knowledge about symmetries respected by the data, are equipped with learnable weights, and can be composed with state-of-the-art neural architectures.
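As a hedged sketch of one common design for such a layer (a PersLay-style vectorization under assumed shapes, not necessarily the paper's construction): embed each (birth, death) point of a persistence diagram with learnable Gaussian centers and sum, which is trainable and invariant to the order of the points.

```python
import torch
import torch.nn as nn

class PersistenceLayer(nn.Module):
    """Learnable, permutation-invariant embedding of a persistence diagram."""
    def __init__(self, n_centers=8):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(n_centers, 2))  # learnable
        self.scale = nn.Parameter(torch.ones(n_centers))

    def forward(self, diagram):                   # diagram: (n_points, 2)
        d2 = ((diagram[:, None, :] - self.centers[None]) ** 2).sum(-1)
        feats = torch.exp(-self.scale * d2)       # (n_points, n_centers)
        return feats.sum(0)                       # order of points is irrelevant

layer = PersistenceLayer()
diagram = torch.tensor([[0.1, 0.9], [0.3, 0.4]])  # toy (birth, death) pairs
print(layer(diagram).shape)                       # torch.Size([8])
```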
arXiv Detail & Related papers (2022-12-28T18:03:41Z)
- Polynomial-Spline Neural Networks with Exact Integrals [0.0]
We develop a novel neural network architecture that combines a mixture-of-experts model with free knot B1-spline basis functions.
Our architecture exhibits both $h$- and $p$- refinement for regression problems at the convergence rates expected from approximation theory.
We demonstrate the success of our network on a range of regression and variational problems that illustrate the consistency and exact integrability of our network architecture.
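For intuition, a minimal sketch of the B1-spline ingredient (fixed knots here, whereas the paper's mixture-of-experts network also learns knot locations): hat-function bases give a piecewise-linear least-squares fit.

```python
import numpy as np

def hat_basis(x, knots):
    """Evaluate B1 (piecewise-linear 'hat') basis functions at points x."""
    B = np.zeros((len(x), len(knots)))
    for j, k in enumerate(knots):
        left = knots[j - 1] if j > 0 else k - 1.0
        right = knots[j + 1] if j + 1 < len(knots) else k + 1.0
        B[:, j] = np.clip(np.minimum((x - left) / (k - left),
                                     (right - x) / (right - k)), 0.0, None)
    return B

x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x)
knots = np.linspace(0, 1, 9)
coeffs, *_ = np.linalg.lstsq(hat_basis(x, knots), y, rcond=None)
print(np.abs(hat_basis(x, knots) @ coeffs - y).max())  # fit error of the spline
```

Adding knots (h-refinement) or raising the spline order (p-refinement) drives this error down at the rates approximation theory predicts.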
arXiv Detail & Related papers (2021-10-26T22:12:37Z)
- Discrete-Valued Neural Communication [85.3675647398994]
We show that restricting the transmitted information among components to discrete representations is a beneficial bottleneck.
Even though individuals have different understandings of what a "cat" is based on their specific experiences, the shared discrete token makes it possible for communication among individuals to be unimpeded by individual differences in internal representation.
We extend the quantization mechanism from the Vector-Quantized Variational Autoencoder to multi-headed discretization with shared codebooks and use it for discrete-valued neural communication.
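A hedged sketch of the multi-headed discretization step under assumed shapes (not the paper's code): split a vector into heads, snap each head to its nearest entry in a codebook shared across heads, and transmit only the discrete indices.

```python
import torch

def multi_head_quantize(z, codebook, n_heads):
    """z: (batch, dim); codebook: (K, dim // n_heads), shared by all heads."""
    b, d = z.shape
    heads = z.view(b, n_heads, d // n_heads)                # (b, H, d/H)
    dists = torch.cdist(heads, codebook.expand(b, -1, -1))  # (b, H, K)
    idx = dists.argmin(-1)                                  # discrete tokens
    return codebook[idx].view(b, d), idx                    # quantized z, codes

codebook = torch.randn(16, 4)        # K=16 shared entries of size 4
z = torch.randn(2, 8)                # 8-dim vectors -> 2 heads of size 4
zq, idx = multi_head_quantize(z, codebook, n_heads=2)
print(idx)                           # the shared discrete tokens transmitted
```

(A trained system would also need the straight-through estimator and codebook losses from the VQ-VAE to learn through the non-differentiable argmin.)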
arXiv Detail & Related papers (2021-07-06T03:09:25Z)
- A deep learning theory for neural networks grounded in physics [2.132096006921048]
We argue that building large, fast and efficient neural networks on neuromorphic architectures requires rethinking the algorithms to implement and train them.
Our framework applies to a very broad class of models, namely systems whose state or dynamics are described by variational equations.
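As a generic toy example of the model class described (state defined by an energy/variational principle; not the paper's framework), the network's state below is whatever minimizes an energy function, found by plain gradient descent on the state itself.

```python
import torch

torch.manual_seed(0)
W = torch.randn(5, 5)
W = 0.05 * (W + W.T)                  # weak symmetric couplings (keeps E convex)
b = torch.randn(5)                    # external input

def energy(s):                        # Hopfield-like energy function
    return 0.5 * (s ** 2).sum() - 0.5 * s @ W @ s - b @ s

s = torch.zeros(5, requires_grad=True)
opt = torch.optim.SGD([s], lr=0.1)
for _ in range(200):                  # relax the state to equilibrium
    opt.zero_grad()
    energy(s).backward()
    opt.step()
print(s.detach())                     # equilibrium state of the system
```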
arXiv Detail & Related papers (2021-03-18T02:12:48Z)
- PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z)
- Partial Differential Equations is All You Need for Generating Neural Architectures -- A Theory for Physical Artificial Intelligence Systems [29.667065357274385]
We generalize the reaction-diffusion equation from statistical physics, the Schrödinger equation from quantum mechanics, and the Helmholtz equation from paraxial optics.
We use the finite difference method to discretize the NPDE and obtain numerical solutions.
Basic building blocks of deep neural network architectures, including multi-layer perceptrons, convolutional neural networks, and recurrent neural networks, are generated.
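A minimal illustration of why finite-difference discretization generates convolutional building blocks (a toy 1D heat equation, not the paper's NPDE): one explicit Euler step of u_t = D·u_xx is exactly a fixed 3-tap convolution.

```python
import numpy as np

D, dt, dx = 0.1, 0.01, 0.1                            # D*dt/dx^2 = 0.1 (stable)
u = np.exp(-np.linspace(-1, 1, 21) ** 2 / 0.1)        # initial bump
kernel = np.array([1.0, -2.0, 1.0]) * D * dt / dx**2  # discrete Laplacian
for _ in range(100):
    u = u + np.convolve(u, kernel, mode="same")       # one Euler step = conv
print(u.round(3))                                     # diffused profile
```

Making the kernel weights learnable instead of fixed recovers an ordinary convolutional layer.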
arXiv Detail & Related papers (2021-03-10T00:05:46Z)
- Deep Representational Similarity Learning for analyzing neural signatures in task-based fMRI dataset [81.02949933048332]
This paper develops Deep Representational Similarity Learning (DRSL), a deep extension of Representational Similarity Analysis (RSA).
DRSL is appropriate for analyzing similarities between various cognitive tasks in fMRI datasets with a large number of subjects.
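For context, a hedged sketch of classical RSA, the method DRSL deepens (synthetic data, not DRSL itself): build a representational dissimilarity matrix (RDM) per response set, then compare RDMs across subjects, tasks, or models.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
resp_a = rng.normal(size=(10, 50))                  # 10 conditions x 50 voxels
resp_b = resp_a + 0.3 * rng.normal(size=(10, 50))   # noisier second subject
rdm_a = pdist(resp_a, metric="correlation")         # condensed RDM
rdm_b = pdist(resp_b, metric="correlation")
rho, _ = spearmanr(rdm_a, rdm_b)                    # similarity of similarities
print(rho)
```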
arXiv Detail & Related papers (2020-09-28T18:30:14Z)
- Learning Variational Data Assimilation Models and Solvers [34.22350850350653]
We introduce end-to-end neural network architectures for data assimilation.
A key feature of the proposed end-to-end learning architecture is that we may train the NN models using both supervised and unsupervised strategies.
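A hedged sketch of the variational cost at the heart of such systems (a generic 3D-Var-style toy with a made-up observation operator H, not the paper's learned solver): find the state that balances a background guess against sparse observations.

```python
import torch

H = torch.zeros(3, 10)
H[0, 1] = H[1, 5] = H[2, 8] = 1.0       # observe components 1, 5, 8 of 10
x_b = torch.zeros(10)                   # background (prior) state
y = torch.tensor([1.0, -0.5, 2.0])      # observations
x = x_b.clone().requires_grad_(True)
opt = torch.optim.Adam([x], lr=0.1)
for _ in range(300):
    opt.zero_grad()
    cost = ((x - x_b) ** 2).sum() + 10.0 * ((H @ x - y) ** 2).sum()
    cost.backward()
    opt.step()
print(x.detach())                       # pulled toward obs at indices 1, 5, 8
```

Per the paper's title, both the assimilation model and the minimization scheme itself can be represented as neural networks and trained jointly.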
arXiv Detail & Related papers (2020-07-25T14:28:48Z)
- Teaching Recurrent Neural Networks to Modify Chaotic Memories by Example [14.91507266777207]
We show that a recurrent neural network can learn to modify its representation of complex information using only examples.
We provide a mechanism for how these computations are learned, and demonstrate that a single network can simultaneously learn multiple computations.
arXiv Detail & Related papers (2020-05-03T20:51:46Z)