Interpretable Neural PDE Solvers using Symbolic Frameworks
- URL: http://arxiv.org/abs/2310.20463v2
- Date: Fri, 10 Nov 2023 12:15:33 GMT
- Title: Interpretable Neural PDE Solvers using Symbolic Frameworks
- Authors: Yolanne Yi Ran Lee
- Abstract summary: Partial differential equations (PDEs) are ubiquitous in the world around us, modelling phenomena from heat and sound to quantum systems.
Recent advances in deep learning have resulted in the development of powerful neural solvers.
However, a significant challenge remains in their interpretability.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Partial differential equations (PDEs) are ubiquitous in the world around us,
modelling phenomena from heat and sound to quantum systems. Recent advances in
deep learning have resulted in the development of powerful neural solvers;
however, while these methods have demonstrated state-of-the-art performance in
both accuracy and computational efficiency, a significant challenge remains in
their interpretability. Most existing methodologies prioritize predictive
accuracy over clarity in the underlying mechanisms driving the model's
decisions. Interpretability is crucial for trustworthiness and broader
applicability, especially in scientific and engineering domains where neural
PDE solvers might see the most impact. In this context, a notable gap in
current research is the integration of symbolic frameworks (such as symbolic
regression) into these solvers. Symbolic frameworks have the potential to
distill complex neural operations into human-readable mathematical expressions,
bridging the divide between black-box predictions and solutions.
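As a concrete illustration of this idea, the sketch below distills a black-box model's input-output map into a sparse, human-readable expression by least-squares regression over a library of candidate terms (a SINDy-style stand-in for full symbolic regression; the toy neural_solver function and the term library are hypothetical, not from the paper):
```python
import numpy as np

# Stand-in for a trained neural solver's input-output map (illustrative only):
# here the "black box" secretly computes 0.5*sin(x) + 0.2*x**2.
def neural_solver(x):
    return 0.5 * np.sin(x) + 0.2 * x**2

# Candidate library of human-readable terms.
library = {
    "1":      lambda x: np.ones_like(x),
    "x":      lambda x: x,
    "x^2":    lambda x: x**2,
    "sin(x)": lambda x: np.sin(x),
    "exp(x)": lambda x: np.exp(x),
}

x = np.linspace(-2.0, 2.0, 200)
y = neural_solver(x)

# Least-squares fit over the library, then threshold small coefficients
# to keep the recovered expression sparse and readable.
Theta = np.column_stack([f(x) for f in library.values()])
coeffs, *_ = np.linalg.lstsq(Theta, y, rcond=None)
coeffs[np.abs(coeffs) < 1e-3] = 0.0

terms = [f"{c:+.3f}*{name}" for name, c in zip(library, coeffs) if c != 0.0]
print("distilled expression: y ~", " ".join(terms))
# -> distilled expression: y ~ +0.200*x^2 +0.500*sin(x)
```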
Related papers
- DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [63.5925701087252]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis.
To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers.
Empirically, DimOL models achieve up to 48% performance gain within the PDE datasets.
arXiv Detail & Related papers (2024-10-08T10:48:50Z)
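The summary above does not specify ProdLayer's exact form; the sketch below is one plausible reading, in which a layer augments its channel features with learned pairwise products so the network can form the product terms that dimensional analysis suggests. All names and sizes are illustrative.
```python
import torch
import torch.nn as nn

class ProdLayer(nn.Module):
    """Illustrative sketch only: the DimOL paper's actual ProdLayer may differ.
    Augments channel features with products of learned channel mixtures,
    letting the network form the power-law / product terms that dimensional
    analysis suggests (e.g. velocity * gradient)."""
    def __init__(self, channels: int, n_products: int = 4):
        super().__init__()
        self.a = nn.Linear(channels, n_products, bias=False)
        self.b = nn.Linear(channels, n_products, bias=False)
        self.out = nn.Linear(channels + n_products, channels)

    def forward(self, x):                     # x: (batch, points, channels)
        prod = self.a(x) * self.b(x)          # pairwise product features
        return self.out(torch.cat([x, prod], dim=-1))

layer = ProdLayer(channels=32)
x = torch.randn(8, 64, 32)                    # batch of 8 fields, 64 grid points
print(layer(x).shape)                         # torch.Size([8, 64, 32])
```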
- Disentangled Representation Learning for Parametric Partial Differential Equations [31.240283037552427]
We propose a new paradigm for learning disentangled representations from neural operator parameters.
DisentangO is a novel hyper-neural operator architecture designed to unveil and disentangle the latent physical factors of variation embedded within the black-box neural operator parameters.
We show that DisentangO effectively extracts meaningful and interpretable latent features, bridging the divide between predictive performance and physical understanding in neural operator frameworks.
arXiv Detail & Related papers (2024-10-03T01:40:39Z)
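The sketch below illustrates the general hyper-network idea behind such architectures, not DisentangO itself: a low-dimensional latent vector of physical factors generates the weights of a small operator network, so that (ideally) each latent coordinate tracks one physical factor. The class name and dimensions are hypothetical.
```python
import torch
import torch.nn as nn

class HyperOperator(nn.Module):
    """Minimal hyper-network sketch (not DisentangO itself): a latent z of
    physical factors generates the weights of a small per-task operator, so
    varying one z-coordinate should vary one physical factor if the latent
    space is disentangled."""
    def __init__(self, z_dim=3, width=16):
        super().__init__()
        self.width = width
        # hyper-network: latent factors -> operator weights and biases
        self.gen = nn.Linear(z_dim, width * width + width)

    def forward(self, z, u):   # z: (z_dim,), u: (batch, width) field features
        params = self.gen(z)
        W = params[: self.width**2].view(self.width, self.width)
        b = params[self.width**2:]
        return torch.tanh(u @ W.T + b)

op = HyperOperator()
z = torch.randn(3)             # latent physical factors (e.g. diffusivity)
u = torch.randn(10, 16)
print(op(z, u).shape)          # torch.Size([10, 16])
```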
- Adversarial Learning for Neural PDE Solvers with Sparse Data [4.226449585713182]
This study introduces a universal learning strategy for neural network PDE solvers, named Systematic Model Augmentation for Robust Training (SMART).
By focusing on challenging and improving the model's weaknesses, SMART reduces generalization error during training under data-scarce conditions.
arXiv Detail & Related papers (2024-09-04T04:18:25Z)
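The summary does not detail SMART's augmentation scheme; the sketch below shows the generic idea under that reading: during training on scarce data, generate extra samples by perturbing inputs in the direction that most increases the current loss (FGSM-style), so the model repeatedly trains on its own weak spots. Model, data, and step size are all illustrative.
```python
import torch
import torch.nn as nn

# Illustrative sketch only: generic adversarial augmentation for a
# data-scarce regression problem, not the paper's exact SMART procedure.
model = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(128, 2)                        # sparse training inputs (t, x)
y = torch.sin(3.0 * x[:, :1]) * x[:, 1:]      # stand-in solution data

for step in range(100):
    # perturb inputs in the direction that most increases the current loss
    x_req = x.clone().requires_grad_(True)
    grad, = torch.autograd.grad(loss_fn(model(x_req), y), x_req)
    x_adv = x + 0.01 * grad.sign()            # hard examples near weak spots

    # train on both the original and the adversarially augmented points
    opt.zero_grad()
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    opt.step()
```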
- Adapting Physics-Informed Neural Networks for Bifurcation Detection in Ecological Migration Models [0.16442870218029523]
In this study, we explore the application of Physics-Informed Neural Networks (PINNs) to the analysis of bifurcation phenomena in ecological migration models.
By integrating the fundamental principles of diffusion-advection-reaction equations with deep learning techniques, we address the complexities of species migration dynamics.
arXiv Detail & Related papers (2024-09-01T08:00:31Z)
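A minimal sketch of the PINN ingredient such a study relies on: the residual of a 1D diffusion-advection-reaction equation, u_t = D u_xx - v u_x + r u(1 - u), evaluated with automatic differentiation at random collocation points. Coefficients and network size are illustrative, not the paper's setup.
```python
import torch
import torch.nn as nn

# Minimal PINN residual sketch for a 1D diffusion-advection-reaction equation
#   u_t = D * u_xx - v * u_x + r * u * (1 - u)
# (coefficients and architecture are illustrative).
D, v, r = 0.1, 0.5, 1.0
net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 32),
                    nn.Tanh(), nn.Linear(32, 1))

def pde_residual(t, x):
    u = net(torch.cat([t, x], dim=1))
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t - D * u_xx + v * u_x - r * u * (1 - u)

t = torch.rand(256, 1, requires_grad=True)        # collocation points
x = torch.rand(256, 1, requires_grad=True)
physics_loss = pde_residual(t, x).pow(2).mean()   # add BC/IC losses in practice
physics_loss.backward()
```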
- A Novel Neural-symbolic System under Statistical Relational Learning [50.747658038910565]
We propose a general bi-level probabilistic graphical reasoning framework called GBPGR.
In GBPGR, the results of symbolic reasoning are utilized to refine and correct the predictions made by the deep learning models.
Our approach achieves high performance and exhibits effective generalization in both transductive and inductive tasks.
arXiv Detail & Related papers (2023-09-16T09:15:37Z)
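A toy sketch of symbolic reasoning refining neural predictions, far simpler than GBPGR's bi-level formulation: a hand-written rule masks out label assignments that violate domain knowledge, and the network's probabilities are renormalized over the consistent ones. Labels, rule, and context are hypothetical.
```python
import torch

# Illustrative only (not GBPGR's actual formulation): symbolic rules prune
# label assignments that violate domain knowledge, then the neural
# predictions are renormalized over the consistent ones.
labels = ["cat", "dog", "car"]
probs = torch.tensor([0.2, 0.3, 0.5])          # neural model's prediction

def consistent(label, context):
    # hypothetical rule: indoor scenes contain no vehicles
    return not (context == "indoor" and label == "car")

mask = torch.tensor([consistent(l, "indoor") for l in labels],
                    dtype=torch.float32)
refined = probs * mask
refined = refined / refined.sum()               # renormalize over valid labels
print(dict(zip(labels, refined.tolist())))      # {'cat': 0.4, 'dog': 0.6, 'car': 0.0}
```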
- On Robust Numerical Solver for ODE via Self-Attention Mechanism [82.95493796476767]
We explore training efficient and robust AI-enhanced numerical solvers with a small data size by mitigating intrinsic noise disturbances.
We first analyze the ability of the self-attention mechanism to regulate noise in supervised learning and then propose a simple-yet-effective numerical solver, Attr, which introduces an additive self-attention mechanism to the numerical solution of differential equations.
arXiv Detail & Related papers (2023-02-05T01:39:21Z)
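Assuming "additive self-attention" means a correction term computed by self-attention over the rolled-out trajectory (the paper's exact design may differ), a minimal sketch looks like this: a plain Euler rollout followed by an untrained attention layer whose output is added to the trajectory.
```python
import torch
import torch.nn as nn

# Sketch of the general idea only: roll out a plain Euler solver, then add
# a self-attention correction computed over the whole trajectory, so errors
# at one step can be adjusted using context from the others.
def f(y):                      # ODE dy/dt = -y (illustrative)
    return -y

steps, dt, d = 50, 0.1, 8
y = torch.randn(1, d)
traj = [y]
for _ in range(steps):
    y = y + dt * f(y)          # Euler step
    traj.append(y)
traj = torch.stack(traj, dim=1)             # (batch=1, steps+1, d)

attn = nn.MultiheadAttention(embed_dim=d, num_heads=2, batch_first=True)
correction, _ = attn(traj, traj, traj)      # self-attention over time steps
refined = traj + 0.1 * correction           # additive correction (untrained)
print(refined.shape)                        # torch.Size([1, 51, 8])
```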
- Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
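A one-step message-passing sketch on a 1D grid graph, much simpler than the paper's solver but enough to see why such layers representationally contain finite differences: with suitable weights, the learned per-edge messages and node update reduce to a difference stencil.
```python
import torch
import torch.nn as nn

# Minimal one-step message passing on a 1D grid (illustrative). Each node
# aggregates learned messages from its two neighbors; hand-set weights in
# msg_net/upd_net could reproduce a finite-difference stencil.
n = 64
u = torch.randn(n, 1)                          # solution values at grid nodes
src = torch.cat([torch.arange(1, n), torch.arange(0, n - 1)])   # edge sources
dst = torch.cat([torch.arange(0, n - 1), torch.arange(1, n)])   # edge targets

msg_net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
upd_net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

messages = msg_net(torch.cat([u[src], u[dst]], dim=1))   # one message per edge
agg = torch.zeros(n, 1).index_add_(0, dst, messages)     # sum messages per node
u_next = upd_net(torch.cat([u, agg], dim=1))             # node update
print(u_next.shape)                                      # torch.Size([64, 1])
```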
- Accurate and Reliable Forecasting using Stochastic Differential Equations [48.21369419647511]
It is critical yet challenging for deep learning models to properly characterize uncertainty that is pervasive in real-world environments.
This paper develops SDE-HNN to characterize the interplay between the predictive mean and variance of heteroscedastic neural networks (HNNs) for accurate and reliable regression.
Experiments on the challenging datasets show that our method significantly outperforms the state-of-the-art baselines in terms of both predictive performance and uncertainty quantification.
arXiv Detail & Related papers (2021-03-28T04:18:11Z)
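The sketch below shows only the plain heteroscedastic baseline that SDE-HNN builds on (the paper's SDE-based coupling of mean and variance is not reproduced here): a network predicts a mean and an input-dependent variance, trained with the Gaussian negative log-likelihood. Data and architecture are illustrative.
```python
import torch
import torch.nn as nn

# Baseline heteroscedastic regression sketch: the network outputs a mean
# and a variance head, trained jointly with the Gaussian NLL.
net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
nll = nn.GaussianNLLLoss()

x = torch.linspace(-3, 3, 256).unsqueeze(1)
y = torch.sin(x) + 0.3 * torch.abs(x) * torch.randn_like(x)  # noise grows with |x|

for step in range(200):
    out = net(x)
    mean = out[:, :1]
    var = nn.functional.softplus(out[:, 1:]) + 1e-6   # keep variance positive
    loss = nll(mean, y, var)      # NLL with the predicted variance
    opt.zero_grad()
    loss.backward()
    opt.step()
```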
- Stochasticity in Neural ODEs: An Empirical Study [68.8204255655161]
Regularization of neural networks (e.g. dropout) is a widespread technique in deep learning that allows for better generalization.
We show that data augmentation during training improves the performance of both the deterministic and stochastic versions of the same model.
However, the improvement obtained from data augmentation completely eliminates the empirical gains of the stochastic regularization, making the performance difference between neural ODEs and neural SDEs negligible.
arXiv Detail & Related papers (2020-02-22T22:12:56Z)