Learning Discontinuous Galerkin Solutions to Elliptic Problems via Small Linear Convolutional Neural Networks
- URL: http://arxiv.org/abs/2502.08783v1
- Date: Wed, 12 Feb 2025 20:53:34 GMT
- Title: Learning Discontinuous Galerkin Solutions to Elliptic Problems via Small Linear Convolutional Neural Networks
- Authors: Adrian Celaya, Yimo Wang, David Fuentes, Beatrice Riviere
- Abstract summary: We propose two approaches for learning discontinuous Galerkin solutions to PDEs using small linear convolutional neural networks.
Our first approach is supervised and depends on labeled data, while our second approach is unsupervised and does not rely on any training data.
In both cases, our methods use substantially fewer parameters than similar numerics-based neural networks while also demonstrating comparable accuracy to the true and DG solutions for elliptic problems.
- Score: 1.124958340749622
- Abstract: In recent years, there has been an increasing interest in using deep learning and neural networks to tackle scientific problems, particularly in solving partial differential equations (PDEs). However, many neural network-based methods, such as physics-informed neural networks, depend on automatic differentiation and the sampling of collocation points, which can result in a lack of interpretability and lower accuracy compared to traditional numerical methods. To address this issue, we propose two approaches for learning discontinuous Galerkin solutions to PDEs using small linear convolutional neural networks. Our first approach is supervised and depends on labeled data, while our second approach is unsupervised and does not rely on any training data. In both cases, our methods use substantially fewer parameters than similar numerics-based neural networks while also demonstrating comparable accuracy to the true and DG solutions for elliptic problems.
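To make the setup concrete, the following is a minimal sketch of the unsupervised variant: a small linear CNN maps the source term of a 2D Poisson problem to an approximate solution and is trained by minimizing a discrete residual rather than a data-fitting loss. The paper's DG discretization is replaced here by a 5-point finite-difference residual, and the zero Dirichlet boundary conditions, grid size, network width, and optimizer settings are illustrative assumptions; PyTorch is assumed as the framework.

```python
# Minimal sketch (assumptions: 2D Poisson problem -Laplacian(u) = f on the unit square
# with zero Dirichlet boundary conditions, a 5-point finite-difference residual as a
# stand-in for the paper's DG discretization, illustrative hyperparameters).
import torch
import torch.nn as nn

n = 64
h = 1.0 / (n + 1)                                 # interior grid spacing
x = torch.linspace(h, 1.0 - h, n)
X, Y = torch.meshgrid(x, x, indexing="ij")
f = 2 * torch.pi**2 * torch.sin(torch.pi * X) * torch.sin(torch.pi * Y)  # manufactured source term
f = f[None, None]                                 # shape (1, 1, n, n) for the conv layers

# "Small linear CNN": a few stacked convolutions with no nonlinearities,
# mapping the source term f to an approximate solution u on the same grid.
net = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1, bias=False),
    nn.Conv2d(8, 8, kernel_size=3, padding=1, bias=False),
    nn.Conv2d(8, 1, kernel_size=3, padding=1, bias=False),
)

# Fixed 5-point Laplacian stencil, used only to evaluate the residual loss.
lap = torch.tensor([[0.0,  1.0, 0.0],
                    [1.0, -4.0, 1.0],
                    [0.0,  1.0, 0.0]]).view(1, 1, 3, 3) / h**2

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(5000):
    u = net(f)
    u = nn.functional.pad(u, (1, 1, 1, 1))         # zero-pad to impose u = 0 on the boundary
    residual = -nn.functional.conv2d(u, lap) - f   # -Laplacian(u) - f at interior nodes
    loss = residual.pow(2).mean()                  # unsupervised: no reference solution needed
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Since no activation functions are used, the learned map from the source term to the solution remains linear, mirroring the structure of the underlying discrete solve.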
Related papers
- Are Two Hidden Layers Still Enough for the Physics-Informed Neural Networks? [0.0]
The article discusses the development of various methods and techniques for initializing and training neural networks with a single hidden layer.
The proposed methods have been extended to 2D problems using the separable physics-informed neural networks approach.
arXiv Detail & Related papers (2024-12-26T14:30:54Z)
- Newton Informed Neural Operator for Computing Multiple Solutions of Nonlinear Partial Differential Equations [3.8916312075738273]
We propose a novel approach called the Newton Informed Neural Operator to tackle nonlinearities.
Our method combines classical Newton methods for addressing well-posed problems with the efficient learning of multiple solutions in a single learning process.
arXiv Detail & Related papers (2024-05-23T01:52:54Z)
- What to Do When Your Discrete Optimization Is the Size of a Neural Network? [24.546550334179486]
Machine learning applications using neural networks involve solving discrete optimization problems.
Classical approaches used in discrete settings do not scale well to large neural networks.
We take continuation path (CP) methods to represent one family of approaches and Monte Carlo (MC) methods to represent the other.
arXiv Detail & Related papers (2024-02-15T21:57:43Z)
- Solutions to Elliptic and Parabolic Problems via Finite Difference Based Unsupervised Small Linear Convolutional Neural Networks [1.124958340749622]
We propose a fully unsupervised approach, requiring no training data, to estimate finite difference solutions for PDEs directly via small linear convolutional neural networks.
Our proposed approach uses substantially fewer parameters than similar finite difference-based approaches while also demonstrating comparable accuracy to the true solution for several selected elliptic and parabolic problems.
arXiv Detail & Related papers (2023-11-01T03:15:10Z)
- A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks [52.5899851000193]
We show that current methods based on this approach suffer from two key issues.
First, following the ODE produces an uncontrolled growth in the conditioning of the problem, ultimately leading to unacceptably large numerical errors.
We develop an ODE-based IVP solver that prevents the network from becoming ill-conditioned and runs in time linear in the number of parameters.
arXiv Detail & Related papers (2023-04-28T17:28:18Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been shown to be effective in solving forward and inverse differential equation problems.
However, PINNs can suffer training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs and improve the stability of the training process.
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
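As a rough illustration of the training idea, the sketch below implements a single implicit SGD step by approximating the implicit update w_{k+1} = w_k - lr * grad L(w_{k+1}) with a few fixed-point iterations. This is an assumed simplification for illustration only, not the paper's exact algorithm, and loss_fn stands in for a PINN residual loss evaluated at collocation points.

```python
# Minimal sketch of one implicit SGD step (assumption: the implicit update
# w_{k+1} = w_k - lr * grad L(w_{k+1}) is approximated by a short fixed-point
# iteration; illustrative only, not the algorithm from the paper).
import torch

def implicit_sgd_step(params, loss_fn, lr=1e-2, inner_iters=5):
    # Freeze the starting point w_k of the implicit update.
    w_k = [p.detach().clone() for p in params]
    for _ in range(inner_iters):
        loss = loss_fn()                          # e.g. a PINN residual loss
        grads = torch.autograd.grad(loss, params)
        with torch.no_grad():
            # Fixed-point iteration: v <- w_k - lr * grad L(v).
            for p, p0, g in zip(params, w_k, grads):
                p.copy_(p0 - lr * g)
    return loss_fn().item()

# Usage sketch: repeatedly call implicit_sgd_step(list(model.parameters()), pinn_loss).
```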
- Dynamic Neural Diversification: Path to Computationally Sustainable Neural Networks [68.8204255655161]
Small neural networks with a constrained number of trainable parameters can be suitable, resource-efficient candidates for many simple tasks.
We explore the diversity of the neurons within the hidden layer during the learning process.
We analyze how the diversity of the neurons affects predictions of the model.
arXiv Detail & Related papers (2021-09-20T15:12:16Z)
- Characterizing possible failure modes in physics-informed neural networks [55.83255669840384]
Recent work in scientific machine learning has developed so-called physics-informed neural network (PINN) models.
We demonstrate that, while existing PINN methodologies can learn good models for relatively trivial problems, they can easily fail to learn relevant physical phenomena even for simple PDEs.
We show that these possible failure modes are not due to the lack of expressivity in the NN architecture, but that the PINN's setup makes the loss landscape very hard to optimize.
arXiv Detail & Related papers (2021-09-02T16:06:45Z)
- Conditional physics informed neural networks [85.48030573849712]
We introduce conditional PINNs (physics informed neural networks) for estimating the solution of classes of eigenvalue problems.
We show that a single deep neural network can learn the solution of partial differential equations for an entire class of problems.
arXiv Detail & Related papers (2021-04-06T18:29:14Z)
- Learning Neural Network Subspaces [74.44457651546728]
Recent observations have advanced our understanding of the neural network optimization landscape.
With a similar computational cost as training one model, we learn lines, curves, and simplexes of high-accuracy neural networks.
arXiv Detail & Related papers (2021-02-20T23:26:58Z)
- Unsupervised Learning of Solutions to Differential Equations with Generative Adversarial Networks [1.1470070927586016]
We develop a novel method for solving differential equations with unsupervised neural networks.
We show that our method, which we call Differential Equation GAN (DEQGAN), can obtain mean squared errors that are multiple orders of magnitude lower.
arXiv Detail & Related papers (2020-07-21T23:36:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.