Neural Network Solutions to Differential Equations in Non-Convex
Domains: Solving the Electric Field in the Slit-Well Microfluidic Device
- URL: http://arxiv.org/abs/2004.12235v1
- Date: Sat, 25 Apr 2020 21:20:03 GMT
- Title: Neural Network Solutions to Differential Equations in Non-Convex
Domains: Solving the Electric Field in the Slit-Well Microfluidic Device
- Authors: Martin Magill and Andrew M. Nagel and Hendrick W. de Haan
- Abstract summary: The neural network method is used to approximate the electric potential and corresponding electric field in a slit-well microfluidic device. In all metrics, deep neural networks significantly outperform shallow neural networks.
- Score: 1.7188280334580193
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The neural network method of solving differential equations is used to
approximate the electric potential and corresponding electric field in the
slit-well microfluidic device. The device's geometry is non-convex, making this
a challenging problem to solve using the neural network method. To validate the
method, the neural network solutions are compared to a reference solution
obtained using the finite element method. Additional metrics are presented that
measure how well the neural networks recover important physical invariants that
are not explicitly enforced during training: spatial symmetries and
conservation of electric flux. Finally, as an application-specific test of
validity, neural network electric fields are incorporated into particle
simulations. Conveniently, the same loss functional used to train the neural
networks also seems to provide a reliable estimator of the networks' true
errors, as measured by any of the metrics considered here. In all metrics, deep
neural networks significantly outperform shallow neural networks, even when
normalized by computational cost. Altogether, the results suggest that the
neural network method can reliably produce solutions of acceptable accuracy for
use in subsequent physical computations, such as particle simulations.
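To make the method concrete, below is a minimal, hypothetical PyTorch sketch of the PINN-style approach the abstract describes: a small network is trained to minimize a loss functional combining the residual of Laplace's equation with boundary terms, and a flux-conservation check of the kind the paper uses as a validation metric is evaluated afterwards. The unit-square domain, architecture, boundary values, sampling scheme, and the helper `flux_through` are illustrative assumptions, not the paper's actual slit-well setup.

```python
# Hypothetical sketch: neural-network solution of Laplace's equation
# (electric potential u), with an autodiff-based loss and a post-hoc
# flux-conservation check. Geometry and hyperparameters are placeholders.
import torch

torch.manual_seed(0)

# Small fully connected network: (x, y) -> approximate potential u(x, y).
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def laplacian(xy):
    # u_xx + u_yy at the collocation points, via automatic differentiation.
    xy = xy.requires_grad_(True)
    u = net(xy)
    g = torch.autograd.grad(u.sum(), xy, create_graph=True)[0]
    u_xx = torch.autograd.grad(g[:, 0].sum(), xy, create_graph=True)[0][:, 0]
    u_yy = torch.autograd.grad(g[:, 1].sum(), xy, create_graph=True)[0][:, 1]
    return u_xx + u_yy

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(5000):
    # Interior collocation points on a placeholder unit square; the real
    # slit-well geometry would instead require sampling a non-convex domain.
    interior = torch.rand(256, 2)
    # Dirichlet data standing in for the applied potential difference:
    # u = 0 on the left wall (x = 0), u = 1 on the right wall (x = 1).
    left = torch.stack([torch.zeros(64), torch.rand(64)], dim=1)
    right = torch.stack([torch.ones(64), torch.rand(64)], dim=1)

    pde_loss = laplacian(interior).pow(2).mean()   # residual of Laplace's eq.
    bc_loss = net(left).pow(2).mean() + (net(right) - 1.0).pow(2).mean()
    loss = pde_loss + bc_loss   # the trained loss functional

    opt.zero_grad()
    loss.backward()
    opt.step()

def flux_through(x0, n=1000):
    # Net flux of E = -grad(u) through the vertical cross-section x = x0,
    # approximated as the mean of E_x over the section (length 1 here).
    pts = torch.stack([torch.full((n,), x0), torch.linspace(0.0, 1.0, n)], dim=1)
    pts.requires_grad_(True)
    e_x = -torch.autograd.grad(net(pts).sum(), pts)[0][:, 0]
    return e_x.mean().item()

# Flux conservation is not enforced during training, so agreement between
# cross-sections is an independent sanity check on the learned field.
print(flux_through(0.25), flux_through(0.75))
```

In the spirit of the abstract, the training loss itself can be logged as an inexpensive proxy for the true error, and disagreement between `flux_through` values at different cross-sections flags an unconverged or inaccurate field before it is used in particle simulations.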
Related papers
- Chebyshev Spectral Neural Networks for Solving Partial Differential Equations [0.0]
The study uses a feedforward neural network model and error backpropagation principles, utilizing automatic differentiation (AD) to compute the loss function.
The numerical efficiency and accuracy of the CSNN model are investigated through testing on elliptic partial differential equations, and it is compared with the well-known Physics-Informed Neural Network (PINN) method.
arXiv Detail & Related papers (2024-06-06T05:31:45Z)
- Verified Neural Compressed Sensing [58.98637799432153]
We develop the first (to the best of our knowledge) provably correct neural networks for a precise computational task.
We show that for modest problem dimensions (up to 50), we can train neural networks that provably recover a sparse vector from linear and binarized linear measurements.
We show that the complexity of the network can be adapted to the problem difficulty and solve problems where traditional compressed sensing methods are not known to provably work.
arXiv Detail & Related papers (2024-05-07T12:20:12Z)
- An Analytic Solution to Covariance Propagation in Neural Networks [10.013553984400488]
This paper presents a sample-free moment propagation technique to accurately characterize the input-output distributions of neural networks.
A key enabler of our technique is an analytic solution for the covariance of random variables passed through nonlinear activation functions.
The wide applicability and merits of the proposed technique are shown in experiments analyzing the input-output distributions of trained neural networks and training Bayesian neural networks.
arXiv Detail & Related papers (2024-03-24T14:08:24Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Guaranteed Quantization Error Computation for Neural Network Model Compression [2.610470075814367]
Neural network model compression techniques can address the computation issue of deep neural networks on embedded devices in industrial systems.
A merged neural network is built from a feedforward neural network and its quantized version to produce the exact output difference between the two networks.
arXiv Detail & Related papers (2023-04-26T20:21:54Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Physics informed neural networks for continuum micromechanics [68.8204255655161]
Recently, physics informed neural networks have successfully been applied to a broad variety of problems in applied mathematics and engineering.
Because they approximate the solution globally, physics informed neural networks have difficulty resolving localized effects and strongly nonlinear solutions through optimization.
It is shown that the domain decomposition approach is able to accurately resolve nonlinear stress, displacement, and energy fields in heterogeneous microstructures obtained from real-world $\mu$CT scans.
arXiv Detail & Related papers (2021-10-14T14:05:19Z)
- Conditional physics informed neural networks [85.48030573849712]
We introduce conditional PINNs (physics informed neural networks) for estimating the solution of classes of eigenvalue problems.
We show that a single deep neural network can learn the solution of partial differential equations for an entire class of problems.
arXiv Detail & Related papers (2021-04-06T18:29:14Z)
- A Gradient Estimator for Time-Varying Electrical Networks with Non-Linear Dissipation [0.0]
We use electrical circuit theory to construct a Lagrangian capable of describing deep, directed neural networks.
We derive an estimator for the gradient of the physical parameters of the network, such as synapse conductances.
We conclude by suggesting methods for extending these results to networks of biologically plausible neurons.
arXiv Detail & Related papers (2021-03-09T02:07:39Z)
- ResiliNet: Failure-Resilient Inference in Distributed Neural Networks [56.255913459850674]
We introduce ResiliNet, a scheme for making inference in distributed neural networks resilient to physical node failures.
Failout simulates physical node failure conditions during training using dropout, and is specifically designed to improve the resiliency of distributed neural networks.
arXiv Detail & Related papers (2020-02-18T05:58:24Z)
- Mean-Field and Kinetic Descriptions of Neural Differential Equations [0.0]
In this work we focus on a particular class of neural networks, namely residual neural networks.
We analyze steady states and sensitivity with respect to the parameters of the network, namely the weights and the bias.
A modification of the microscopic dynamics, inspired by residual neural networks, leads to a Fokker-Planck formulation of the network.
arXiv Detail & Related papers (2020-01-07T13:41:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.