Learning Specialized Activation Functions for Physics-informed Neural Networks
- URL: http://arxiv.org/abs/2308.04073v1
- Date: Tue, 8 Aug 2023 06:11:52 GMT
- Title: Learning Specialized Activation Functions for Physics-informed Neural Networks
- Authors: Honghui Wang, Lu Lu, Shiji Song, Gao Huang
- Abstract summary: Physics-informed neural networks (PINNs) are known to suffer from optimization difficulty.
We show that PINNs exhibit high sensitivity to activation functions when solving PDEs with distinct properties.
We introduce adaptive activation functions to search for the optimal function when solving different problems.
- Score: 36.823376881651
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Physics-informed neural networks (PINNs) are known to suffer from
optimization difficulty. In this work, we reveal the connection between the
optimization difficulty of PINNs and activation functions. Specifically, we
show that PINNs exhibit high sensitivity to activation functions when solving
PDEs with distinct properties. Existing works usually choose activation
functions by inefficient trial-and-error. To avoid the inefficient manual
selection and to alleviate the optimization difficulty of PINNs, we introduce
adaptive activation functions to search for the optimal function when solving
different problems. We compare different adaptive activation functions and
discuss their limitations in the context of PINNs. Furthermore, we propose to
tailor the idea of learning combinations of candidate activation functions to
the optimization of PINNs, which places higher requirements on the smoothness
and diversity of the learned functions. This is achieved by removing activation
functions which cannot provide higher-order derivatives from the candidate set
and incorporating elementary functions with different properties according to
our prior knowledge about the PDE at hand. We further enhance the search space
with adaptive slopes. The proposed adaptive activation function can be used to
solve different PDE systems in an interpretable way. Its effectiveness is
demonstrated on a series of benchmarks. Code is available at
https://github.com/LeapLabTHU/AdaAFforPINNs.
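To make the approach concrete, the following is a minimal sketch of the idea the abstract describes: a learnable, softmax-weighted combination of smooth candidate activations, each with an adaptive slope, dropped into a PINN backbone. This is an illustrative reading of the abstract rather than the authors' implementation (see the repository above for that); the candidate set and parameterization here are assumptions.

```python
import torch
import torch.nn as nn

class AdaptiveActivation(nn.Module):
    """Learnable convex combination of smooth candidate activations.

    Sketch only: candidates exclude functions like ReLU that lack useful
    higher-order derivatives, and each candidate gets a trainable slope,
    mirroring the two design points in the abstract.
    """
    def __init__(self):
        super().__init__()
        # Smooth elementary candidates (an assumed set): oscillatory and
        # saturating members cover PDEs with distinct properties.
        self.candidates = [
            torch.sin,
            torch.tanh,
            torch.sigmoid,
            lambda x: torch.exp(-x ** 2),
        ]
        n = len(self.candidates)
        self.logits = nn.Parameter(torch.zeros(n))   # mixing weights
        self.slopes = nn.Parameter(torch.ones(n))    # adaptive slopes

    def forward(self, x):
        weights = torch.softmax(self.logits, dim=0)
        return sum(w * f(s * x)
                   for w, s, f in zip(weights, self.slopes, self.candidates))

class PINN(nn.Module):
    """Fully-connected PINN body using the adaptive activation."""
    def __init__(self, in_dim=2, hidden=64, out_dim=1, depth=4):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(d, hidden), AdaptiveActivation()]
            d = hidden
        layers.append(nn.Linear(d, out_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, xt):
        return self.net(xt)
```

After training, inspecting `softmax(logits)` shows which candidate dominates for a given PDE, which is one way such a learned activation stays interpretable.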
Related papers
- Hi-fi functional priors by learning activations [1.0468715529145969]
We explore how trainable activations can accommodate higher-complexity priors and match intricate target function distributions.
Our empirical findings indicate that even BNNs with a single wide hidden layer, when equipped with flexible trainable activations, can effectively achieve desired function-space priors.
arXiv Detail & Related papers (2025-08-12T12:09:22Z)
- Learnable Activation Functions in Physics-Informed Neural Networks for Solving Partial Differential Equations [0.0]
We investigate the use of learnable activation functions in Physics-Informed Neural Networks (PINNs) for solving Partial Differential Equations (PDEs).
We compare the efficacy of traditional Multilayer Perceptrons (MLPs) with fixed and learnable activations against Kolmogorov-Arnold Neural Networks (KANs).
The findings offer insights into the design of neural network architectures that balance training efficiency, convergence speed, and test accuracy for PDE solvers.
arXiv Detail & Related papers (2024-11-22T18:25:13Z)
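For readers unfamiliar with the KAN side of this comparison, here is a minimal sketch of a layer with learnable univariate edge activations. It substitutes a Gaussian radial basis for the B-splines usually used in KANs, so it is a simplified stand-in rather than the architecture evaluated in the paper above.

```python
import torch
import torch.nn as nn

class KANLikeLayer(nn.Module):
    """One layer of learnable edge activations, sketched with an RBF basis.

    Each scalar edge function is phi(x) = sum_k c_k * exp(-((x - g_k)/h)^2),
    a smooth stand-in for the B-spline parameterization of KANs.
    """
    def __init__(self, in_dim, out_dim, num_basis=8, x_range=(-2.0, 2.0)):
        super().__init__()
        self.register_buffer("grid", torch.linspace(*x_range, num_basis))
        self.h = (x_range[1] - x_range[0]) / (num_basis - 1)
        # One coefficient vector per (output, input) edge
        self.coef = nn.Parameter(torch.randn(out_dim, in_dim, num_basis) * 0.1)

    def forward(self, x):  # x: (batch, in_dim)
        basis = torch.exp(-((x.unsqueeze(-1) - self.grid) / self.h) ** 2)
        # Sum the edge functions over inputs: (batch, out_dim)
        return torch.einsum("bik,oik->bo", basis, self.coef)
```

Stacking two such layers, e.g. `nn.Sequential(KANLikeLayer(2, 16), KANLikeLayer(16, 1))`, gives a small KAN-like PDE surrogate.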
- High-Fidelity Transfer of Functional Priors for Wide Bayesian Neural Networks by Learning Activations [1.0468715529145969]
We show how trainable activations can accommodate complex function-space priors on BNNs.
We discuss critical learning challenges, including identifiability, loss construction, and symmetries.
Our empirical findings demonstrate that even BNNs with a single wide hidden layer can effectively achieve high-fidelity function-space priors.
arXiv Detail & Related papers (2024-10-21T08:42:10Z)
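The two entries above do not spell out how the trainable activations are parameterized; one common choice in the literature is a rational (Padé) form with learnable coefficients, sketched below purely for illustration and not necessarily what these papers use.

```python
import torch
import torch.nn as nn

class RationalActivation(nn.Module):
    """Trainable rational activation p(x) / q(x) (a Pade form).

    Hypothetical parameterization for illustration; the denominator is
    kept >= 1 via an absolute value so the function stays finite.
    """
    def __init__(self, p_order=3, q_order=2):
        super().__init__()
        self.p = nn.Parameter(torch.randn(p_order + 1) * 0.1)
        self.q = nn.Parameter(torch.randn(q_order) * 0.1)

    def forward(self, x):
        num = sum(c * x ** i for i, c in enumerate(self.p))
        den = 1.0 + torch.abs(sum(c * x ** (i + 1)
                                  for i, c in enumerate(self.q)))
        return num / den
```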
- Physics-Informed Neural Networks: Minimizing Residual Loss with Wide Networks and Effective Activations [5.731640425517324]
We show that under certain conditions, the residual loss of PINNs can be globally minimized by a wide neural network.
An activation function with well-behaved high-order derivatives plays a crucial role in minimizing the residual loss.
The established theory paves the way for designing and choosing effective activation functions for PINNs.
arXiv Detail & Related papers (2024-05-02T19:08:59Z)
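The role of well-behaved high-order derivatives is easy to see in code: a PINN residual for, say, u''(x) = f(x) differentiates the network twice with autograd, and an activation whose second derivative vanishes almost everywhere (ReLU) leaves the residual nothing to optimize. A minimal sketch, with an illustrative PDE and network widths:

```python
import torch
import torch.nn as nn

def residual_loss(net, x, f):
    """PINN residual for u''(x) = f(x), computed with autograd."""
    x = x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    return ((d2u - f(x)) ** 2).mean()

x = torch.linspace(-1, 1, 128).unsqueeze(-1)
f = lambda x: -torch.sin(3 * x)  # illustrative source term

tanh_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
relu_net = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

# ReLU's second derivative is zero almost everywhere, so d2u collapses to 0
# and the residual cannot be driven down; tanh has no such obstruction.
print(residual_loss(tanh_net, x, f).item())
print(residual_loss(relu_net, x, f).item())
```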
- Fractional Concepts in Neural Networks: Enhancing Activation Functions [0.6445605125467574]
This study integrates fractional calculus into neural networks by introducing fractional-order derivatives (FDOs) as tunable parameters in activation functions.
We evaluate these fractional activation functions on various datasets and network architectures, comparing their performance with traditional and new activation functions.
arXiv Detail & Related papers (2023-10-18T10:49:29Z)
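The summary above leaves the mechanism abstract. One way to realize a tunable fractional order is the closed-form Riemann-Liouville derivative of the ReLU ramp, D^alpha x = x^(1-alpha) / Gamma(2-alpha) for x > 0, with alpha trained alongside the weights. The sketch below is an assumed parameterization, not necessarily the paper's:

```python
import torch
import torch.nn as nn

class FractionalReLU(nn.Module):
    """ReLU with a trainable fractional derivative order alpha in (0, 1).

    Assumed form: D^alpha x = x^(1 - alpha) / Gamma(2 - alpha) for x > 0
    (Riemann-Liouville); alpha -> 0 recovers the ordinary ReLU.
    """
    def __init__(self, alpha_init=0.5):
        super().__init__()
        # Unconstrained parameter mapped into (0, 1) by a sigmoid
        self.raw_alpha = nn.Parameter(torch.logit(torch.tensor(alpha_init)))

    def forward(self, x):
        alpha = torch.sigmoid(self.raw_alpha)
        gamma = torch.exp(torch.lgamma(2.0 - alpha))
        # clamp_min avoids infinite gradients of x**(1-alpha) at x = 0
        return torch.relu(x).clamp_min(1e-12) ** (1.0 - alpha) / gamma
```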
- Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
arXiv Detail & Related papers (2023-04-17T14:23:43Z)
- Auto-PINN: Understanding and Optimizing Physics-Informed Neural Architecture [77.59766598165551]
Physics-informed neural networks (PINNs) are revolutionizing science and engineering practice by bringing the power of deep learning to bear on scientific computation.
Here, we propose Auto-PINN, which applies Neural Architecture Search (NAS) techniques to PINN design.
A comprehensive set of pre-experiments using standard PDE benchmarks allows us to probe the structure-performance relationship in PINNs.
arXiv Detail & Related papers (2022-05-27T03:24:31Z)
- Data-Driven Learning of Feedforward Neural Networks with Different Activation Functions [0.0]
This work contributes to the development of a new data-driven method (D-DM) for learning feedforward neural networks (FNNs).
arXiv Detail & Related papers (2021-07-04T18:20:27Z)
- dNNsolve: an efficient NN-based PDE solver [62.997667081978825]
We introduce dNNsolve, which makes use of dual neural networks to solve ODEs/PDEs.
We show that dNNsolve is capable of solving a broad range of ODEs/PDEs in 1, 2 and 3 spacetime dimensions.
arXiv Detail & Related papers (2021-03-15T19:14:41Z)
- Discovering Parametric Activation Functions [17.369163074697475]
This paper proposes a technique for customizing activation functions automatically, resulting in reliable improvements in performance.
Experiments with four different neural network architectures on the CIFAR-10 and CIFAR-100 image classification datasets show that this approach is effective.
arXiv Detail & Related papers (2020-06-05T00:25:33Z)
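As a simple stand-in for the parametric functions such searches produce, here is a classic parametric activation (Swish with a learnable per-channel beta); the functions discovered in the paper above are more elaborate, but the optimization principle is the same.

```python
import torch
import torch.nn as nn

class ParametricSwish(nn.Module):
    """Swish(x) = x * sigmoid(beta * x) with a learnable beta per channel.

    A classic parametric activation, used here only to illustrate training
    activation parameters jointly with the network weights.
    """
    def __init__(self, num_channels):
        super().__init__()
        self.beta = nn.Parameter(torch.ones(num_channels))

    def forward(self, x):  # x: (batch, channels, ...)
        beta = self.beta.view(1, -1, *([1] * (x.dim() - 2)))
        return x * torch.sigmoid(beta * x)
```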
- Towards Efficient Processing and Learning with Spikes: New Approaches for Multi-Spike Learning [59.249322621035056]
We propose two new multi-spike learning rules which demonstrate better performance than other baselines on various tasks.
In the feature detection task, we re-examine the ability of unsupervised STDP and present its limitations.
Our proposed learning rules can reliably solve the task over a wide range of conditions without requiring specific constraints.
arXiv Detail & Related papers (2020-05-02T06:41:20Z)
- Optimizing Wireless Systems Using Unsupervised and Reinforced-Unsupervised Deep Learning [96.01176486957226]
Resource allocation and transceivers in wireless networks are usually designed by solving optimization problems.
In this article, we introduce unsupervised and reinforced-unsupervised learning frameworks for solving both variable and functional optimization problems.
arXiv Detail & Related papers (2020-01-03T11:01:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.