Spherical Analysis of Learning Nonlinear Functionals
- URL: http://arxiv.org/abs/2410.01047v1
- Date: Tue, 1 Oct 2024 20:10:00 GMT
- Title: Spherical Analysis of Learning Nonlinear Functionals
- Authors: Zhenyu Yang, Shuo Huang, Han Feng, Ding-Xuan Zhou
- Abstract summary: In this paper, we consider functionals defined on sets of functions on spheres.
The approximation ability of deep ReLU neural networks is investigated using an encoder-decoder framework.
- Score: 10.785977740158193
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, there has been growing interest in the field of functional neural networks. They have been proposed and studied with the aim of approximating continuous functionals defined on sets of functions on Euclidean domains. In this paper, we consider functionals defined on sets of functions on spheres. The approximation ability of deep ReLU neural networks is investigated by novel spherical analysis using an encoder-decoder framework. An encoder is introduced first to accommodate the infinite-dimensional nature of the domain of functionals. It utilizes spherical harmonics to extract the latent finite-dimensional information of functions, which in turn facilitates the subsequent approximation analysis using fully connected neural networks. Moreover, real-world objects are frequently sampled discretely and are often corrupted by noise. Therefore, encoders with discrete input and encoders with discrete and random noise input are constructed, respectively. Approximation rates for the different encoder structures are provided.
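To make the encoder-decoder idea concrete, here is a minimal sketch, not the authors' construction: a hypothetical `spherical_harmonic_encoder` approximates the low-degree spherical-harmonic coefficients of a function sampled on a grid over the sphere by discrete quadrature, and a small fully connected ReLU network maps those coefficients to a scalar functional value. The grid, the truncation degree `L_max`, and the layer widths are illustrative assumptions.

```python
# Minimal illustrative sketch (not the paper's construction): encode a function on the
# sphere by its low-degree spherical-harmonic coefficients, then approximate the
# functional with a small fully connected ReLU network acting on those coefficients.
import numpy as np
from scipy.special import sph_harm  # Y_l^m(theta, phi); theta = azimuth, phi = polar

def sphere_grid(n_theta=40, n_phi=20):
    """Tensor-product quadrature grid on the sphere with weights sin(phi) dtheta dphi."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    phi = (np.arange(n_phi) + 0.5) * np.pi / n_phi  # midpoint rule in the polar angle
    T, P = np.meshgrid(theta, phi, indexing="ij")
    w = np.sin(P) * (2.0 * np.pi / n_theta) * (np.pi / n_phi)
    return T, P, w

def spherical_harmonic_encoder(f_values, T, P, w, L_max=4):
    """Approximate the coefficients <f, Y_l^m> for l <= L_max by discrete quadrature."""
    coeffs = []
    for l in range(L_max + 1):
        for m in range(-l, l + 1):
            Y = sph_harm(m, l, T, P)
            coeffs.append(np.sum(f_values * np.conj(Y) * w))
    return np.array(coeffs).real  # keep the real parts for a real-valued sketch

def relu_net(x, weights, biases):
    """Plain fully connected ReLU network evaluated on a coefficient vector."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(W @ x + b, 0.0)
    return weights[-1] @ x + biases[-1]

# Example: latent features of a sample input function, fed to an (untrained) decoder.
T, P, w = sphere_grid()
f = 1.0 + np.cos(P) ** 2                      # a sample input function on the sphere
z = spherical_harmonic_encoder(f, T, P, w)    # finite-dimensional latent features
rng = np.random.default_rng(0)
dims = [z.size, 32, 32, 1]                    # illustrative layer widths
weights = [rng.standard_normal((dims[i + 1], dims[i])) * 0.1 for i in range(3)]
biases = [np.zeros(dims[i + 1]) for i in range(3)]
print(relu_net(z, weights, biases))           # structure only; weights are untrained
```

Adding noise to `f` before encoding mirrors the paper's setting of discrete, noise-corrupted observations; only the quadrature-based coefficient estimates change, while the decoder network is unchanged.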
Related papers
- DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [63.5925701087252]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis.
To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers.
Empirically, DimOL models achieve up to 48% performance gain within the PDE datasets.
arXiv Detail & Related papers (2024-10-08T10:48:50Z) - Approximation of RKHS Functionals by Neural Networks [30.42446856477086]
We study the approximation of functionals on reproducing kernel Hilbert spaces (RKHSs) using neural networks.
We derive explicit error bounds for those induced by inverse multiquadric, Gaussian, and Sobolev kernels.
We apply our findings to functional regression, proving that neural networks can accurately approximate the regression maps.
arXiv Detail & Related papers (2024-03-18T18:58:23Z) - Feature Mapping in Physics-Informed Neural Networks (PINNs) [1.9819034119774483]
We study the training dynamics of PINNs with a feature mapping layer via the limiting Conjugate Kernel and Neural Tangent Kernel.
We propose conditionally positive definite radial basis functions as a better alternative.
arXiv Detail & Related papers (2024-02-10T13:51:09Z) - Nonlinear functional regression by functional deep neural network with kernel embedding [20.306390874610635]
We propose a functional deep neural network with an efficient and fully data-dependent dimension reduction method.
The architecture of our functional net consists of a kernel embedding step, a projection step, and a deep ReLU neural network for the prediction.
The smooth kernel embedding makes our functional net discretization invariant, efficient, and robust to noisy observations (a minimal illustrative sketch of this kind of pipeline appears after the related-papers list).
arXiv Detail & Related papers (2024-01-05T16:43:39Z) - Physics-informed neural wavefields with Gabor basis functions [4.07926531936425]
We propose an approach to enhance the efficiency and accuracy of neural network wavefield solutions.
Specifically, for the Helmholtz equation, we augment the fully connected neural network model with a Gabor layer as the final hidden layer.
The coefficients of the Gabor functions are learned from the previous hidden layers, which include nonlinear activation functions.
arXiv Detail & Related papers (2023-10-16T17:30:33Z) - Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
arXiv Detail & Related papers (2023-04-17T14:23:43Z) - Approximation of Nonlinear Functionals Using Deep ReLU Networks [7.876115370275732]
We investigate the approximation power of functional deep neural networks associated with the rectified linear unit (ReLU) activation function.
In addition, we establish rates of approximation of the proposed functional deep ReLU networks under mild regularity conditions.
arXiv Detail & Related papers (2023-04-10T08:10:11Z) - Provable Data Subset Selection For Efficient Neural Network Training [73.34254513162898]
We introduce the first algorithm to construct coresets for RBFNNs, i.e., small weighted subsets that approximate the loss of the input data on any radial basis function network.
We then perform empirical evaluations on function approximation and dataset subset selection on popular network architectures and data sets.
arXiv Detail & Related papers (2023-03-09T10:08:34Z) - NeuralEF: Deconstructing Kernels by Deep Neural Networks [47.54733625351363]
Traditional nonparametric solutions based on the Nyström formula suffer from scalability issues.
Recent work has resorted to a parametric approach, i.e., training neural networks to approximate the eigenfunctions.
We show that these problems can be fixed by using a new series of objective functions that generalizes to a space of supervised and unsupervised learning problems.
arXiv Detail & Related papers (2022-04-30T05:31:07Z) - UNIPoint: Universally Approximating Point Processes Intensities [125.08205865536577]
We provide a proof that a class of learnable functions can universally approximate any valid intensity function.
We implement UNIPoint, a novel neural point process model, using recurrent neural networks to parameterise sums of basis functions at each event.
arXiv Detail & Related papers (2020-07-28T09:31:56Z) - Space of Functions Computed by Deep-Layered Machines [74.13735716675987]
We study the space of functions computed by random-layered machines, including deep neural networks and Boolean circuits.
Investigating the distribution of Boolean functions computed by recurrent and layer-dependent architectures, we find that it is the same in both models.
arXiv Detail & Related papers (2020-04-19T18:31:03Z)
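As flagged in the kernel-embedding entry above, the following minimal sketch strings together the three stages named in that summary: a kernel embedding of the discretely observed function, a projection onto a few leading directions, and a fully connected ReLU network. The Gaussian kernel, the quadrature grid, and the layer widths are assumed choices for illustration, not the cited paper's implementation.

```python
# Minimal illustrative sketch of a kernel-embedding functional net (assumed kernel,
# grid, and widths; not the cited paper's implementation): embed discretized inputs
# with a Gaussian kernel, project onto leading principal directions, then apply a
# fully connected ReLU network.
import numpy as np

def gaussian_kernel_embedding(f_samples, grid, centers, gamma=10.0):
    """Smooth each observed function against Gaussian kernels centered on `centers`."""
    # f_samples: (n_functions, n_grid) function values observed on the 1-D `grid`
    K = np.exp(-gamma * (grid[None, :] - centers[:, None]) ** 2)   # (n_centers, n_grid)
    return f_samples @ K.T / grid.size                             # (n_functions, n_centers)

def pca_projection(embedded, n_components=5):
    """Data-dependent projection onto the leading principal directions."""
    centered = embedded - embedded.mean(axis=0, keepdims=True)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ Vt[:n_components].T                          # (n_functions, n_components)

def relu_net(X, weights, biases):
    """Fully connected ReLU network applied row-wise to the projected features."""
    for W, b in zip(weights[:-1], biases[:-1]):
        X = np.maximum(X @ W + b, 0.0)
    return X @ weights[-1] + biases[-1]

# Example with synthetic noisy observations of random smooth functions on [0, 1].
rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 100)
funcs = np.sin(2 * np.pi * rng.uniform(1, 3, size=(50, 1)) * grid)  # 50 input functions
funcs += 0.05 * rng.standard_normal(funcs.shape)                    # discrete, noisy input
centers = np.linspace(0.0, 1.0, 20)
Z = pca_projection(gaussian_kernel_embedding(funcs, grid, centers)) # (50, 5) features
dims = [Z.shape[1], 32, 1]                                          # illustrative widths
weights = [rng.standard_normal((dims[i], dims[i + 1])) * 0.1 for i in range(2)]
biases = [np.zeros(dims[i + 1]) for i in range(2)]
print(relu_net(Z, weights, biases).shape)                           # (50, 1), untrained
```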
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.