Neural Networks Asymptotic Behaviours for the Resolution of Inverse
Problems
- URL: http://arxiv.org/abs/2402.09338v2
- Date: Thu, 15 Feb 2024 12:07:13 GMT
- Title: Neural Networks Asymptotic Behaviours for the Resolution of Inverse
Problems
- Authors: Luigi Del Debbio, Manuel Naviglio, Francesco Tarantelli
- Abstract summary: This paper presents a study of the effectiveness of Neural Network (NN) techniques for deconvolution inverse problems.
We consider NNs' asymptotic limits, corresponding to Gaussian Processes (GPs), in which non-linearities in the parameters of the NN can be neglected.
We address the deconvolution inverse problem in the case of a quantum harmonic oscillator simulated through Monte Carlo techniques on a lattice.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a study of the effectiveness of Neural Network (NN)
techniques for deconvolution inverse problems relevant for applications in
Quantum Field Theory, but also in more general contexts. We consider NN's
asymptotic limits, corresponding to Gaussian Processes (GPs), where
non-linearities in the parameters of the NN can be neglected. Using these
resulting GPs, we address the deconvolution inverse problem in the case of a
quantum harmonic oscillator simulated through Monte Carlo techniques on a
lattice. In this simple toy model, the results of the inversion can be compared
with the known analytical solution. Our findings indicate that solving the
inverse problem with a NN yields poorer results than those obtained using the
GPs derived from the NN's asymptotic limits. Furthermore, we observe that the
trained NN's accuracy approaches that of the GPs as the layer width increases.
Notably, one of these GPs defies interpretation as a probabilistic model,
offering a novel perspective compared to established methods in the literature.
Our results suggest the need for detailed studies of the training dynamics in
more realistic set-ups.
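To make the GP-based inversion concrete, the following is a minimal sketch of Gaussian-process regression applied to a discretized deconvolution problem. The grids, the exponential forward kernel, the RBF prior covariance, and the noise level are all illustrative assumptions and do not reproduce the paper's lattice setup.

```python
import numpy as np

# Hypothetical grids (illustrative; not the paper's lattice setup)
tau = np.linspace(0.05, 1.0, 20)      # "measurement" points
omega = np.linspace(0.0, 10.0, 100)   # reconstruction grid
domega = omega[1] - omega[0]

# Discretized forward (convolution) operator: C(tau) = sum_w K(tau, w) rho(w) dw
K = np.exp(-np.outer(tau, omega)) * domega

# GP prior covariance on rho: an assumed RBF kernel (illustrative choice)
ell, sig = 1.0, 1.0
S = sig**2 * np.exp(-0.5 * ((omega[:, None] - omega[None, :]) / ell) ** 2)

def gp_deconvolve(C_data, noise_var):
    """Posterior mean of rho given noisy data C_data ~ K @ rho + noise."""
    cov = K @ S @ K.T + noise_var * np.eye(len(tau))  # covariance of the data
    return S @ K.T @ np.linalg.solve(cov, C_data)     # GP posterior mean on omega grid

# Toy check against a known "true" spectral function
rng = np.random.default_rng(0)
rho_true = np.exp(-0.5 * (omega - 3.0) ** 2)
C_data = K @ rho_true + 1e-5 * rng.normal(size=len(tau))
rho_post = gp_deconvolve(C_data, noise_var=1e-10)
```

Because the forward map is linear and the prior is Gaussian, the posterior mean is available in closed form; this is what makes the GP limit attractive compared with training a finite-width NN on the same data.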
Related papers
- ProPINN: Demystifying Propagation Failures in Physics-Informed Neural Networks [71.02216400133858]
Physics-informed neural networks (PINNs) have earned high expectations in solving partial differential equations (PDEs).
Previous research observed the propagation failure phenomenon of PINNs.
This paper provides the first formal and in-depth study of propagation failure and its root cause.
arXiv Detail & Related papers (2025-02-02T13:56:38Z)
- On the Convergence Analysis of Over-Parameterized Variational Autoencoders: A Neural Tangent Kernel Perspective [7.580900499231056]
Variational Auto-Encoders (VAEs) have emerged as powerful probabilistic models for generative tasks.
This paper provides a mathematical proof of the convergence of over-parameterized VAEs under mild assumptions.
We also establish a novel connection between the optimization problem faced by over-parameterized SNNs and the Kernel Ridge Regression (KRR) problem.
arXiv Detail & Related papers (2024-09-09T06:10:31Z)
- General-Kindred Physics-Informed Neural Network to the Solutions of Singularly Perturbed Differential Equations [11.121415128908566]
We propose the General-Kindred Physics-Informed Neural Network (GKPINN) for solving singularly perturbed differential equations (SPDEs).
This approach utilizes prior knowledge of the boundary layer from the equation and establishes a novel network to assist the PINN in approximating the boundary layer.
The research findings underscore the exceptional performance of our novel approach, GKPINN, which delivers a remarkable enhancement in reducing the $L_2$ error by two to four orders of magnitude compared to the established PINN methodology.
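For readers unfamiliar with the baseline method, here is a minimal generic PINN residual-loss sketch for an illustrative singularly perturbed ODE, eps*u'' + u' = 0 with u(0)=0 and u(1)=1, whose solution has a boundary layer near x=0. The architecture, problem, and hyperparameters are assumptions; GKPINN's dedicated boundary-layer network is not reproduced here.

```python
import torch
import torch.nn as nn

eps = 1e-2  # small perturbation parameter producing the boundary layer
net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.rand(256, 1, requires_grad=True)   # interior collocation points
xb = torch.tensor([[0.0], [1.0]])            # boundary points
ub = torch.tensor([[0.0], [1.0]])            # boundary values u(0)=0, u(1)=1

for step in range(2000):
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    residual = eps * d2u + du                # PDE residual of eps*u'' + u' = 0
    loss = (residual ** 2).mean() + ((net(xb) - ub) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```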
arXiv Detail & Related papers (2024-08-27T02:03:22Z)
- Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z)
- Deep Quantum Neural Networks are Gaussian Process [0.0]
We present a framework to examine the impact of finite width in the closed-form relationship using a $1/d$ expansion.
We elucidate the relationship between the GP and its parameter-space equivalent, characterized by the Quantum Neural Tangent Kernels (QNTK).
arXiv Detail & Related papers (2023-05-22T03:07:43Z)
- Spherical Inducing Features for Orthogonally-Decoupled Gaussian Processes [7.4468224549568705]
Gaussian processes (GPs) are often compared unfavorably to deep neural networks (NNs) for lacking the ability to learn representations.
Recent efforts to bridge the gap between GPs and deep NNs have yielded a new class of inter-domain variational GPs in which the inducing variables correspond to hidden units of a feedforward NN.
arXiv Detail & Related papers (2023-04-27T09:00:02Z)
- Extrapolation and Spectral Bias of Neural Nets with Hadamard Product: a Polynomial Net Study [55.12108376616355]
The study of the NTK has been devoted to typical neural network architectures, but it is incomplete for neural networks with Hadamard products (NNs-Hp).
In this work, we derive the finite-width NTK formulation for a special class of NNs-Hp, i.e., polynomial neural networks.
We prove their equivalence to the kernel regression predictor with the associated NTK, which expands the application scope of NTK.
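As a reminder of the kernel-regression predictor invoked here, below is a minimal kernel ridge regression sketch in which a generic RBF kernel stands in for the architecture-specific NTK; all names and the kernel choice are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, ell=1.0):
    """Generic RBF kernel; an illustrative stand-in for the NTK."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def kernel_regression_predict(X_train, y_train, X_test, ridge=1e-6):
    """Kernel (ridge) regression predictor: f(x) = k(x, X) (K + ridge*I)^{-1} y."""
    K = rbf_kernel(X_train, X_train)
    alpha = np.linalg.solve(K + ridge * np.eye(len(X_train)), y_train)
    return rbf_kernel(X_test, X_train) @ alpha

# Toy usage on 1-D data
X = np.linspace(-1, 1, 30)[:, None]
y = np.sin(3 * X[:, 0])
Xs = np.linspace(-1, 1, 100)[:, None]
preds = kernel_regression_predict(X, y, Xs)
```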
arXiv Detail & Related papers (2022-09-16T06:36:06Z)
- Momentum Diminishes the Effect of Spectral Bias in Physics-Informed Neural Networks [72.09574528342732]
Physics-informed neural network (PINN) algorithms have shown promising results in solving a wide range of problems involving partial differential equations (PDEs).
They often fail to converge to desirable solutions when the target function contains high-frequency features, due to a phenomenon known as spectral bias.
In the present work, we exploit neural tangent kernels (NTKs) to investigate the training dynamics of PINNs evolving under stochastic gradient descent with momentum (SGDM).
arXiv Detail & Related papers (2022-06-29T19:03:10Z)
- How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks [80.55378250013496]
We study how neural networks trained by gradient descent extrapolate what they learn outside the support of the training distribution.
Graph Neural Networks (GNNs) have shown some success in more complex tasks.
arXiv Detail & Related papers (2020-09-24T17:48:59Z)
- Neural Networks and Quantum Field Theory [0.0]
We propose a theoretical understanding of neural networks in terms of Wilsonian effective field theory.
The correspondence relies on the fact that many asymptotic neural networks are drawn from Gaussian processes.
arXiv Detail & Related papers (2020-08-19T18:00:06Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
We provide the first tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
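As an illustration of this min-max formulation, here is a minimal sketch of alternating gradient descent-ascent with two small NN players on a toy conditional-moment objective; the objective, data, and hyperparameters are assumptions and are not the paper's exact game.

```python
import torch
import torch.nn as nn

# min over f, max over u of E[u(X)(Y - f(X))] - 0.5 E[u(X)^2]:
# the inner max enforces the conditional moment condition E[Y - f(X) | X] = 0.
f = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))  # estimator
u = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))  # adversary

opt_f = torch.optim.SGD(f.parameters(), lr=1e-2)
opt_u = torch.optim.SGD(u.parameters(), lr=1e-2)

X = torch.randn(256, 1)
Y = 2.0 * X + 0.1 * torch.randn(256, 1)  # toy data with true f*(x) = 2x

for step in range(1000):
    # adversary ascends the game value
    game = (u(X) * (Y - f(X))).mean() - 0.5 * (u(X) ** 2).mean()
    opt_u.zero_grad()
    (-game).backward()
    opt_u.step()
    # estimator descends the game value
    game = (u(X) * (Y - f(X))).mean() - 0.5 * (u(X) ** 2).mean()
    opt_f.zero_grad()
    game.backward()
    opt_f.step()
```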
arXiv Detail & Related papers (2020-07-02T17:55:47Z)