Reconstructing spectral functions via automatic differentiation
- URL: http://arxiv.org/abs/2111.14760v1
- Date: Mon, 29 Nov 2021 18:09:49 GMT
- Title: Reconstructing spectral functions via automatic differentiation
- Authors: Lingxiao Wang, Shuzhe Shi, Kai Zhou
- Abstract summary: Reconstructing spectral functions from Euclidean Green's functions is an important inverse problem in many-body physics.
We propose an automatic differentiation (AD) framework as a generic tool for spectral reconstruction from propagator observables.
- Score: 30.015034534260664
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reconstructing spectral functions from Euclidean Green's functions is an
important inverse problem in many-body physics. However, the inversion is
ill-posed for realistic systems with noisy Green's functions. In this Letter, we
propose an automatic differentiation (AD) framework as a generic tool for
spectral reconstruction from propagator observables. Exploiting the
regularization of neural networks as a non-local smoothness regulator of the
spectral function, we represent spectral functions by neural networks and use
the propagator reconstruction error to optimize the network parameters in an
unsupervised manner. During training, apart from the positive-definite form of
the spectral function, no other explicit physical priors are embedded in the
neural networks. The reconstruction performance is assessed through relative
entropy and mean square error for two different network representations.
Compared to the maximum entropy method, the AD framework achieves better
performance in large-noise situations. It is noted that the freedom to introduce
non-local regularization is an inherent advantage of the present framework and
may lead to substantial improvements in solving inverse problems.
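To make the procedure concrete, here is a minimal, hypothetical sketch (not the authors' code): a small network parametrizes a positive spectral function, a Källén-Lehmann-type kernel maps it to a mock Euclidean propagator, and a chi-square-like reconstruction error is minimized by backpropagation. The kernel form, grids, mock data, and network sizes are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code): represent rho(omega) with a small MLP,
# map it to a propagator through a Kaellen-Lehmann-type kernel, and minimize the
# chi-square-like reconstruction error by backpropagation (unsupervised).
import torch

torch.manual_seed(0)

# Frequency grid and Euclidean momenta (illustrative choices).
omega = torch.linspace(0.0, 10.0, 500)
d_omega = omega[1] - omega[0]
p = torch.linspace(0.1, 5.0, 25)

# Assumed kernel K(p, omega) = omega / (omega^2 + p^2) / pi.
kernel = omega[None, :] / (omega[None, :] ** 2 + p[:, None] ** 2) / torch.pi

# Hypothetical "ground truth" propagator with noise, standing in for lattice data.
rho_true = torch.exp(-0.5 * (omega - 3.0) ** 2)
D_data = kernel @ rho_true * d_omega
D_data = D_data + 1e-3 * torch.randn_like(D_data)

# Neural-network representation of the spectral function; softplus keeps it positive.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1), torch.nn.Softplus(),
)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    rho = net(omega[:, None]).squeeze(-1)   # rho(omega) >= 0
    D_pred = kernel @ rho * d_omega         # forward map to the propagator
    loss = ((D_pred - D_data) ** 2).sum()   # chi-square-like reconstruction error
    opt.zero_grad()
    loss.backward()                         # automatic differentiation
    opt.step()
```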
Related papers
- Scalable spectral representations for network multiagent control [53.631272539560435]
A popular model for multi-agent control, Network Markov Decision Processes (MDPs) pose a significant challenge to efficient learning.
We first derive scalable spectral local representations for network MDPs, which induce a network linear subspace for the local $Q$-function of each agent.
We design a scalable algorithmic framework for continuous state-action network MDPs, and provide end-to-end guarantees for the convergence of our algorithm.
arXiv Detail & Related papers (2024-10-22T17:45:45Z) - Point-Calibrated Spectral Neural Operators [54.13671100638092]
We introduce the Point-Calibrated Spectral Transform, through which Point-Calibrated Spectral Neural Operators learn operator mappings by approximating functions with a point-level adaptive spectral basis.
arXiv Detail & Related papers (2024-10-15T08:19:39Z) - Deep Learning without Global Optimization by Random Fourier Neural Networks [0.0]
We introduce a new training algorithm for a variety of deep neural networks that use random complex exponential activation functions.
Our approach employs a Markov Chain Monte Carlo sampling procedure to iteratively train network layers.
It consistently attains the theoretical approximation rate for residual networks with complex exponential activation functions.
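A one-layer caricature of that idea, hedged and not the paper's actual algorithm: random complex-exponential (Fourier) features whose frequencies are refined by a Metropolis-style resampling, with the output amplitudes re-fit by least squares at each step. The target function, proposal scale, and temperature are illustrative assumptions.

```python
# Hedged sketch: random complex-exponential features with Metropolis-style
# frequency resampling and least-squares amplitude fitting (a one-layer caricature).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 200)
y = np.sign(np.sin(3 * np.pi * x))              # target function to approximate

def fit_amplitudes(freqs):
    # Complex exponential features exp(i * w * x); amplitudes by least squares.
    feats = np.exp(1j * np.outer(x, freqs))
    amps, *_ = np.linalg.lstsq(feats, y.astype(complex), rcond=None)
    resid = np.linalg.norm(feats @ amps - y)
    return amps, resid

freqs = rng.normal(scale=10.0, size=30)
_, err = fit_amplitudes(freqs)
for _ in range(500):
    proposal = freqs + rng.normal(scale=0.5, size=freqs.shape)  # random-walk proposal
    _, new_err = fit_amplitudes(proposal)
    # Metropolis-style acceptance favoring lower approximation error.
    if new_err < err or np.exp((err - new_err) / 0.01) > rng.random():
        freqs, err = proposal, new_err
amps, err = fit_amplitudes(freqs)
print(err)
```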
arXiv Detail & Related papers (2024-07-16T16:23:40Z) - Nonlinear functional regression by functional deep neural network with
kernel embedding [20.306390874610635]
We propose a functional deep neural network with an efficient and fully data-dependent dimension reduction method.
The architecture of our functional net consists of a kernel embedding step, a projection step, and a deep ReLU neural network for the prediction.
The utilization of smooth kernel embedding enables our functional net to be discretization invariant, efficient, and robust to noisy observations.
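As a rough illustration of the three-stage pipeline described above, the following hedged sketch composes a kernel embedding onto fixed landmark points, a PCA-style projection, and a deep ReLU network; the Gaussian kernel, the projection choice, and all sizes are assumptions rather than the paper's exact design.

```python
# Hedged sketch: kernel embedding -> projection -> deep ReLU net, with
# illustrative choices for the kernel, projection, and layer sizes.
import numpy as np
import torch

def gaussian_kernel(s, t, bandwidth=0.2):
    return np.exp(-(s[:, None] - t[None, :]) ** 2 / (2 * bandwidth ** 2))

# Functional inputs observed on an irregular grid (discretization-dependent part).
t_obs = np.sort(np.random.rand(80))
X = np.sin(2 * np.pi * t_obs)[None, :] + 0.05 * np.random.randn(32, 80)

# 1) Kernel embedding onto fixed landmark points -> grid-independent features.
s = np.linspace(0, 1, 30)
emb = X @ gaussian_kernel(t_obs, s) / len(t_obs)   # (n_samples, n_landmarks)

# 2) Projection step: data-dependent dimension reduction (here a simple PCA).
emb_c = emb - emb.mean(0)
_, _, Vt = np.linalg.svd(emb_c, full_matrices=False)
Z = emb_c @ Vt[:8].T                               # keep 8 components

# 3) Deep ReLU network for the scalar prediction.
relu_net = torch.nn.Sequential(
    torch.nn.Linear(8, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 1),
)
y_pred = relu_net(torch.tensor(Z, dtype=torch.float32))
```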
arXiv Detail & Related papers (2024-01-05T16:43:39Z) - Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
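A minimal sketch of the idea, under the assumption that linear interpolation here means averaging the parameters before and after a base optimizer step (a Lookahead-style relaxation); the interpolation weight and the SGD base step are illustrative, not the paper's exact scheme.

```python
# Hedged sketch: linearly interpolate between the pre-step ("anchor") parameters
# and the parameters produced by one base optimizer step.
import torch

model = torch.nn.Linear(10, 1)
base_opt = torch.optim.SGD(model.parameters(), lr=0.1)
lam = 0.5  # interpolation weight (assumed value)

x = torch.randn(64, 10)
y = torch.randn(64, 1)

for step in range(100):
    # Keep a copy of the current parameters.
    anchor = [p.detach().clone() for p in model.parameters()]

    # One base-optimizer step on the (possibly nonmonotone) loss.
    loss = torch.nn.functional.mse_loss(model(x), y)
    base_opt.zero_grad()
    loss.backward()
    base_opt.step()

    # Linear interpolation: move only part of the way toward the updated point,
    # which acts like a nonexpansive averaging operator and damps oscillations.
    with torch.no_grad():
        for p, a in zip(model.parameters(), anchor):
            p.copy_((1 - lam) * a + lam * p)
```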
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - A Lifted Bregman Formulation for the Inversion of Deep Neural Networks [28.03724379169264]
We propose a novel framework for the regularised inversion of deep neural networks.
The framework lifts the parameter space into a higher dimensional space by introducing auxiliary variables.
We present theoretical results and support their practical application with numerical examples.
arXiv Detail & Related papers (2023-03-01T20:30:22Z) - Momentum Diminishes the Effect of Spectral Bias in Physics-Informed
Neural Networks [72.09574528342732]
Physics-informed neural network (PINN) algorithms have shown promising results in solving a wide range of problems involving partial differential equations (PDEs).
They often fail to converge to desirable solutions when the target function contains high-frequency features, due to a phenomenon known as spectral bias.
In the present work, we exploit neural tangent kernels (NTKs) to investigate the training dynamics of PINNs evolving under stochastic gradient descent with momentum (SGDM).
arXiv Detail & Related papers (2022-06-29T19:03:10Z) - Neural network approach to reconstructing spectral functions and complex
poles of confined particles [0.0]
Reconstructing spectral functions from propagator data is difficult.
Recent work has proposed using neural networks to solve this problem.
We generalize this approach by reconstructing not only spectral functions but also possible pairs of complex poles or an infrared (IR) cutoff.
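For intuition, a hedged sketch of such a generalized forward map, assuming a Euclidean propagator built from a smooth spectral part plus a pair of complex-conjugate poles; the pole positions, residues, and kernel form are illustrative placeholders, not the paper's parametrization.

```python
# Hedged sketch: Euclidean propagator from a smooth spectral function plus a pair
# of complex-conjugate poles. All numbers and the kernel form are assumptions.
import numpy as np

omega = np.linspace(0.0, 10.0, 500)
d_omega = omega[1] - omega[0]
p2 = np.linspace(0.1, 25.0, 50)                    # Euclidean p^2 values

rho = np.exp(-0.5 * (omega - 3.0) ** 2)            # smooth spectral part
Z = 0.8 + 0.3j                                     # complex residue (assumed)
m2 = 1.5 + 0.6j                                    # complex pole position (assumed)

# Spectral integral: int d(omega) rho(omega) * omega / (omega^2 + p^2) / pi
spectral_part = (rho[None, :] * omega[None, :]
                 / (omega[None, :] ** 2 + p2[:, None]) / np.pi).sum(1) * d_omega

# Complex-conjugate pole pair; the sum is real for real Euclidean p^2.
pole_part = (Z / (p2 + m2) + np.conj(Z) / (p2 + np.conj(m2))).real

D = spectral_part + pole_part
print(D[:5])
```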
arXiv Detail & Related papers (2022-03-07T11:13:30Z) - Automatic differentiation approach for reconstructing spectral functions
with neural networks [30.015034534260664]
We propose an automatic differentiation framework as a generic tool for the reconstruction from observable data.
We represent the spectra by neural networks and use the chi-square as the loss function, optimizing the parameters with backward automatic differentiation in an unsupervised manner.
The reconstruction accuracy is assessed through the Kullback-Leibler (KL) divergence and mean square error (MSE) at multiple noise levels.
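As a small illustration of the two assessment metrics mentioned above, the sketch below computes a KL divergence between normalized spectra and the MSE on a frequency grid; the normalization convention and the stand-in spectra are assumptions for illustration.

```python
# Hedged sketch of the two assessment metrics (KL divergence and MSE) on a grid.
import numpy as np

def kl_divergence(rho_true, rho_rec, d_omega):
    # Normalize both spectra so they can be compared as distributions (assumed convention).
    p = rho_true / (rho_true.sum() * d_omega)
    q = rho_rec / (rho_rec.sum() * d_omega)
    return float(np.sum(p * np.log((p + 1e-12) / (q + 1e-12))) * d_omega)

def mse(rho_true, rho_rec):
    return float(np.mean((rho_true - rho_rec) ** 2))

omega = np.linspace(0.0, 10.0, 500)
d_omega = omega[1] - omega[0]
rho_true = np.exp(-0.5 * (omega - 3.0) ** 2)
rho_rec = np.exp(-0.5 * (omega - 3.1) ** 2 / 1.1)  # a stand-in reconstruction

print(kl_divergence(rho_true, rho_rec, d_omega), mse(rho_true, rho_rec))
```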
arXiv Detail & Related papers (2021-12-12T11:21:57Z) - Modeling from Features: a Mean-field Framework for Over-parameterized
Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z) - On dissipative symplectic integration with applications to
gradient-based optimization [77.34726150561087]
We propose a geometric framework in which discretizations can be realized systematically.
We show that a generalization of symplectic integrators to nonconservative and, in particular, dissipative Hamiltonian systems is able to preserve rates of convergence up to a controlled error.
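A hedged sketch of one such discretization, assuming a conformal-symplectic-style Euler step for the dissipative Hamiltonian system x' = p, p' = -grad f(x) - gamma * p, which behaves like a momentum optimizer; the step size, damping, and test objective are illustrative choices, not the paper's specific scheme.

```python
# Hedged sketch: conformal-symplectic-style Euler step for a dissipative
# Hamiltonian system, used as a momentum-like optimizer on a quadratic test objective.
import numpy as np

def grad_f(x):
    # Gradient of the test objective f(x) = 0.5 * ||x||^2.
    return x

def dissipative_symplectic_step(x, p, h=0.1, gamma=1.0):
    p = np.exp(-gamma * h) * p - h * grad_f(x)   # damped momentum update
    x = x + h * p                                # position update with new momentum
    return x, p

x = np.array([5.0, -3.0])
p = np.zeros_like(x)
for _ in range(200):
    x, p = dissipative_symplectic_step(x, p)
print(x)  # approaches the minimizer at the origin
```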
arXiv Detail & Related papers (2020-04-15T00:36:49Z)