Automatic differentiation approach for reconstructing spectral functions with neural networks
- URL: http://arxiv.org/abs/2112.06206v1
- Date: Sun, 12 Dec 2021 11:21:57 GMT
- Title: Automatic differentiation approach for reconstructing spectral functions with neural networks
- Authors: Lingxiao Wang, Shuzhe Shi, Kai Zhou
- Abstract summary: We propose an automatic differentiation framework as a generic tool for reconstructing spectral functions from observable data.
We represent the spectra by neural networks and take the chi-square as the loss function, optimizing the parameters in an unsupervised manner with backward automatic differentiation.
The reconstruction accuracy is assessed through the Kullback-Leibler (KL) divergence and the mean squared error (MSE) at multiple noise levels.
- Score: 30.015034534260664
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Reconstructing spectral functions from Euclidean Green's functions is an
important inverse problem in physics. Prior knowledge of the specific physical
system routinely provides essential regularization schemes for solving this
ill-posed problem approximately. With this in mind, we propose an automatic
differentiation framework as a generic tool for reconstruction from observable
data. We represent the spectra by neural networks and take the chi-square as
the loss function, optimizing the parameters in an unsupervised manner with
backward automatic differentiation. During training, no explicit physical prior
is embedded into the neural networks other than the positive-definite form of
the spectra. The reconstruction accuracy is assessed through the
Kullback-Leibler (KL) divergence and the mean squared error (MSE) at multiple
noise levels. The automatic differentiation framework and the freedom to
introduce regularization are inherent advantages of the present approach and
may lead to further improvements in solving inverse problems.
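For concreteness, the recipe described in the abstract can be sketched in a few lines of PyTorch. Everything specific below is an assumption made for illustration: the Källén-Lehmann-type kernel K(p, ω) = ω/(ω² + p²), the mock data, the network size, and the optimizer are placeholders rather than the authors' setup. Only the overall pattern (a positive, network-parameterized spectrum, a chi-square loss, backward automatic differentiation, and KL/MSE assessment) follows the paper's description.

```python
# Hedged sketch of the unsupervised AD reconstruction described above.
# Assumptions (not taken from the paper): a Kallen-Lehmann-type kernel
# K(p, w) = w / (w^2 + p^2), a small MLP for rho(w), and Gaussian noise.
import torch

torch.manual_seed(0)

# Frequency grid for the spectral function and momenta of the "data".
omega = torch.linspace(0.0, 10.0, 500)
p     = torch.linspace(0.1, 5.0, 25)
kernel = omega / (omega**2 + p[:, None]**2)            # shape (N_p, N_omega)

# Mock "observed" propagator from a known spectral function plus noise.
rho_true = torch.exp(-(omega - 3.0)**2)                # toy ground truth
G_clean  = torch.trapz(kernel * rho_true, omega, dim=1)
sigma    = 1e-3 * torch.abs(G_clean) + 1e-6
G_obs    = G_clean + sigma * torch.randn_like(G_clean)

# Neural-network representation of rho(omega); Softplus keeps it positive,
# which is the only physics prior mentioned in the abstract.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1), torch.nn.Softplus(),
)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(5000):
    rho    = net(omega[:, None]).squeeze(-1)            # rho(omega) >= 0
    G_pred = torch.trapz(kernel * rho, omega, dim=1)    # forward map
    chi2   = torch.sum(((G_pred - G_obs) / sigma)**2)   # chi-square loss
    opt.zero_grad()
    chi2.backward()                                      # backward AD
    opt.step()

# Reconstruction quality: MSE and a KL-type divergence between the
# (normalized) true and reconstructed spectra, as in the abstract.
with torch.no_grad():
    rho    = net(omega[:, None]).squeeze(-1)
    mse    = torch.mean((rho - rho_true)**2)
    p_true = rho_true / torch.trapz(rho_true, omega)
    p_rec  = rho / torch.trapz(rho, omega)
    kl = torch.trapz(p_true * torch.log((p_true + 1e-12) / (p_rec + 1e-12)), omega)
    print(f"chi2={chi2.item():.3f}  MSE={mse.item():.3e}  KL={kl.item():.3e}")
```

Additional regularizers (for example a smoothness penalty on rho) can simply be added to chi2 before calling backward(), which is the "freedom of introducing regularization" the abstract points to.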
Related papers
- Nonlinear functional regression by functional deep neural network with kernel embedding [20.306390874610635]
We propose a functional deep neural network with an efficient and fully data-dependent dimension reduction method.
The architecture of our functional net consists of a kernel embedding step, a projection step, and a deep ReLU neural network for the prediction.
The utilization of smooth kernel embedding enables our functional net to be discretization invariant, efficient, and robust to noisy observations.
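A purely schematic reading of this architecture (kernel embedding, then projection, then a deep ReLU network) is sketched below; the Gaussian kernel, landmark points, and layer widths are placeholders of my own and are not taken from the paper.

```python
# Schematic (assumed) version of a kernel-embedding functional net:
# discretized input function -> smooth kernel embedding -> linear
# projection -> deep ReLU network. Kernel choice and sizes are placeholders.
import torch

def gaussian_kernel(s, t, width=0.2):
    return torch.exp(-(s[:, None] - t[None, :])**2 / (2 * width**2))

grid      = torch.linspace(0, 1, 100)   # observation points of the input function
landmarks = torch.linspace(0, 1, 20)    # points defining the embedding

class FunctionalNet(torch.nn.Module):
    def __init__(self, n_landmarks=20, n_proj=10):
        super().__init__()
        self.proj = torch.nn.Linear(n_landmarks, n_proj, bias=False)  # projection step
        self.mlp = torch.nn.Sequential(                               # deep ReLU net
            torch.nn.Linear(n_proj, 64), torch.nn.ReLU(),
            torch.nn.Linear(64, 64), torch.nn.ReLU(),
            torch.nn.Linear(64, 1),
        )

    def forward(self, f_values):
        # Kernel embedding: quadrature of the observed function values against
        # kernel sections k(., t_j); the smoothing is what makes the map
        # insensitive to the particular discretization grid.
        K = gaussian_kernel(grid, landmarks)            # (n_grid, n_landmarks)
        embedded = f_values @ K / grid.numel()          # (batch, n_landmarks)
        return self.mlp(self.proj(embedded))

# Usage: a batch of 8 noisy input functions sampled on `grid`.
f = torch.sin(4 * grid) + 0.1 * torch.randn(8, grid.numel())
print(FunctionalNet()(f).shape)   # torch.Size([8, 1])
```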
arXiv Detail & Related papers (2024-01-05T16:43:39Z)
- Function-Space Regularization in Neural Networks: A Probabilistic Perspective [51.133793272222874]
We show that we can derive a well-motivated regularization technique that allows explicitly encoding information about desired predictive functions into neural network training.
We evaluate the utility of this regularization technique empirically and demonstrate that the proposed method leads to near-perfect semantic shift detection and highly-calibrated predictive uncertainty estimates.
arXiv Detail & Related papers (2023-12-28T17:50:56Z)
- Regularization, early-stopping and dreaming: a Hopfield-like setup to address generalization and overfitting [0.0]
We look for optimal network parameters by applying gradient descent to a regularized loss function.
Within this framework, the optimal neuron-interaction matrices correspond to Hebbian kernels revised by a reiterated unlearning protocol.
arXiv Detail & Related papers (2023-08-01T15:04:30Z)
- A Lifted Bregman Formulation for the Inversion of Deep Neural Networks [28.03724379169264]
We propose a novel framework for the regularised inversion of deep neural networks.
The framework lifts the parameter space into a higher dimensional space by introducing auxiliary variables.
We present theoretical results and support their practical application with numerical examples.
arXiv Detail & Related papers (2023-03-01T20:30:22Z)
- Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
arXiv Detail & Related papers (2023-02-27T18:52:38Z)
- Neural network approach to reconstructing spectral functions and complex poles of confined particles [0.0]
Reconstructing spectral functions from propagator data is difficult.
Recent work has proposed using neural networks to solve this problem.
We generalize this approach by reconstructing not only spectral functions but also (possible) pairs of complex poles or an infrared (IR) cutoff.
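For intuition, the generalization described above can be pictured with a toy forward model in which the Euclidean propagator receives contributions from a smooth spectral function and from one pair of complex-conjugate poles. The kernel and the pole parameterization below are a common convention assumed for illustration, not necessarily the ones used in that paper.

```python
# Toy (assumed) forward model: Euclidean propagator built from a smooth
# spectral function plus one complex-conjugate pole pair, the kind of
# structure a reconstruction network would have to infer from propagator data.
import numpy as np

omega = np.linspace(0.0, 10.0, 500)          # frequency grid
p2    = np.linspace(0.01, 25.0, 50)          # Euclidean momenta squared

rho  = np.exp(-(omega - 3.0)**2)             # smooth spectral part (toy)
pole = 1.0 + 0.5j                            # complex pole position q
Z    = 0.3 + 0.2j                            # complex residue

def propagator(psq):
    # continuum contribution: integral of 2*w*rho(w) / (w^2 + p^2)
    cont = np.trapz(2 * omega * rho / (omega**2 + psq), omega)
    # the pole pair contributes Z/(p^2+q) + conj(Z)/(p^2+conj(q)), which is real
    poles = 2 * np.real(Z / (psq + pole))
    return cont + poles

G = np.array([propagator(x) for x in p2])
print(G[:5])
```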
arXiv Detail & Related papers (2022-03-07T11:13:30Z)
- Reconstructing spectral functions via automatic differentiation [30.015034534260664]
Reconstructing spectral functions from Euclidean Green's functions is an important inverse problem in many-body physics.
We propose an automatic differentiation (AD) framework as a generic tool for spectral reconstruction from propagator observables.
arXiv Detail & Related papers (2021-11-29T18:09:49Z)
- Multivariate Deep Evidential Regression [77.34726150561087]
A new approach with uncertainty-aware neural networks shows promise over traditional deterministic methods.
We discuss three issues with a proposed solution to extract aleatoric and epistemic uncertainties from regression-based neural networks.
arXiv Detail & Related papers (2021-04-13T12:20:18Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- Lipschitz Recurrent Neural Networks [100.72827570987992]
We show that our Lipschitz recurrent unit is more robust with respect to input and parameter perturbations as compared to other continuous-time RNNs.
Our experiments demonstrate that the Lipschitz RNN can outperform existing recurrent units on a range of benchmark tasks.
arXiv Detail & Related papers (2020-06-22T08:44:52Z)
- On dissipative symplectic integration with applications to gradient-based optimization [77.34726150561087]
We propose a geometric framework in which discretizations can be realized systematically.
We show that a generalization of symplectic to nonconservative and in particular dissipative Hamiltonian systems is able to preserve rates of convergence up to a controlled error.
arXiv Detail & Related papers (2020-04-15T00:36:49Z)
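As a minimal illustration of the last item, a dissipative ("conformal-symplectic"-style) splitting of the damped Hamiltonian system q' = p, p' = -γp - ∇f(q) can be written in a few lines. The damping, step size, and toy objective below are assumptions, and the scheme is only a representative example of the class of integrators that paper studies, shown here because it reduces to heavy-ball momentum gradient descent.

```python
# Minimal sketch: dissipative (conformal-symplectic-style) Euler step for
# the damped Hamiltonian system  q' = p,  p' = -gamma*p - grad f(q),
# which coincides with classical momentum / heavy-ball gradient descent.
import numpy as np

def grad_f(q):
    # toy quadratic objective f(q) = 0.5 * q^T A q
    A = np.array([[3.0, 0.5], [0.5, 1.0]])
    return A @ q

def dissipative_step(q, p, h=0.1, gamma=1.0):
    p = np.exp(-gamma * h) * p - h * grad_f(q)   # damp momentum, then kick
    q = q + h * p                                 # drift
    return q, p

q, p = np.array([5.0, -3.0]), np.zeros(2)
for _ in range(200):
    q, p = dissipative_step(q, p)
print(q)   # converges toward the minimizer at the origin
```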
This list is automatically generated from the titles and abstracts of the papers in this site.