Inference of response functions with the help of machine learning algorithms
- URL: http://arxiv.org/abs/2501.10583v1
- Date: Fri, 17 Jan 2025 22:21:41 GMT
- Title: Inference of response functions with the help of machine learning algorithms
- Authors: Doga Murat Kurkcuoglu, Alessandro Roggero, Gabriel N. Perdue, Rajan Gupta
- Abstract summary: We employ a neural network prediction algorithm to reconstruct a response function $S(\omega)$ defined over a range in frequencies $\omega$.
We compare the quality of response functions reconstructed from coefficients calculated with a neural network (NN) algorithm against those computed using the Gaussian Integral Transform (GIT) method.
In the regime where only a small number of terms in the Chebyshev series are retained, we find that the NN scheme outperforms the GIT method.
- Score: 42.12937192948916
- License:
- Abstract: Response functions are a key quantity to describe the near-equilibrium dynamics of strongly-interacting many-body systems. Recent techniques that attempt to overcome the challenges of calculating these \emph{ab initio} have employed expansions in terms of orthogonal polynomials. We employ a neural network prediction algorithm to reconstruct a response function $S(\omega)$ defined over a range in frequencies $\omega$. We represent the calculated response function as a truncated Chebyshev series whose coefficients can be optimized to reduce the representation error. We compare the quality of response functions reconstructed from coefficients calculated with a neural network (NN) algorithm against those computed using the Gaussian Integral Transform (GIT) method. In the regime where only a small number of terms in the Chebyshev series are retained, we find that the NN scheme outperforms the GIT method.
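To make the representation concrete: truncating the Chebyshev expansion at $K$ terms gives $S(\omega) \approx \sum_{k=0}^{K-1} c_k T_k(x(\omega))$, where $x(\omega)$ maps the frequency window linearly onto $[-1, 1]$. The NumPy sketch below (not the paper's code) illustrates this on a toy two-peak response: it first computes fixed coefficients by Gauss-Chebyshev quadrature projection, standing in for a fixed orthogonal expansion such as GIT, and then refines them by gradient descent on the mean-squared representation error, standing in for the NN-predicted coefficients. The target function, frequency window, $K$, grids, and learning rate are all illustrative assumptions.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Toy stand-in for a physical response function S(omega); in the paper,
# S(omega) comes from ab initio many-body calculations.
def S(omega):
    return (np.exp(-((omega - 0.3) / 0.05) ** 2)
            + 0.5 * np.exp(-((omega - 0.7) / 0.10) ** 2))

w_min, w_max = 0.0, 1.0   # illustrative frequency window
K = 12                    # retained Chebyshev terms (the small-K regime)

to_x = lambda w: 2.0 * (w - w_min) / (w_max - w_min) - 1.0   # omega -> [-1, 1]
to_w = lambda x: 0.5 * (x + 1.0) * (w_max - w_min) + w_min   # inverse map

# Fixed coefficients from Gauss-Chebyshev quadrature projection
# (a stand-in for a fixed orthogonal expansion, NOT the GIT construction).
N = 200
theta = np.pi * (np.arange(N) + 0.5) / N     # quadrature angles
fvals = S(to_w(np.cos(theta)))               # S at the quadrature nodes
c_fixed = np.array([(2.0 / N) * fvals @ np.cos(k * theta) for k in range(K)])
c_fixed[0] *= 0.5

# "Optimized" coefficients: refine the truncated series by gradient descent
# on the mean-squared representation error over a uniform frequency grid,
# standing in for the paper's NN prediction of improved coefficients.
w_grid = np.linspace(w_min, w_max, 400)
T = C.chebvander(to_x(w_grid), K - 1)        # matrix of T_k(x) values
target = S(w_grid)
c, lr = c_fixed.copy(), 0.2
for _ in range(10000):
    c -= lr * 2.0 * T.T @ (T @ c - target) / len(w_grid)

mse = lambda coef: np.mean((T @ coef - target) ** 2)
print(f"K={K}: fixed MSE = {mse(c_fixed):.3e}, optimized MSE = {mse(c):.3e}")
```

In this sketch the refined coefficients can only lower the chosen error metric relative to the fixed projection, which mirrors the paper's observation that optimizing coefficients pays off precisely when only a few Chebyshev terms are retained.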
Related papers
- Deep Learning without Global Optimization by Random Fourier Neural Networks [0.0]
We introduce a new training algorithm for a variety of deep neural networks that utilize random complex exponential activation functions.
Our approach employs a Markov Chain Monte Carlo sampling procedure to iteratively train network layers.
It consistently attains the theoretical approximation rate for residual networks with complex exponential activation functions.
arXiv Detail & Related papers (2024-07-16T16:23:40Z)
- Chebyshev Spectral Neural Networks for Solving Partial Differential Equations [0.0]
The study uses a feedforward neural network model and error backpropagation principles, utilizing automatic differentiation (AD) to compute the derivatives entering the loss function.
The numerical efficiency and accuracy of the CSNN model are investigated through testing on elliptic partial differential equations, and it is compared with the well-known Physics-Informed Neural Network (PINN) method; a minimal sketch of the Chebyshev-basis idea appears after this list.
arXiv Detail & Related papers (2024-06-06T05:31:45Z)
- A Mean-Field Analysis of Neural Stochastic Gradient Descent-Ascent for Functional Minimax Optimization [90.87444114491116]
This paper studies minimax optimization problems defined over infinite-dimensional function classes of overparameterized two-layer neural networks.
We address (i) the convergence of the gradient descent-ascent algorithm and (ii) the representation learning of the neural networks.
Results show that the feature representation induced by the neural networks is allowed to deviate from the initial one by a magnitude of $O(\alpha^{-1})$, measured in terms of the Wasserstein distance.
arXiv Detail & Related papers (2024-04-18T16:46:08Z)
- A Globally Convergent Algorithm for Neural Network Parameter Optimization Based on Difference-of-Convex Functions [29.58728073957055]
We propose an algorithm for optimizing the parameters of hidden-layer networks.
Specifically, we derive a blockwise difference-of-convex (DC) representation of the objective function.
arXiv Detail & Related papers (2024-01-15T19:53:35Z)
- Stochastic Optimization for Non-convex Problem with Inexact Hessian Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) methods and adaptive regularization using cubics (ARC) have proven to have some very appealing theoretical properties.
We show that TR and ARC methods can simultaneously accommodate inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z)
- A new approach to generalisation error of machine learning algorithms: Estimates and convergence [0.0]
We introduce a new approach to the estimation of the (generalisation) error and to convergence.
Our results include estimates of the error without any structural assumption on the neural networks.
arXiv Detail & Related papers (2023-06-23T20:57:31Z)
- A Recursively Recurrent Neural Network (R2N2) Architecture for Learning Iterative Algorithms [64.3064050603721]
We generalize the Runge-Kutta neural network to a recursively recurrent neural network (R2N2) superstructure for the design of customized iterative algorithms.
We demonstrate that regular training of the weight parameters inside the proposed superstructure on input/output data of various computational problem classes yields similar iterations to Krylov solvers for linear equation systems, Newton-Krylov solvers for nonlinear equation systems, and Runge-Kutta solvers for ordinary differential equations.
arXiv Detail & Related papers (2022-11-22T16:30:33Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in the desired structure.
We propose a novel multi-level graph neural network framework that captures interactions at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Channel Assignment in Uplink Wireless Communication using Machine Learning Approach [54.012791474906514]
This letter investigates a channel assignment problem in uplink wireless communication systems.
Our goal is to maximize the sum rate of all users subject to integer channel assignment constraints.
Due to the problem's high computational complexity, machine learning approaches are employed to obtain computationally efficient solutions.
arXiv Detail & Related papers (2020-01-12T15:54:20Z)
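As referenced in the Chebyshev Spectral Neural Networks entry above, here is a minimal sketch of the Chebyshev-basis idea, not the CSNN architecture or training procedure from that paper: when the trial solution is linear in its Chebyshev coefficients, the residual of a linear elliptic PDE is linear as well, so a toy problem can be solved directly by least squares instead of backpropagation. The manufactured problem, collocation points, and boundary weighting below are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Toy elliptic problem: u''(x) = f(x) on [-1, 1] with u(-1) = u(1) = 0,
# using the manufactured solution u*(x) = sin(pi x), so f = -pi^2 sin(pi x).
f = lambda x: -np.pi**2 * np.sin(np.pi * x)
u_exact = lambda x: np.sin(np.pi * x)

K = 16                                   # number of Chebyshev modes
x = np.cos(np.pi * np.arange(64) / 63)   # Chebyshev-Gauss-Lobatto collocation points

# Column k holds T_k''(x): the map from coefficients to u'' values is linear.
D2 = np.column_stack([C.chebval(x, C.chebder(e, 2)) for e in np.eye(K)])
B = C.chebvander(np.array([-1.0, 1.0]), K - 1)   # boundary evaluation rows

# Stack the interior residual and (weighted) boundary conditions, then solve
# by least squares; this direct solve replaces gradient-based training.
A = np.vstack([D2, 10.0 * B])
b = np.concatenate([f(x), np.zeros(2)])
a, *_ = np.linalg.lstsq(A, b, rcond=None)

x_test = np.linspace(-1.0, 1.0, 201)
print("max error:", np.max(np.abs(C.chebval(x_test, a) - u_exact(x_test))))
```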