Low rank compression in the numerical solution of the nonequilibrium
Dyson equation
- URL: http://arxiv.org/abs/2010.06511v3
- Date: Thu, 25 Feb 2021 18:38:00 GMT
- Title: Low rank compression in the numerical solution of the nonequilibrium
Dyson equation
- Authors: Jason Kaye, Denis Golež
- Abstract summary: We propose a method to improve the computational and memory efficiency of numerical solvers for the nonequilibrium Dyson equation in the Keldysh formalism.
It is based on the empirical observation that the nonequilibrium Green's functions and self energies arising in many problems of physical interest, discretized as matrices, have low rank off-diagonal blocks.
We describe an efficient algorithm to build this compressed representation on the fly during the course of time stepping, and use the representation to reduce the cost of computing history integrals.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a method to improve the computational and memory efficiency of
numerical solvers for the nonequilibrium Dyson equation in the Keldysh
formalism. It is based on the empirical observation that the nonequilibrium
Green's functions and self energies arising in many problems of physical
interest, discretized as matrices, have low rank off-diagonal blocks, and can
therefore be compressed using a hierarchical low rank data structure. We
describe an efficient algorithm to build this compressed representation on the
fly during the course of time stepping, and use the representation to reduce
the cost of computing history integrals, which is the main computational
bottleneck. For systems with the hierarchical low rank property, our method
reduces the computational complexity of solving the nonequilibrium Dyson
equation from cubic to near quadratic, and the memory complexity from quadratic
to near linear. We demonstrate the full solver for the Falicov-Kimball model
exposed to a rapid ramp and Floquet driving of system parameters, and are able
to increase feasible propagation times substantially. We present examples with
262144 time steps, which would require approximately five months of computing
time and 2.2 TB of memory using the direct time stepping method, but can be
completed in just over a day on a laptop with less than 4 GB of memory using
our method. We also confirm the hierarchical low rank property for the driven
Hubbard model in the weak coupling regime within the GW approximation, and in
the strong coupling regime within dynamical mean-field theory.
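The off-diagonal low-rank property described above can be illustrated numerically. The sketch below is not the authors' solver; it uses a hypothetical smooth two-time kernel as a stand-in for a discretized Green's function and shows how a truncated SVD compresses an off-diagonal block far below its full dimension:

```python
import numpy as np

# Hypothetical smooth two-time kernel standing in for a discretized
# Green's function G(t, t'); the specific form is illustrative only.
n = 256
t = np.linspace(0.0, 10.0, n)
G = np.exp(-0.1 * (t[:, None] - t[None, :]) ** 2)

# Off-diagonal block: late times versus early times.
block = G[n // 2:, :n // 2]

# Truncated SVD yields a rank-k compression of the block.
U, s, Vt = np.linalg.svd(block, full_matrices=False)
tol = 1e-8
k = int(np.sum(s > tol * s[0]))  # numerical rank at relative tolerance tol

approx = (U[:, :k] * s[:k]) @ Vt[:k, :]
rel_err = np.linalg.norm(block - approx) / np.linalg.norm(block)

# Storing the rank-k factors needs O(k * n) numbers instead of O(n^2),
# which is the mechanism behind the memory savings reported above.
print(k, rel_err)
```

Applying this kind of compression hierarchically across blocks of the two-time domain is what reduces the history-integral cost in the paper's method.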
Related papers
- Solving the Transient Dyson Equation with Quasilinear Complexity via Matrix Compression [0.0]
We introduce a numerical strategy to efficiently solve the out-of-equilibrium Dyson equation in the transient regime.
We achieve significant improvements in computational efficiency, which result in quasi-linear scaling of both time and space complexity with propagation time.
arXiv Detail & Related papers (2024-10-14T20:05:05Z) - Solving Fractional Differential Equations on a Quantum Computer: A Variational Approach [0.1492582382799606]
We introduce an efficient variational hybrid quantum-classical algorithm designed for solving Caputo time-fractional partial differential equations.
Our results indicate that solution fidelity is insensitive to the fractional index and that gradient evaluation cost scales economically with the number of time steps.
arXiv Detail & Related papers (2024-06-13T02:27:16Z) - Resistive Memory-based Neural Differential Equation Solver for Score-based Diffusion Model [55.116403765330084]
Current AIGC methods, such as score-based diffusion, still fall short in speed and efficiency.
We propose a time-continuous and analog in-memory neural differential equation solver for score-based diffusion.
We experimentally validate our solution with 180 nm resistive memory in-memory computing macros.
arXiv Detail & Related papers (2024-04-08T16:34:35Z) - An Optimization-based Deep Equilibrium Model for Hyperspectral Image
Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z) - Adapting reservoir computing to solve the Schrödinger equation [0.0]
Reservoir computing is a machine learning algorithm that excels at predicting the evolution of time series.
We adapt this methodology to integrate the time-dependent Schrödinger equation, propagating an initial wavefunction in time.
arXiv Detail & Related papers (2022-02-12T19:28:11Z) - COSMIC: fast closed-form identification from large-scale data for LTV
systems [4.10464681051471]
We introduce a closed-form method for the identification of discrete-time linear time-variant systems from data.
We develop an algorithm with guarantees of optimality and with a complexity that increases linearly with the number of instants considered per trajectory.
Our algorithm was applied to both a Low Fidelity Simulator and a Functional Engineering Simulator for the Comet Interceptor mission.
arXiv Detail & Related papers (2021-12-08T16:07:59Z) - Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström
Method [35.62926659320816]
We introduce Skyformer, which replaces the softmax structure with a Gaussian kernel to stabilize model training and adapts the Nyström method to accelerate the computation.
Experiments on the Long Range Arena benchmark show that the proposed method achieves performance comparable to or better than full self-attention.
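As a sketch of the general Nyström idea referenced here (not Skyformer's exact construction; the data sizes, bandwidth, and truncation are illustrative assumptions), a kernel matrix can be approximated from a subset of landmark columns:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data and an RBF kernel; sizes and bandwidth are illustrative
# assumptions, not Skyformer's actual configuration.
n, m = 300, 80
X = rng.normal(size=(n, 2))

def rbf(A, B, gamma=0.1):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf(X, X)  # full n x n kernel, computed here for comparison only

# Nystrom approximation: K ~ C @ pinv(W) @ C.T from m landmark points.
idx = rng.choice(n, size=m, replace=False)
C = rbf(X, X[idx])       # n x m cross-kernel
W = rbf(X[idx], X[idx])  # m x m landmark kernel
# Truncated pseudoinverse (rcond) guards against W's near-singularity.
K_hat = C @ np.linalg.pinv(W, rcond=1e-6) @ C.T

rel_err = np.linalg.norm(K - K_hat) / np.linalg.norm(K)
print(rel_err)  # small for a smooth kernel with fast spectral decay
```

The payoff is that only the n-by-m and m-by-m factors ever need to be formed, avoiding the quadratic cost of the full kernel.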
arXiv Detail & Related papers (2021-10-29T18:28:49Z) - Learning Linearized Assignment Flows for Image Labeling [70.540936204654]
We introduce a novel algorithm for estimating optimal parameters of linearized assignment flows for image labeling.
We show how to efficiently evaluate this formula using a Krylov subspace and a low-rank approximation.
arXiv Detail & Related papers (2021-08-02T13:38:09Z) - Covariance-Free Sparse Bayesian Learning [62.24008859844098]
We introduce a new SBL inference algorithm that avoids explicit inversions of the covariance matrix.
Our method can be up to thousands of times faster than existing baselines.
We showcase how our new algorithm enables SBL to tractably tackle high-dimensional signal recovery problems.
arXiv Detail & Related papers (2021-05-21T16:20:07Z) - Solving weakly supervised regression problem using low-rank manifold
regularization [77.34726150561087]
We solve a weakly supervised regression problem.
By "weakly" we mean that the labels are known for some training points, unknown for others, and uncertain for the rest due to random noise or other causes such as a lack of resources.
In the numerical section, we applied the suggested method to artificial and real datasets using Monte Carlo modeling.
arXiv Detail & Related papers (2021-04-13T23:21:01Z) - Accelerating Feedforward Computation via Parallel Nonlinear Equation
Solving [106.63673243937492]
Feedforward computation, such as evaluating a neural network or sampling from an autoregressive model, is ubiquitous in machine learning.
We frame the task of feedforward computation as solving a system of nonlinear equations. We then propose to find the solution using a Jacobi or Gauss-Seidel fixed-point method, as well as hybrid methods of both.
Our method is guaranteed to give exactly the same values as the original feedforward computation with a reduced (or equal) number of parallelizable iterations, and hence reduced time given sufficient parallel computing power.
arXiv Detail & Related papers (2020-02-10T10:11:31Z)
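The fixed-point view in the last entry can be sketched on a toy sequential computation. This is an illustrative reconstruction, not the paper's implementation: a recurrence h[t] = tanh(W h[t-1] + x[t]) is solved by Jacobi sweeps that update every time step at once, each sweep being parallelizable across t; because the dependency structure is triangular, the iterates match the serial result exactly after at most T sweeps.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sequential computation: h[t] = tanh(W @ h[t-1] + x[t]).
T, d = 50, 4
W = 0.3 * rng.normal(size=(d, d))
x = rng.normal(size=(T, d))
h0 = np.zeros(d)

# Direct (serial) feedforward evaluation, one step at a time.
h_serial = np.zeros((T, d))
prev = h0
for t in range(T):
    prev = np.tanh(W @ prev + x[t])
    h_serial[t] = prev

# Jacobi fixed-point: every time step is updated from the previous
# iterate, so each sweep is fully parallelizable across t.
h = np.zeros((T, d))
for _ in range(200):  # enough sweeps for exact convergence here
    prev_states = np.vstack([h0, h[:-1]])
    h_new = np.tanh(prev_states @ W.T + x)
    if np.max(np.abs(h_new - h)) < 1e-12:
        h = h_new
        break
    h = h_new

print(np.allclose(h, h_serial, atol=1e-10))  # prints True
```

With enough parallel hardware, each sweep runs in constant depth, which is the source of the speedup the paper describes.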
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.