An implicit split-operator algorithm for the nonlinear time-dependent
Schr\"{o}dinger equation
- URL: http://arxiv.org/abs/2109.10630v2
- Date: Thu, 11 Nov 2021 10:21:56 GMT
- Title: An implicit split-operator algorithm for the nonlinear time-dependent
Schr\"{o}dinger equation
- Authors: Julien Roulet, Ji\v{r}\'i Van\'i\v{c}ek
- Abstract summary: The explicit split-operator algorithm is often used for solving the linear and nonlinear time-dependent Schr\"{o}dinger equations.
We describe a family of high-order implicit split-operator algorithms that are norm-conserving, time-reversible, and very efficient.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The explicit split-operator algorithm is often used for solving the linear
and nonlinear time-dependent Schr\"{o}dinger equations. However, when applied
to certain nonlinear time-dependent Schr\"{o}dinger equations, this algorithm
loses time reversibility and second-order accuracy, which makes it very
inefficient. Here, we propose to overcome the limitations of the explicit
split-operator algorithm by abandoning its explicit nature. We describe a
family of high-order implicit split-operator algorithms that are
norm-conserving, time-reversible, and very efficient. The geometric properties
of the integrators are proven analytically and demonstrated numerically on the
local control of a two-dimensional model of retinal. Although they are only
applicable to separable Hamiltonians, the implicit split-operator algorithms
are, in this setting, more efficient than the recently proposed integrators
based on the implicit midpoint method.
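For context, the conventional explicit split-operator algorithm referred to above alternates exponentials of the kinetic and potential parts of a separable Hamiltonian, switching between position and momentum representations with FFTs. The following is a minimal sketch of one such second-order (Strang) step for a linear 1D Hamiltonian; the grid, the harmonic potential, and the time step are illustrative assumptions, and the sketch does not reproduce the implicit high-order algorithms proposed in the paper.

# Minimal sketch of one explicit split-operator (Strang) step for a linear 1D
# time-dependent Schroedinger equation with separable H = T + V (hbar = m = 1).
# Grid, potential, and time step are arbitrary illustrative choices.
import numpy as np

n = 256
x = np.linspace(-10.0, 10.0, n, endpoint=False)
dx = x[1] - x[0]
k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)   # angular wavenumbers of the FFT grid
V = 0.5 * x**2                              # harmonic potential (example only)
T = 0.5 * k**2                              # kinetic energy in momentum space
dt = 0.01

def strang_step(psi):
    """exp(-i*V*dt/2) exp(-i*T*dt) exp(-i*V*dt/2); every factor is unitary."""
    psi = np.exp(-0.5j * dt * V) * psi                          # half potential step
    psi = np.fft.ifft(np.exp(-1j * dt * T) * np.fft.fft(psi))   # full kinetic step via FFT
    return np.exp(-0.5j * dt * V) * psi                         # half potential step

# Propagate a Gaussian wave packet and verify norm conservation.
psi = np.exp(-(x - 1.0)**2 + 0.5j * x)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)
for _ in range(1000):
    psi = strang_step(psi)
print("norm after propagation:", np.sum(np.abs(psi)**2) * dx)

Because every factor in the composition is unitary, the norm is conserved to machine precision and the step is time-reversible for a linear Hamiltonian; the abstract notes that time reversibility and second-order accuracy can be lost when the same explicit splitting is applied to certain nonlinear time-dependent Schr\"{o}dinger equations.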
Related papers
- Stochastic Optimization for Non-convex Problem with Inexact Hessian
Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) and adaptive regularization using cubics (ARC) methods have proven to have very appealing theoretical properties.
We show that TR and ARC methods can simultaneously tolerate inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z) - Efficient distributed representations with linear-time attention scores normalization [3.8673630752805437]
We propose a linear-time approximation of the attention score normalization constants for embedding vectors with bounded norms.
The accuracy of our estimation formula surpasses that of competing kernel methods, in some cases by orders of magnitude.
The proposed algorithm is highly interpretable and easily adapted to an arbitrary embedding problem.
arXiv Detail & Related papers (2023-03-30T15:48:26Z) - Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit analogies between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z) - Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions serves as an algorithmic subroutine that empowers various applications.
This paper develops a scalable first-order optimization-based method that computes the optimal transport distance to within $\varepsilon$ additive accuracy.
arXiv Detail & Related papers (2023-01-30T15:46:39Z) - Algorithmic Solution for Systems of Linear Equations, in
$\mathcal{O}(mn)$ time [0.0]
We present a novel algorithm that computes the solution of linear systems of equations extremely quickly.
The execution time is very short compared with state-of-the-art methods.
The paper also includes a theoretical proof of the algorithm's convergence.
arXiv Detail & Related papers (2021-04-26T13:40:31Z) - Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth
Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point per step.
Our results are expressed in the form of simultaneous convergence on both the primal and dual sides.
arXiv Detail & Related papers (2020-08-23T20:36:49Z) - Accelerated Message Passing for Entropy-Regularized MAP Inference [89.15658822319928]
Maximum a posteriori (MAP) inference in discrete-valued random fields is a fundamental problem in machine learning.
Due to the difficulty of this problem, linear programming (LP) relaxations are commonly used to derive specialized message passing algorithms.
We present randomized methods for accelerating these algorithms by leveraging techniques that underlie classical accelerated gradient methods.
arXiv Detail & Related papers (2020-07-01T18:43:32Z) - Time-reversible and norm-conserving high-order integrators for the
nonlinear time-dependent Schr\"{o}dinger equation: Application to local
control theory [0.0]
We present high-order geometric integrators suitable for general time-dependent nonlinear Schr\"{o}dinger equations.
These compositions, based on the symmetric implicit midpoint method, are both norm-conserving and time-reversible (a minimal sketch of an implicit-midpoint step appears after this list).
arXiv Detail & Related papers (2020-06-30T15:27:58Z) - Explicit Regularization of Stochastic Gradient Methods through Duality [9.131027490864938]
We propose randomized Dykstra-style algorithms based on randomized dual coordinate ascent.
For accelerated coordinate descent, we obtain a new algorithm that has better convergence properties than existing gradient methods in the interpolating regime.
arXiv Detail & Related papers (2020-03-30T20:44:56Z) - Lagrangian Decomposition for Neural Network Verification [148.0448557991349]
A fundamental component of neural network verification is the computation of bounds on the values their outputs can take.
We propose a novel approach based on Lagrangian Decomposition.
We show that we obtain bounds comparable with off-the-shelf solvers in a fraction of their running time.
arXiv Detail & Related papers (2020-02-24T17:55:10Z)
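As referenced in the abstract above and in the related paper on time-reversible, norm-conserving high-order integrators, the implicit midpoint method evaluates the (possibly nonlinear) Hamiltonian at the average of the current and next wavefunctions, which makes the step time-reversible but requires solving an implicit equation at every step. Below is a minimal sketch of such a step for a generic nonlinear Schr\"{o}dinger equation; the Gross-Pitaevskii-type nonlinearity, the grid, and the use of plain fixed-point iteration are illustrative assumptions and do not reproduce the retinal local-control setup or the solvers used in the papers.

# Minimal sketch of one implicit-midpoint step for a nonlinear time-dependent
# Schroedinger equation i d(psi)/dt = H(psi) psi, using a Gross-Pitaevskii-type
# nonlinearity g*|psi|^2 as an illustrative stand-in (hbar = m = 1). The implicit
# equation is solved by plain fixed-point iteration, which converges only for a
# time step that is small relative to the spectral radius of H.
import numpy as np

n = 128
x = np.linspace(-10.0, 10.0, n, endpoint=False)
dx = x[1] - x[0]
k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
Tk = 0.5 * k**2                      # kinetic energy in momentum space
V = 0.5 * x**2                       # external potential (example only)
g = 1.0                              # nonlinearity strength (example only)
dt = 0.002

def H_apply(psi):
    """Return H(psi) psi = T psi + (V + g*|psi|^2) psi."""
    return np.fft.ifft(Tk * np.fft.fft(psi)) + (V + g * np.abs(psi)**2) * psi

def implicit_midpoint_step(psi, tol=1e-12, max_iter=200):
    """psi_new = psi - i*dt*H(psi_mid) psi_mid with psi_mid = (psi + psi_new)/2,
    solved to self-consistency by fixed-point iteration."""
    psi_new = psi.copy()                           # initial guess: previous step
    for _ in range(max_iter):
        psi_mid = 0.5 * (psi + psi_new)
        update = psi - 1j * dt * H_apply(psi_mid)
        if np.linalg.norm(update - psi_new) < tol:
            return update
        psi_new = update
    return psi_new                                 # last iterate if not fully converged

psi = np.exp(-x**2)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)
for _ in range(500):
    psi = implicit_midpoint_step(psi)
print("norm:", np.sum(np.abs(psi)**2) * dx)

Because the nonlinearity depends on the wavefunction itself, each step iterates to self-consistency, and norm conservation holds up to the fixed-point tolerance. The main paper's claim is that, for separable Hamiltonians, its implicit split-operator algorithms are more efficient than such implicit-midpoint-based integrators.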
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.