Spacetime Neural Network for High Dimensional Quantum Dynamics
- URL: http://arxiv.org/abs/2108.02200v1
- Date: Wed, 4 Aug 2021 17:55:26 GMT
- Title: Spacetime Neural Network for High Dimensional Quantum Dynamics
- Authors: Jiangran Wang, Zhuo Chen, Di Luo, Zhizhen Zhao, Vera Mikyoung Hur,
Bryan K. Clark
- Abstract summary: We develop a spacetime neural network method with second order optimization for solving quantum dynamics.
We demonstrate the method in the Schr"odinger equation with a self-normalized autoregressive spacetime neural network construction.
- Score: 19.30780557470232
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop a spacetime neural network method with second order optimization for solving quantum dynamics from the high dimensional Schrödinger equation. In contrast to standard iterative first order optimization and the time-dependent variational principle, our approach uses the implicit mid-point method and generates the solution for all spatial and temporal values simultaneously after optimization. We demonstrate the method on the Schrödinger equation with a self-normalized autoregressive spacetime neural network construction. Future explorations for solving different high dimensional differential equations are discussed.
Related papers
- Application of Langevin Dynamics to Advance the Quantum Natural Gradient Optimization Algorithm [47.47843839099175]
A Quantum Natural Gradient (QNG) algorithm for optimization of variational quantum circuits has been proposed recently.
In this study, we employ the Langevin equation with a QNG force to demonstrate that its discrete-time solution gives a generalized form, which we call Momentum-QNG.
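As a rough illustration of the momentum mechanism in the entry above, the toy sketch below runs a discrete-time underdamped Langevin update on a quadratic loss. A plain gradient stands in for the QNG force (the actual method preconditions the gradient with the quantum Fisher information), so this is a minimal analogue, not the paper's algorithm.

```python
import numpy as np

# Toy sketch (not the paper's algorithm): discrete-time underdamped
# Langevin dynamics, whose momentum variable motivates "Momentum-QNG".
# A plain gradient stands in for the QNG force.

rng = np.random.default_rng(0)
A = np.diag([1.0, 10.0])                  # toy quadratic loss 0.5 * x^T A x
grad = lambda x: A @ x

x, v = np.array([3.0, 3.0]), np.zeros(2)
dt, gamma, temp = 0.05, 1.0, 0.01         # step size, friction, temperature

for _ in range(500):
    # Euler step of dv = (-gamma v - grad f) dt + sqrt(2 gamma T) dW
    noise = np.sqrt(2.0 * gamma * temp * dt) * rng.standard_normal(2)
    v += (-gamma * v - grad(x)) * dt + noise
    x += v * dt

print(x)                                  # fluctuates near the minimum at 0
```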
arXiv Detail & Related papers (2024-09-03T15:21:16Z) - Solving Poisson Equations using Neural Walk-on-Spheres [80.1675792181381]
We propose Neural Walk-on-Spheres (NWoS), a novel neural PDE solver for the efficient solution of high-dimensional Poisson equations.
We demonstrate the superiority of NWoS in accuracy, speed, and computational costs.
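For context, the sketch below implements the classical Walk-on-Spheres estimator that NWoS builds on, shown for the simplest case of a Laplace problem on the unit disk (a Poisson source term would add a per-step volume contribution); the boundary data and tolerance are illustrative.

```python
import numpy as np

# Classical Walk-on-Spheres for a Laplace problem on the unit disk:
# u = g on the boundary, u harmonic inside. From each point, jump to a
# uniform point on the largest circle inside the domain until within
# eps of the boundary, then evaluate g there; average over walks.

rng = np.random.default_rng(1)
g = lambda p: p[0] ** 2 - p[1] ** 2       # boundary data (itself harmonic)

def wos(x0, eps=1e-4):
    x = np.array(x0, dtype=float)
    while True:
        r = 1.0 - np.linalg.norm(x)       # distance to the unit circle
        if r < eps:
            return g(x / np.linalg.norm(x))   # project to boundary and stop
        theta = rng.uniform(0.0, 2.0 * np.pi)
        x = x + r * np.array([np.cos(theta), np.sin(theta)])

estimate = np.mean([wos([0.3, 0.2]) for _ in range(5000)])
print(estimate)   # ~0.05 = g(0.3, 0.2), since u = g for harmonic g
```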
arXiv Detail & Related papers (2024-06-05T17:59:22Z) - Tensor-Valued Time and Inference Path Optimization in Differential Equation-Based Generative Modeling [16.874769609089764]
This work introduces, for the first time, a tensor-valued time that expands the conventional scalar-valued time into multiple dimensions.
We also propose a novel path optimization problem designed to adaptively determine multidimensional inference trajectories.
arXiv Detail & Related papers (2024-04-22T13:20:01Z) - Neural Time-Reversed Generalized Riccati Equation [60.92253836775246]
Hamiltonian equations offer an interpretation of optimality through auxiliary variables known as costates.
This paper introduces a novel neural-based approach to optimal control, with the aim of working forward-in-time.
arXiv Detail & Related papers (2023-12-14T19:29:37Z) - Optimized continuous dynamical decoupling via differential geometry and machine learning [0.0]
We introduce a strategy to develop optimally designed fields for continuous dynamical decoupling.
We obtain the optimal continuous field configuration to maximize the fidelity of a general one-qubit quantum gate.
arXiv Detail & Related papers (2023-10-12T15:30:00Z) - Scalable Imaginary Time Evolution with Neural Network Quantum States [0.0]
The representation of a quantum wave function as a neural network quantum state (NQS) provides a powerful variational ansatz for finding the ground states of many-body quantum systems.
We introduce an approach that bypasses the computation of the metric tensor and instead relies exclusively on first-order descent with Euclidean metric.
We make this method adaptive and stable by determining the optimal time step and keeping the target fixed until the energy of the NQS decreases.
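As a minimal analogue of the entry above, the sketch below runs first-order imaginary-time evolution on an explicit state vector; the paper performs the corresponding update on NQS parameters with an adaptive time step, which this toy version does not attempt.

```python
import numpy as np

# Toy analogue: first-order imaginary-time evolution on an explicit state
# vector, psi <- normalize((I - dtau * H) psi), which converges to the
# ground state. The paper applies the analogous update to NQS parameters.

rng = np.random.default_rng(2)
M = rng.standard_normal((50, 50))
H = (M + M.T) / 2.0                       # random Hermitian "Hamiltonian"

psi = rng.standard_normal(50)
psi /= np.linalg.norm(psi)

dtau = 0.01
for _ in range(2000):
    psi = psi - dtau * (H @ psi)          # one first-order ITE step
    psi /= np.linalg.norm(psi)            # keep the state normalized

# Variational energy approaches the exact ground-state energy.
print(psi @ H @ psi, np.linalg.eigvalsh(H)[0])
```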
arXiv Detail & Related papers (2023-07-28T12:26:43Z) - Neural Basis Functions for Accelerating Solutions to High Mach Euler
Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
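The POD ingredient here is standard, and the sketch below shows it on toy data: extract a reduced basis from solution snapshots via SVD and reconstruct an unseen solution in that basis. A direct projection stands in for the paper's branch network, which would predict the reduced coefficients from the PDE parameters.

```python
import numpy as np

# Standard POD step: build a reduced basis from solution snapshots via
# SVD, then reconstruct an unseen solution in that basis. A direct
# projection replaces the paper's branch network.

rng = np.random.default_rng(3)
x = np.linspace(0, 2 * np.pi, 256)

# Snapshot matrix with one toy traveling-wave solution per column.
shifts = rng.uniform(0, 2 * np.pi, 40)
snapshots = np.stack([np.sin(x - s) for s in shifts], axis=1)

U, S, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :4]                          # leading POD modes

u_new = np.sin(x - 1.234)                 # unseen solution
coeffs = basis.T @ u_new                  # coefficients (the network's job)
u_rec = basis @ coeffs

# Tiny error: every sin(x - s) lies in span{sin x, cos x}.
print(np.linalg.norm(u_new - u_rec))
```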
arXiv Detail & Related papers (2022-08-02T18:27:13Z) - A scalable deep learning approach for solving high-dimensional dynamic
optimal transport [18.67654056717166]
We propose a deep learning based method to solve the dynamic optimal transport in high dimensional space.
Our method contains three main ingredients: a carefully designed representation of the velocity field, the discretization of the PDE constraint along the characteristics, and the computation of high dimensional integral by Monte Carlo method in each time step.
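Two of those ingredients, discretization along characteristics and Monte Carlo integration, are sketched below on a toy problem with a fixed (not learned) velocity field; the estimated kinetic cost matches the known squared Wasserstein distance of a pure translation.

```python
import numpy as np

# Toy sketch: Euler discretization of the characteristics dX/dt = v(X, t),
# plus Monte Carlo estimation of the Benamou-Brenier kinetic cost
# E|v(X_t, t)|^2 integrated over time. The velocity field is fixed here
# and simply translates the samples by (2, 0) over unit time.

rng = np.random.default_rng(4)

def v(X, t):
    # constant velocity field; a neural network would be trained here
    return np.broadcast_to(np.array([2.0, 0.0]), X.shape)

X = rng.standard_normal((10000, 2))       # samples from the source density
n_steps = 50
dt, cost = 1.0 / n_steps, 0.0
for k in range(n_steps):
    V = v(X, k * dt)
    cost += dt * np.mean(np.sum(V ** 2, axis=1))   # accumulate E|v|^2 dt
    X = X + dt * V                                 # Euler step along characteristics

print(cost)   # ~4.0, the squared W2 distance of a translation by (2, 0)
```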
arXiv Detail & Related papers (2022-05-16T08:56:05Z) - Second-Order Neural ODE Optimizer [11.92713188431164]
We show that a specific continuous-time OC methodology, called Differential Programming, can be adopted to derive backward ODEs for higher-order derivatives at the same O(1) memory cost.
The resulting method converges much faster than first-order baselines in wall-clock time.
Our framework also enables direct architecture optimization, such as the integration time of Neural ODEs, with second-order feedback policies.
arXiv Detail & Related papers (2021-09-29T02:58:18Z) - Sequential Subspace Search for Functional Bayesian Optimization
Incorporating Experimenter Intuition [63.011641517977644]
Our algorithm generates a sequence of finite-dimensional random subspaces of functional space spanned by a set of draws from the experimenter's Gaussian Process.
Standard Bayesian optimisation is applied on each subspace, and the best solution found used as a starting point (origin) for the next subspace.
We test our algorithm in simulated and real-world experiments, namely blind function matching, finding the optimal precipitation-strengthening function for an aluminium alloy, and learning rate schedule optimisation for deep networks.
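A heavily simplified sketch of that loop is given below: random subspaces spanned by Gaussian-process draws on a grid, with plain random search standing in for the Bayesian optimisation step and a synthetic function-matching objective; every modelling choice here is illustrative.

```python
import numpy as np

# Simplified sketch of the sequential-subspace idea: span a random
# finite-dimensional subspace of function space with GP draws, search it,
# and seed the next subspace with the best function found. Random search
# stands in for the standard Bayesian optimisation step.

rng = np.random.default_rng(6)
grid = np.linspace(0, 1, 64)
K = np.exp(-0.5 * (grid[:, None] - grid[None, :]) ** 2 / 0.1 ** 2)  # RBF kernel
L = np.linalg.cholesky(K + 1e-6 * np.eye(64))
gp_draw = lambda: L @ rng.standard_normal(64)   # one GP sample path on the grid

target = np.sin(6 * grid)                       # hidden function to match
loss = lambda f: np.mean((f - target) ** 2)

origin = np.zeros(64)
for _ in range(20):                             # sequence of random subspaces
    phis = np.stack([gp_draw() for _ in range(3)])  # basis of a 3-D subspace
    best = origin
    for _ in range(200):                        # stand-in for BO on the subspace
        c = 0.5 * rng.standard_normal(3)
        f = origin + c @ phis
        if loss(f) < loss(best):
            best = f
    origin = best                               # best solution seeds the next subspace

print(loss(origin))   # typically shrinks over successive subspaces
```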
arXiv Detail & Related papers (2020-09-08T06:54:11Z) - Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
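A hedged single-neuron sketch of the liquid time-constant state equation follows, using the form dx/dt = -(1/tau + f(x, I)) x + f(x, I) A with explicit Euler integration; the paper uses a fused solver and learned weights, whereas all values here are arbitrary.

```python
import numpy as np

# Single-neuron sketch of the liquid time-constant state equation,
#   dx/dt = -(1/tau + f(x, I)) * x + f(x, I) * A,
# with f a bounded positive nonlinearity, so the effective time constant
# depends on the input. All weights are arbitrary toy values.

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

tau, A = 1.0, 0.8                         # base time constant, target level
w_x, w_i, b = 0.5, 1.5, -1.0              # toy scalar weights

x, dt = 0.0, 0.01
for step in range(1000):
    I = np.sin(0.01 * step)               # toy input signal
    f = sigmoid(w_x * x + w_i * I + b)    # input-dependent gate
    x += dt * (-(1.0 / tau + f) * x + f * A)

print(x)                                  # state stays bounded between 0 and A
```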
arXiv Detail & Related papers (2020-06-08T09:53:35Z) - Time Dependence in Non-Autonomous Neural ODEs [74.78386661760662]
We propose a novel family of Neural ODEs with time-varying weights.
We outperform previous Neural ODE variants in both speed and representational capacity.
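The general idea, weights that are an explicit function of time, can be sketched as below with a small Fourier basis parameterizing W(t); the basis choice and all values are illustrative rather than the paper's specific construction.

```python
import numpy as np

# Sketch of a non-autonomous Neural ODE: the vector field's weights are
# an explicit function of time. A tiny Fourier basis parameterizes W(t);
# this choice is illustrative, not the paper's construction.

rng = np.random.default_rng(5)
d, n_basis = 4, 3
coeffs = 0.3 * rng.standard_normal((n_basis, d, d))  # one matrix per basis fn

def W(t):
    # time-varying weights: W(t) = sum_k coeffs[k] * basis_k(t)
    basis = np.array([1.0, np.sin(t), np.cos(t)])
    return np.tensordot(basis, coeffs, axes=1)

x, dt = rng.standard_normal(d), 0.01
for k in range(100):                       # Euler-integrate dx/dt = tanh(W(t) x)
    x = x + dt * np.tanh(W(k * dt) @ x)

print(x)
```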
arXiv Detail & Related papers (2020-05-05T01:41:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.