A control method for solving high-dimensional Hamiltonian systems
through deep neural networks
- URL: http://arxiv.org/abs/2111.02636v1
- Date: Thu, 4 Nov 2021 05:22:08 GMT
- Title: A control method for solving high-dimensional Hamiltonian systems
through deep neural networks
- Authors: Shaolin Ji, Shige Peng, Ying Peng, Xichuan Zhang
- Abstract summary: We first introduce a corresponding optimal control problem whose Hamiltonian system is exactly the one we need to solve, then develop two algorithms suited to different cases of the control problem and approximate the control via deep neural networks.
The numerical results show that, compared with the Deep FBSDE method, which was developed previously from the viewpoint of solving FBSDEs, the novel algorithms converge faster, i.e., they require fewer training steps, and exhibit more stable convergence for different Hamiltonian systems.
- Score: 0.2752817022620644
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we focus on solving high-dimensional stochastic
Hamiltonian systems with boundary conditions, and propose a novel method from
the viewpoint of stochastic control. To obtain an approximate solution of the
Hamiltonian system, we first introduce a corresponding stochastic optimal
control problem whose Hamiltonian system is exactly the one we need to solve,
then develop two different algorithms suited to different cases of the control
problem and approximate the stochastic control via deep neural networks. The
numerical results show that, compared with the Deep FBSDE method, which was
developed previously from the viewpoint of solving FBSDEs, the novel
algorithms converge faster, i.e., they require fewer training steps, and
exhibit more stable convergence for different Hamiltonian systems.
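The control-based idea in the abstract — parametrize the control, simulate the controlled stochastic dynamics, and minimize the resulting cost — can be sketched on a toy problem. The snippet below is an illustrative stand-in, not the paper's algorithm: it uses a one-parameter feedback gain instead of a deep network and finite-difference gradients instead of backpropagation, and all constants are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stochastic control problem: dX = u dt + sigma dW, X_0 = 1,
# cost J(u) = E[ integral_0^T u^2 dt + X_T^2 ].
# We parametrize the feedback control as u(t, x) = theta * x
# (a one-parameter "network") and minimize the simulated Monte Carlo
# cost by finite-difference gradient descent on theta.

T, N, M, sigma = 1.0, 50, 2000, 0.2
dt = T / N

def simulate_cost(theta, noise):
    """Euler-Maruyama simulation of M paths; returns the Monte Carlo cost."""
    x = np.ones(M)
    cost = np.zeros(M)
    for k in range(N):
        u = theta * x                      # feedback control
        cost += u**2 * dt                  # accumulate running cost
        x = x + u * dt + sigma * noise[k]  # Euler-Maruyama SDE step
    return float(np.mean(cost + x**2))     # add terminal cost

# Common random numbers so the finite-difference gradient is smooth.
noise = rng.normal(0.0, np.sqrt(dt), size=(N, M))
theta, lr, eps = 0.0, 0.1, 1e-4
for _ in range(200):
    grad = (simulate_cost(theta + eps, noise)
            - simulate_cost(theta - eps, noise)) / (2 * eps)
    theta -= lr * grad

# The learned feedback gain is negative: the control pushes X toward 0.
print(theta)
```

The paper's setting replaces the scalar gain with a deep neural network and ties the control problem to the Hamiltonian system to be solved; this sketch only shows the optimize-a-parametrized-control loop that both share.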
Related papers
- A Simulation-Free Deep Learning Approach to Stochastic Optimal Control [12.699529713351287]
We propose a simulation-free algorithm for the solution of generic problems in stochastic optimal control (SOC).
Unlike existing methods, our approach does not require the solution of an adjoint problem.
arXiv Detail & Related papers (2024-10-07T16:16:53Z)
- Generation of C-NOT, SWAP, and C-Z Gates for Two Qubits Using Coherent and Incoherent Controls and Stochastic Optimization [56.47577824219207]
We consider a general form of the dynamics of open quantum systems determined by a Gorini-Kossakowski-Sudarshan-Lindblad type master equation.
We analyze the control problems of generating two-qubit C-NOT, SWAP, and C-Z gates using piecewise constant controls and optimization.
arXiv Detail & Related papers (2023-12-09T17:55:47Z)
- Solving Elliptic Optimal Control Problems via Neural Networks and Optimality System [3.8704302640118864]
We investigate a neural network based solver for optimal control problems (with and without box constraints).
It employs deep neural networks to represent the solutions to the reduced system.
We present several numerical examples to illustrate the method and compare it with two existing ones.
arXiv Detail & Related papers (2023-08-23T05:18:19Z)
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
- Optimal control for state preparation in two-qubit open quantum systems driven by coherent and incoherent controls via GRAPE approach [77.34726150561087]
We consider a model of two qubits driven by coherent and incoherent time-dependent controls.
The dynamics of the system is governed by a Gorini-Kossakowski-Sudarshan-Lindblad master equation.
We study the evolution of the von Neumann entropy, purity, and one-qubit reduced density matrices under optimized controls.
arXiv Detail & Related papers (2022-11-04T15:20:18Z)
- Deep Learning Approximation of Diffeomorphisms via Linear-Control Systems [91.3755431537592]
We consider a control system of the form $\dot{x} = \sum_{i=1}^{l} F_i(x) u_i$, with linear dependence in the controls.
We use the corresponding flow to approximate the action of a diffeomorphism on a compact ensemble of points.
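The linear-control form dx/dt = sum_i F_i(x) u_i can be illustrated numerically: for a suitable vector field and constant control, the time-t flow is a familiar diffeomorphism. The example below is a toy illustration with made-up vector fields, not the paper's construction: with a rotation field F_1(x) = Jx (J the 90-degree generator) and control u_1 = 1, the time-pi/2 flow rotates the plane by 90 degrees, which Euler stepping recovers.

```python
import numpy as np

# Vector fields of the toy linear-control system (illustrative choices):
# F_1(x) = J @ x, a rotation field; F_2(x) = e1, a constant translation field.
J = np.array([[0.0, -1.0], [1.0, 0.0]])
e1 = np.array([1.0, 0.0])

def flow(x0, u, t, steps=10000):
    """Euler integration of dx/dt = u[0] * J@x + u[1] * e1 up to time t."""
    x, dt = np.array(x0, dtype=float), t / steps
    for _ in range(steps):
        x = x + dt * (u[0] * (J @ x) + u[1] * e1)
    return x

# Constant control u = (1, 0): the time-t flow is rotation by angle t.
x = flow([1.0, 0.0], (1.0, 0.0), np.pi / 2)
print(x)  # approximately (0, 1): rotation by 90 degrees
```

The paper's point is that richer controls u_i(t) let such flows approximate the action of a diffeomorphism on an ensemble of points; this snippet only shows the flow of the system for one fixed control.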
arXiv Detail & Related papers (2021-10-24T08:57:46Z)
- Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We establish lower complexity bounds and design two optimal algorithms that attain them.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-08T15:54:44Z)
- Deep neural network approximation for high-dimensional parabolic Hamilton-Jacobi-Bellman equations [5.863264019032882]
It is shown that, for HJB equations arising in the context of the optimal control of certain Markov processes, the solution can be approximated by deep neural networks without incurring the curse of dimensionality.
arXiv Detail & Related papers (2021-03-09T22:34:13Z)
- Solving stochastic optimal control problem via stochastic maximum principle with deep learning method [0.2064612766965483]
Three algorithms are proposed to solve the new control problem.
An important application of this method is to calculate sub-linear expectations, which correspond to a class of fully nonlinear PDEs.
arXiv Detail & Related papers (2020-07-05T02:28:43Z)
- Single-step deep reinforcement learning for open-loop control of laminar and turbulent flows [0.0]
This research gauges the ability of deep reinforcement learning (DRL) techniques to assist the optimization and control of fluid mechanical systems.
It uses a novel, "degenerate" version of the proximal policy optimization (PPO) algorithm that trains a neural network to optimize the system only once per learning episode.
arXiv Detail & Related papers (2020-06-04T16:11:26Z) - Adaptive Control and Regret Minimization in Linear Quadratic Gaussian
(LQG) Setting [91.43582419264763]
We propose LqgOpt, a novel reinforcement learning algorithm based on the principle of optimism in the face of uncertainty.
LqgOpt efficiently explores the system dynamics, estimates the model parameters up to their confidence interval, and deploys the controller of the most optimistic model.
arXiv Detail & Related papers (2020-03-12T19:56:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.