Variational Quantum Framework for Nonlinear PDE Constrained Optimization Using Carleman Linearization
- URL: http://arxiv.org/abs/2410.13688v1
- Date: Thu, 17 Oct 2024 15:51:41 GMT
- Title: Variational Quantum Framework for Nonlinear PDE Constrained Optimization Using Carleman Linearization
- Authors: Abeynaya Gnanasekaran, Amit Surana, Hongyu Zhu
- Abstract summary: We present a novel variational quantum framework for nonlinear partial differential equation (PDE) constrained optimization problems.
We use Carleman linearization (CL) to transform a system of ordinary differential equations (ODEs) into an infinite but linear system of ODEs.
We present a detailed computational error and complexity analysis and prove that, under suitable assumptions, our proposed framework can provide a potential advantage over classical techniques.
- Score: 0.8704964543257243
- Abstract: We present a novel variational quantum framework for nonlinear partial differential equation (PDE) constrained optimization problems. The proposed work extends the recently introduced bi-level variational quantum PDE constrained optimization (BVQPCO) framework for linear PDEs to a nonlinear setting by leveraging Carleman linearization (CL). The CL framework allows one to transform a system of polynomial ordinary differential equations (ODEs), i.e. ODEs with a polynomial vector field, into an infinite but linear system of ODEs. For instance, such polynomial ODEs naturally arise when a PDE is semi-discretized in the spatial dimensions. By truncating the CL system at a finite order, one obtains a finite system of linear ODEs to which the linear BVQPCO framework can be applied. In particular, the finite system of linear ODEs is discretized in time and embedded as a system of linear equations. The variational quantum linear solver (VQLS) is used to solve the linear system for given optimization parameters and to evaluate the design cost/objective function, and a classical black-box optimizer is used to select the next set of parameter values based on this evaluated cost. We present a detailed computational error and complexity analysis and prove that, under suitable assumptions, our proposed framework can provide a potential advantage over classical techniques. We implement our framework using the PennyLane library and apply it to solve an inverse Burgers' problem. We also explore an alternative tensor product decomposition which exploits the sparsity/structure of the linear system arising from PDE discretization to facilitate the computation of VQLS cost functions.
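To make the CL step concrete, here is a minimal worked example in generic notation (an illustration only; the paper's exact operator definitions may differ). For a quadratic system $\dot{x} = F_1 x + F_2 (x \otimes x)$ with $x \in \mathbb{R}^n$, introduce the Kronecker powers $y_k = x^{\otimes k}$. Differentiating and substituting the vector field gives the linear hierarchy

$$\dot{y}_k = A_{k,k}\, y_k + A_{k,k+1}\, y_{k+1}, \qquad A_{k,k+j-1} = \sum_{i=1}^{k} I^{\otimes (i-1)} \otimes F_j \otimes I^{\otimes (k-i)}, \quad j \in \{1, 2\},$$

which is infinite but linear in $y_1, y_2, \dots$. Truncating at order $N$ (dropping $y_{N+1}$) leaves a finite linear ODE $\dot{\mathbf{y}} = A \mathbf{y}$ of dimension $n + n^2 + \cdots + n^N$; an implicit Euler discretization over $m$ time steps, $(I - \Delta t\, A)\, \mathbf{y}^{(i+1)} = \mathbf{y}^{(i)}$ for $i = 0, \dots, m-1$, can then be stacked into one block-bidiagonal linear system to which VQLS is applied.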
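The bi-level structure (VQLS inner loop, classical black-box optimizer outer loop) can also be sketched in code. The following PennyLane/SciPy snippet is a toy illustration under stated assumptions, not the authors' implementation: it uses a made-up scalar Burgers'-like ODE, truncates the Carleman hierarchy at order 2, evaluates a VQLS-style global cost directly from the simulated statevector rather than via Hadamard-test circuits, and recovers the solution scale classically.

```python
# Toy sketch of the bi-level (BVQPCO-style) loop with Carleman linearization.
# Assumptions (not from the paper): a scalar Burgers'-like ODE du/dt = -nu*u - u^2,
# Carleman truncation at order 2, two backward-Euler steps, a 2-qubit ansatz,
# a made-up target value for the design objective, and a VQLS-style cost that is
# evaluated from the simulated statevector instead of Hadamard-test circuits.

import numpy as np
import pennylane as qml
from scipy.optimize import minimize, minimize_scalar

n_qubits = 2                                  # the embedded linear system below is 4 x 4
dev = qml.device("default.qubit", wires=n_qubits)


def carleman_matrix(nu):
    """Order-2 Carleman matrix for du/dt = -nu*u - u^2 with variables y = (u, u^2)."""
    return np.array([[-nu, -1.0],
                     [0.0, -2.0 * nu]])       # the y3 = u^3 coupling is truncated


def embedded_linear_system(nu, u0=0.8, h=0.1):
    """Stack two backward-Euler steps into a single block-bidiagonal system A x = b."""
    M = np.eye(2) - h * carleman_matrix(nu)   # (I - h * A_CL)
    A = np.block([[M, np.zeros((2, 2))],
                  [-np.eye(2), M]])
    b = np.concatenate([[u0, u0 ** 2], [0.0, 0.0]])
    return A, b


@qml.qnode(dev)
def ansatz_state(theta):
    """Hardware-efficient ansatz preparing a candidate (normalized) solution state."""
    qml.RY(theta[0], wires=0)
    qml.RY(theta[1], wires=1)
    qml.CNOT(wires=[0, 1])
    qml.RY(theta[2], wires=0)
    qml.RY(theta[3], wires=1)
    return qml.state()


def vqls_cost(theta, A, b):
    """Global VQLS-style cost 1 - |<b|A|x>|^2 / <Ax|Ax>, computed from the statevector."""
    x = ansatz_state(theta)
    Ax = A @ x
    b_n = b / np.linalg.norm(b)
    return 1.0 - np.abs(np.vdot(b_n, Ax)) ** 2 / np.real(np.vdot(Ax, Ax))


def solve_cl_system(nu):
    """Inner loop: variationally solve A(nu) x = b, then return u at the final time step."""
    A, b = embedded_linear_system(nu)
    res = minimize(vqls_cost, x0=np.full(4, 0.1), args=(A, b), method="COBYLA")
    x_n = ansatz_state(res.x)                 # normalized candidate solution
    scale = np.vdot(A @ x_n, b) / np.vdot(A @ x_n, A @ x_n)  # classical rescaling (toy shortcut)
    x = np.real(scale * x_n)
    return x[2]                               # first Carleman variable at the last time step


def design_cost(nu, u_target=0.55):
    """Outer objective: match the terminal state to a made-up target value."""
    return (solve_cl_system(float(nu)) - u_target) ** 2


# Outer loop: classical black-box optimizer over the design parameter nu.
opt = minimize_scalar(design_cost, bounds=(0.1, 2.0), method="bounded")
print("estimated nu:", opt.x)
```

On a real device the VQLS cost and the solution norm would instead be estimated with additional circuits (e.g., Hadamard tests), and the ansatz, truncation order, and time grid would be chosen in line with the error and complexity analysis described in the abstract.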
Related papers
- $\mathscr{H}_2$ Model Reduction for Linear Quantum Systems [0.0]
An $\mathscr{H}_2$ norm-based model reduction method is presented, which can obtain a physically realizable model with a reduced order.
Examples of active and passive linear quantum systems validate the efficacy of the proposed method.
arXiv Detail & Related papers (2024-11-12T07:25:21Z) - Partial-differential-algebraic equations of nonlinear dynamics by Physics-Informed Neural-Network: (I) Operator splitting and framework assessment [51.3422222472898]
Several forms for constructing novel physics-informed neural networks (PINNs) for the solution of partial-differential-algebraic equations are proposed.
Among these novel methods are the PDE forms, which evolve from the lower-level form with fewer unknown dependent variables to a higher-level form with more dependent variables.
arXiv Detail & Related papers (2024-07-13T22:48:17Z) - Variational Quantum Framework for Partial Differential Equation Constrained Optimization [0.6138671548064355]
We present a novel variational quantum framework for PDE constrained optimization problems.
The proposed framework utilizes the variational quantum linear solver (VQLS) algorithm and a black-box optimizer as its main building blocks.
arXiv Detail & Related papers (2024-05-26T18:06:43Z) - Further improving quantum algorithms for nonlinear differential
equations via higher-order methods and rescaling [0.0]
We present three main improvements to existing quantum algorithms based on the Carleman linearisation technique.
By using a high-precision technique for the solution of the linearised differential equations, we achieve logarithmic dependence of the complexity on the error and near-linear dependence on time.
A rescaling technique can considerably reduce the cost, which would otherwise be exponential in the Carleman order for a system of ODEs.
arXiv Detail & Related papers (2023-12-15T03:52:44Z) - Constrained Optimization via Exact Augmented Lagrangian and Randomized
Iterative Sketching [55.28394191394675]
We develop an adaptive inexact Newton method for equality-constrained nonlinear, nonconvex optimization problems.
We demonstrate the superior performance of our method on benchmark nonlinear problems, constrained logistic regression with data from LIBSVM, and a PDE-constrained problem.
arXiv Detail & Related papers (2023-05-28T06:33:37Z) - Carleman linearization based efficient quantum algorithm for higher
order polynomial differential equations [2.707154152696381]
We present an efficient quantum algorithm to simulate nonlinear differential equations with vector fields of arbitrary degree on quantum platforms.
Models of physical systems governed by ordinary differential equations (ODEs) or partial differential equations (PDEs) can be challenging to solve on classical computers.
arXiv Detail & Related papers (2022-12-21T05:21:52Z) - Learning differentiable solvers for systems with hard constraints [48.54197776363251]
We introduce a practical method to enforce partial differential equation (PDE) constraints for functions defined by neural networks (NNs).
We develop a differentiable PDE-constrained layer that can be incorporated into any NN architecture.
Our results show that incorporating hard constraints directly into the NN architecture achieves much lower test error when compared to training on an unconstrained objective.
arXiv Detail & Related papers (2022-07-18T15:11:43Z) - Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z) - Deep Learning Approximation of Diffeomorphisms via Linear-Control
Systems [91.3755431537592]
We consider a control system of the form $\dot x = \sum_{i=1}^{l} F_i(x)\,u_i$, with linear dependence in the controls.
We use the corresponding flow to approximate the action of a diffeomorphism on a compact ensemble of points.
arXiv Detail & Related papers (2021-10-24T08:57:46Z) - Semi-Implicit Neural Solver for Time-dependent Partial Differential
Equations [4.246966726709308]
We propose a neural solver to learn an optimal iterative scheme in a data-driven fashion for any class of PDEs.
We provide theoretical guarantees for the correctness and convergence of neural solvers analogous to conventional iterative solvers.
arXiv Detail & Related papers (2021-09-03T12:03:10Z) - Speeding up Computational Morphogenesis with Online Neural Synthetic
Gradients [51.42959998304931]
A wide range of modern science and engineering applications are formulated as optimization problems with a system of partial differential equations (PDEs) as constraints.
These PDE-constrained optimization problems are typically solved in a standard discretize-then-optimize approach.
We propose a general framework to speed up PDE-constrained optimization using online neural synthetic gradients (ONSG) with a novel two-scale optimization scheme.
arXiv Detail & Related papers (2021-04-25T22:43:51Z)