$\mathscr{H}_2$ Model Reduction for Linear Quantum Systems
- URL: http://arxiv.org/abs/2411.07603v2
- Date: Wed, 20 Nov 2024 04:53:10 GMT
- Title: $\mathscr{H}_2$ Model Reduction for Linear Quantum Systems
- Authors: G. P. Wu, S. Xue, G. F. Zhang, I. R. Petersen
- Abstract summary: An $\mathscr{H}_2$ norm-based model reduction method is presented, which can obtain a physically realizable model with a reduced order.
Examples of active and passive linear quantum systems validate the efficacy of the proposed method.
- Abstract: In this paper, an $\mathscr{H}_2$ norm-based model reduction method for linear quantum systems is presented, which obtains a physically realizable reduced-order model that closely approximates the original system. The model reduction problem is posed as an optimization problem whose objective is the $\mathscr{H}_2$ norm of the difference between the transfer function of the original system and that of the reduced one. Unlike classical model reduction problems, physical realizability conditions, which guarantee that the reduced-order system is also a quantum system, must be imposed as nonlinear constraints in the optimization. To solve the optimization problem with these nonlinear constraints, we employ a matrix inequality approach to transform the nonlinear inequality constraints into readily solvable linear matrix inequalities (LMIs) and nonlinear equality constraints, so that the optimization problem can be solved by a lifting-variables approach. We emphasize that, unlike existing work that only introduces a criterion to evaluate performance after model reduction, our method is driven toward a reduced model that is optimal with respect to the $\mathscr{H}_2$ norm. In addition, the approach is extended to passive linear quantum systems. Finally, examples of active and passive linear quantum systems validate the efficacy of the proposed method.
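The $\mathscr{H}_2$ objective above can be evaluated numerically via the controllability Gramian: $\|G\|_2^2 = \mathrm{trace}(C P C^\top)$, where $P$ solves the Lyapunov equation $A P + P A^\top + B B^\top = 0$. The sketch below is not the paper's constrained optimization method; it only illustrates, on made-up toy matrices, the error norm $\|G - G_r\|_2$ that the method minimizes, assuming standard state-space realizations $(A, B, C)$ and $(A_r, B_r, C_r)$.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, block_diag

def h2_norm(A, B, C):
    """H2 norm via the controllability Gramian P: A P + P A^T + B B^T = 0."""
    P = solve_continuous_lyapunov(A, -B @ B.T)
    return float(np.sqrt(np.trace(C @ P @ C.T)))

# Toy full 2-mode system and a 1-mode reduction that keeps the slow mode
A  = np.diag([-1.0, -10.0]); B  = np.array([[1.0], [1.0]]); C  = np.array([[1.0, 1.0]])
Ar = np.array([[-1.0]]);     Br = np.array([[1.0]]);        Cr = np.array([[1.0]])

# Error system realizing G - Gr: block-diagonal A, stacked B, output [C, -Cr]
Ae = block_diag(A, Ar)
Be = np.vstack([B, Br])
Ce = np.hstack([C, -Cr])
err = h2_norm(Ae, Be, Ce)  # H2 norm of the approximation error
```

Here the slow modes cancel exactly, so the error system reduces to $1/(s+10)$ and the computed norm is $\sqrt{1/20}$; the paper's contribution is minimizing this quantity subject to the quantum physical realizability constraints, which the sketch omits.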
Related papers
- Variational Quantum Framework for Nonlinear PDE Constrained Optimization Using Carleman Linearization [0.8704964543257243]
We present a novel variational quantum framework for nonlinear partial differential equation (PDE) constrained optimization problems.
We use Carleman linearization (CL) to transform a system of ordinary differential equations into an infinite-dimensional but linear system of ODEs.
We present detailed computational error and complexity analysis and prove that under suitable assumptions, our proposed framework can provide potential advantage over classical techniques.
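Carleman linearization as described above can be sketched for a single quadratic ODE $\dot{x} = a x + b x^2$: the monomials $y_k = x^k$ obey the linear chain $\dot{y}_k = k a\, y_k + k b\, y_{k+1}$, which is truncated at order $N$ by setting $y_{N+1} = 0$. This is a minimal illustration with made-up coefficients, not the paper's variational quantum framework.

```python
import numpy as np
from scipy.linalg import expm

def carleman_matrix(a, b, N):
    """Truncated Carleman matrix for dx/dt = a*x + b*x**2.

    Variables y_k = x**k satisfy dy_k/dt = k*a*y_k + k*b*y_{k+1};
    truncation sets y_{N+1} = 0, leaving an N x N linear system."""
    M = np.zeros((N, N))
    for k in range(1, N + 1):
        M[k - 1, k - 1] = k * a
        if k < N:
            M[k - 1, k] = k * b
    return M

a, b, x0, t, N = -1.0, 0.5, 0.5, 1.0, 8
y0 = np.array([x0 ** k for k in range(1, N + 1)])
x_approx = (expm(carleman_matrix(a, b, N) * t) @ y0)[0]

# Closed-form solution via the Bernoulli substitution u = 1/x
x_exact = 1.0 / ((1.0 / x0 + b / a) * np.exp(-a * t) - b / a)
```

For this decaying trajectory the truncation error shrinks rapidly with $N$, which is the regime where Carleman-based quantum algorithms are effective.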
arXiv Detail & Related papers (2024-10-17T15:51:41Z) - A Two-Stage Training Method for Modeling Constrained Systems With Neural Networks [3.072340427031969]
This paper describes in detail the two-stage training method for Neural ODEs.
The first stage aims at finding feasible NN parameters by minimizing a measure of constraints violation.
The second stage aims to find the optimal NN parameters by minimizing the loss function while keeping inside the feasible region.
arXiv Detail & Related papers (2024-03-05T07:37:47Z) - Further improving quantum algorithms for nonlinear differential equations via higher-order methods and rescaling [0.0]
We present three main improvements to existing quantum algorithms based on the Carleman linearisation technique.
By using a high-precision technique for the solution of the linearised differential equations, we achieve logarithmic dependence of the complexity on the error and near-linear dependence on time.
A rescaling technique can considerably reduce the cost, which would otherwise be exponential in the Carleman order for a system of ODEs.
arXiv Detail & Related papers (2023-12-15T03:52:44Z) - An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z) - Constrained Optimization via Exact Augmented Lagrangian and Randomized Iterative Sketching [55.28394191394675]
We develop an adaptive inexact Newton method for equality-constrained nonlinear, nonconvex optimization problems.
We demonstrate the superior performance of our method on benchmark nonlinear problems, constrained logistic regression with data from LIBSVM, and a PDE-constrained problem.
arXiv Detail & Related papers (2023-05-28T06:33:37Z) - Symplectic model reduction of Hamiltonian systems using data-driven quadratic manifolds [0.559239450391449]
We present two novel approaches for the symplectic model reduction of high-dimensional Hamiltonian systems.
The addition of quadratic terms to the state approximation, which sits at the heart of the proposed methodologies, enables us to better represent intrinsic low-dimensionality.
arXiv Detail & Related papers (2023-05-24T18:23:25Z) - Optimization Induced Equilibrium Networks [76.05825996887573]
Implicit equilibrium models, i.e., deep neural networks (DNNs) defined by implicit equations, have recently attracted growing attention.
We show that deep OptEq outperforms previous implicit models even with fewer parameters.
arXiv Detail & Related papers (2021-05-27T15:17:41Z) - Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z) - Effective Dimension Adaptive Sketching Methods for Faster Regularized Least-Squares Optimization [56.05635751529922]
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.
We consider two of the most popular random embeddings, namely, Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT)
arXiv Detail & Related papers (2020-06-10T15:00:09Z) - Adaptive Control and Regret Minimization in Linear Quadratic Gaussian (LQG) Setting [91.43582419264763]
We propose LqgOpt, a novel reinforcement learning algorithm based on the principle of optimism in the face of uncertainty.
LqgOpt efficiently explores the system dynamics, estimates the model parameters up to their confidence interval, and deploys the controller of the most optimistic model.
arXiv Detail & Related papers (2020-03-12T19:56:38Z) - Loss landscapes and optimization in over-parameterized non-linear systems and neural networks [20.44438519046223]
We show that wide neural networks satisfy the PL$*$ condition, which explains the (S)GD convergence to a global minimum.
arXiv Detail & Related papers (2020-02-29T17:18:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.