An Efficient Decomposition of the Carleman Linearized Burgers' Equation
- URL: http://arxiv.org/abs/2505.00285v2
- Date: Thu, 26 Jun 2025 15:07:26 GMT
- Title: An Efficient Decomposition of the Carleman Linearized Burgers' Equation
- Authors: Reuben Demirdjian, Thomas Hogancamp, Daniel Gunlycke
- Abstract summary: We use the Carleman linearization method to map the nonlinear Burgers' equation into an infinite linear system of equations, which is then truncated to a finite order. This finite linear system is embedded into a larger system of equations with the key property that its matrix can be decomposed into a linear combination of $\mathcal{O}(\log n_t + \alpha^2\log n_x)$ terms. This is the first efficient data-loading method for a Carleman linearized system.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Herein, we present a polylogarithmic decomposition method to load the matrix from the linearized 1-dimensional Burgers' equation onto a quantum computer. First, we use the Carleman linearization method to map the nonlinear Burgers' equation into an infinite linear system of equations, which is subsequently truncated to order $\alpha$. This new finite linear system is then embedded into a larger system of equations with the key property that its matrix can be decomposed into a linear combination of $\mathcal{O}(\log n_t + \alpha^2\log n_x)$ terms for $n_t$ time steps and $n_x$ spatial grid points. While the terms in this linear combination are not unitary, each is implemented with a simple block encoding, and the variational quantum linear solver (VQLS) routine may be used to obtain a solution. Finally, a complexity analysis of the required VQLS circuits shows that the upper bound of the two-qubit gate depth among all of the block encoded matrices is $\mathcal{O}(\alpha(\log n_x)^2)$. This is therefore the first efficient data-loading method for a Carleman linearized system.
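The pipeline described in the abstract can be illustrated classically. The sketch below (a minimal NumPy illustration, not the paper's decomposition or block encodings; the particular finite-difference stencils and the periodic boundary conditions are assumptions) builds the Carleman matrix for the semi-discretized Burgers' equation $du/dt = F_1 u + F_2\,(u\otimes u)$, truncated at order $\alpha$:

```python
import numpy as np

def burgers_matrices(nx, nu=0.1, dx=1.0):
    """Finite-difference operators for 1-D Burgers' equation
    du/dt = nu*u_xx - u*u_x on a periodic grid (illustrative choice).
    F1 acts on u; F2 acts on the Kronecker square u (x) u."""
    # Linear part: centered second difference (diffusion).
    F1 = np.zeros((nx, nx))
    for i in range(nx):
        F1[i, i] = -2.0
        F1[i, (i - 1) % nx] = 1.0
        F1[i, (i + 1) % nx] = 1.0
    F1 *= nu / dx**2
    # Quadratic part: -u_i*(u_{i+1} - u_{i-1})/(2 dx) written as F2 @ kron(u, u),
    # where kron(u, u) has entry u_i*u_j at index i*nx + j.
    F2 = np.zeros((nx, nx * nx))
    for i in range(nx):
        F2[i, i * nx + (i + 1) % nx] = -1.0 / (2 * dx)
        F2[i, i * nx + (i - 1) % nx] = +1.0 / (2 * dx)
    return F1, F2

def carleman_matrix(F1, F2, alpha):
    """Block upper-bidiagonal Carleman matrix truncated at order alpha.
    Variables are y_k = u^{(x) k}, so dy_k/dt = A_{k,k} y_k + A_{k,k+1} y_{k+1}."""
    nx = F1.shape[0]
    sizes = [nx**k for k in range(1, alpha + 1)]
    offs = np.concatenate(([0], np.cumsum(sizes)))
    A = np.zeros((offs[-1], offs[-1]))
    for k in range(1, alpha + 1):
        # A_{k,k} = sum_j I^{(x)(j-1)} (x) F1 (x) I^{(x)(k-j)}
        Akk = sum(np.kron(np.kron(np.eye(nx**(j - 1)), F1), np.eye(nx**(k - j)))
                  for j in range(1, k + 1))
        A[offs[k - 1]:offs[k], offs[k - 1]:offs[k]] = Akk
        if k < alpha:
            # A_{k,k+1} is the same sum with F2 (maps nx^{k+1} -> nx^k).
            Akk1 = sum(np.kron(np.kron(np.eye(nx**(j - 1)), F2), np.eye(nx**(k - j)))
                       for j in range(1, k + 1))
            A[offs[k - 1]:offs[k], offs[k]:offs[k + 1]] = Akk1
    return A
```

The resulting matrix is block upper-bidiagonal; the paper's contribution is decomposing (an embedding of) such a matrix into $\mathcal{O}(\log n_t + \alpha^2\log n_x)$ block-encodable terms, which this classical sketch does not attempt.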
Related papers
- Block encoding of sparse matrices with a periodic diagonal structure [67.45502291821956]
We provide an explicit quantum circuit for block encoding a sparse matrix with a periodic diagonal structure. Various applications of the presented methodology are discussed in the context of solving differential problems.
arXiv Detail & Related papers (2026-02-11T07:24:33Z)
- Quantum Kaczmarz Algorithm for Solving Linear Algebraic Equations [0.0]
We introduce a quantum linear system solving algorithm based on the Kaczmarz method. Its simplicity and low memory cost make it a practical choice across data regression, tomographic reconstruction, and optimization.
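For reference, the classical Kaczmarz iteration that underlies this algorithm can be sketched in a few lines (a cyclic-sweep variant; the quantum version in the paper differs):

```python
import numpy as np

def kaczmarz(A, b, sweeps=200):
    """Cyclic Kaczmarz iteration for A x = b: repeatedly project the
    current iterate onto the hyperplane defined by one row at a time."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            # Orthogonal projection onto {x : a.x = b_i}.
            x += (b[i] - a @ x) / (a @ a) * a
    return x
```

Each update touches a single row, which is the source of the low memory cost mentioned in the abstract.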
arXiv Detail & Related papers (2026-01-04T03:13:36Z)
- Efficient quantum algorithm for solving differential equations with Fourier nonlinearity via Koopman linearization [0.0]
We construct an efficient quantum algorithm for solving ODEs $d\mathbf{u}/dt$ with Fourier nonlinear terms. We also make other methodological advances towards relaxing the stringent dissipativity condition required for efficient solution extraction.
arXiv Detail & Related papers (2025-12-06T16:29:06Z)
- Explicit Discovery of Nonlinear Symmetries from Dynamic Data [50.20526548924647]
LieNLSD is the first method capable of determining the number of infinitesimal generators with nonlinear terms and their explicit expressions. LieNLSD shows qualitative advantages over existing methods and improves the long-rollout accuracy of neural PDE solvers by over 20%.
arXiv Detail & Related papers (2025-10-02T09:54:08Z)
- Variational quantum algorithm for the Poisson equation based on the banded Toeplitz systems [4.7487511537612335]
We give a variational quantum algorithm for solving the discretized Poisson equation. We decompose the matrices $A$ and $A^2$ into linear combinations of banded Toeplitz matrices. Based on this decomposition, we design quantum circuits that evaluate the cost function efficiently.
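The classical identity behind this decomposition is easy to state: the 1-D discrete Poisson matrix is itself a linear combination of banded Toeplitz matrices. A minimal sketch (the classical identity only, not the paper's quantum circuits):

```python
import numpy as np

def toeplitz_band(n, k):
    """Banded Toeplitz matrix with ones on the k-th diagonal."""
    return np.eye(n, k=k)

def poisson_1d(n):
    """1-D discrete Poisson matrix written explicitly as a linear
    combination of three banded Toeplitz terms: A = 2*T_0 - T_1 - T_{-1}."""
    return 2 * toeplitz_band(n, 0) - toeplitz_band(n, 1) - toeplitz_band(n, -1)
```

Squaring $A$ produces bands at offsets $0, \pm 1, \pm 2$ plus corrections at the boundary rows, which is why $A^2$ also requires a banded-Toeplitz treatment.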
arXiv Detail & Related papers (2025-04-21T03:07:49Z)
- Provable Acceleration of Nesterov's Accelerated Gradient for Rectangular Matrix Factorization and Linear Neural Networks [46.04785603483612]
We prove that Nesterov's accelerated gradient (NAG) attains a complexity of $O(\kappa\log\frac{1}{\epsilon})$.
In particular, we prove that NAG can also attain an accelerated linear convergence rate.
arXiv Detail & Related papers (2024-10-12T20:33:37Z)
- Fine-grained Analysis and Faster Algorithms for Iteratively Solving Linear Systems [9.30306458153248]
We consider the spectral tail condition number, $\kappa_\ell$, defined as the ratio between the $\ell$-th largest and the smallest singular value of the matrix representing the system.
Some of the implications of our result, and of the use of $\kappa_\ell$, include a direct improvement over a fine-grained analysis of the Conjugate Gradient method.
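The quantity $\kappa_\ell$ defined above is straightforward to compute; a minimal NumPy version:

```python
import numpy as np

def spectral_tail_condition_number(A, ell):
    """kappa_ell from the abstract: ratio of the ell-th largest
    singular value of A to its smallest singular value."""
    s = np.linalg.svd(A, compute_uv=False)  # returned in descending order
    return s[ell - 1] / s[-1]
```

For $\ell = 1$ this reduces to the usual condition number; larger $\ell$ discards the top $\ell-1$ singular values from the numerator.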
arXiv Detail & Related papers (2024-05-09T14:56:49Z)
- A quantum central path algorithm for linear optimization [5.450016817940232]
We propose a novel quantum algorithm for solving linear optimization problems by quantum-mechanical simulation of the central path.
This approach yields an algorithm for solving linear optimization problems involving $m$ constraints and $n$ variables to $\varepsilon$-optimality.
In the standard gate model (i.e., without access to quantum RAM), our algorithm can obtain highly precise solutions to LO problems using at most $\mathcal{O}\left(\sqrt{m+n}\;\textsf{nnz}(A)\,\frac{R_1}{\cdots}\right)$
arXiv Detail & Related papers (2023-11-07T13:26:20Z)
- Vectorization of the density matrix and quantum simulation of the von Neumann equation of time-dependent Hamiltonians [65.268245109828]
We develop a general framework to linearize the von Neumann equation, rendering it in a form suitable for quantum simulations.
We show that one of these linearizations of the von-Neumann equation corresponds to the standard case in which the state vector becomes the column stacked elements of the density matrix.
A quantum algorithm to simulate the dynamics of the density matrix is proposed.
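The column-stacking linearization mentioned above has a compact classical form: for $\dot\rho = -i[H,\rho]$, column stacking gives $\frac{d}{dt}\mathrm{vec}(\rho) = -i\,(I\otimes H - H^{T}\otimes I)\,\mathrm{vec}(\rho)$, using $\mathrm{vec}(AXB) = (B^{T}\otimes A)\,\mathrm{vec}(X)$. A minimal NumPy check:

```python
import numpy as np

def vec(rho):
    """Column-stack a matrix (Fortran order), the convention under which
    vec(A X B) = (B^T kron A) vec(X)."""
    return rho.flatten(order="F")

def liouvillian(H):
    """Matrix L such that d vec(rho)/dt = L vec(rho) for the
    von Neumann equation d rho/dt = -i [H, rho]."""
    I = np.eye(H.shape[0])
    return -1j * (np.kron(I, H) - np.kron(H.T, I))
```

The vectorized equation is linear in $\mathrm{vec}(\rho)$, which is what makes quantum simulation of the density-matrix dynamics tractable with linear-system machinery.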
arXiv Detail & Related papers (2023-06-14T23:08:51Z)
- Refined Regret for Adversarial MDPs with Linear Function Approximation [50.00022394876222]
We consider learning in an adversarial Markov Decision Process (MDP) where the loss functions can change arbitrarily over $K$ episodes.
This paper provides two algorithms that improve the regret to $\tilde{\mathcal{O}}(K^{2/3})$ in the same setting.
arXiv Detail & Related papers (2023-01-30T14:37:21Z)
- Quantum Resources Required to Block-Encode a Matrix of Classical Data [56.508135743727934]
We provide circuit-level implementations and resource estimates for several methods of block-encoding a dense $N\times N$ matrix of classical data to precision $\epsilon$.
We examine resource tradeoffs between the different approaches and explore implementations of two separate models of quantum random access memory (QRAM).
Our results go beyond simple query complexity and provide a clear picture into the resource costs when large amounts of classical data are assumed to be accessible to quantum algorithms.
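As background, an exact block encoding can be written down directly for any matrix with $\|A\| \le 1$ by completing it to the unitary $U = \begin{pmatrix} A & \sqrt{I - AA^\dagger} \\ \sqrt{I - A^\dagger A} & -A^\dagger \end{pmatrix}$. The sketch below is the textbook construction, not the QRAM-based circuits studied in the paper:

```python
import numpy as np

def block_encode(A):
    """Complete a matrix A with spectral norm <= 1 to a unitary whose
    top-left block is A (a 1-block-encoding, as an explicit matrix)."""
    def psd_sqrt(M):
        # Principal square root of a positive semidefinite matrix.
        w, V = np.linalg.eigh(M)
        return (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T
    n = A.shape[0]
    B = psd_sqrt(np.eye(n) - A @ A.conj().T)
    C = psd_sqrt(np.eye(n) - A.conj().T @ A)
    return np.block([[A, B], [C, -A.conj().T]])
```

Unitarity follows from $A\,f(A^\dagger A) = f(AA^\dagger)\,A$; the hard part the paper addresses is realizing such an encoding as an efficient circuit over classical data, which this matrix-level sketch sidesteps entirely.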
arXiv Detail & Related papers (2022-06-07T18:00:01Z)
- Quantum algorithms for matrix operations and linear systems of equations [65.62256987706128]
We propose quantum algorithms for matrix operations using the "Sender-Receiver" model.
These quantum protocols can be used as subroutines in other quantum schemes.
arXiv Detail & Related papers (2022-02-10T08:12:20Z)
- Quantum Algorithm for Solving a Quadratic Nonlinear System of Equations [0.22940141855172036]
The complexity of our algorithm is $O(\mathrm{polylog}(n/\epsilon))$, which provides an exponential improvement over the optimal classical algorithm in dimension $n$.
Our algorithm exponentially accelerates the solution of QNSE and has wide applications in all kinds of nonlinear problems.
arXiv Detail & Related papers (2021-12-03T00:27:16Z)
- Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers, and achieved comparable performance to pure data-driven networks but using far fewer parameters.
arXiv Detail & Related papers (2021-09-10T07:01:15Z)
- Improving quantum linear system solvers via a gradient descent perspective [3.0969191504482247]
We revisit quantum linear system solvers from the perspective of convex optimization.
This leads to a considerable constant-factor improvement in the runtime.
We show how the optimal quantum linear system solver of Childs, Kothari, and Somma is related to the gradient descent algorithm.
arXiv Detail & Related papers (2021-09-09T13:16:28Z)
- Projection-free Graph-based Classifier Learning using Gershgorin Disc Perfect Alignment [59.87663954467815]
In graph-based binary learning, a subset of known labels $\hat{x}_i$ is used to infer unknown labels.
When restricting labels $x_i$ to binary values, the problem is NP-hard.
We propose a fast projection-free method by solving a sequence of linear programs (LP) instead.
arXiv Detail & Related papers (2021-06-03T07:22:48Z)
- Quantum-classical algorithms for skewed linear systems with optimized Hadamard test [10.386115383285288]
We discuss hybrid quantum-classical algorithms for skewed linear systems for over-determined and under-determined cases.
Our input model is such that the columns or rows of the matrix defining the linear system are given via quantum circuits of poly-logarithmic depth.
We present an algorithm for the special case of a factorized linear system with run time poly-logarithmic in the respective dimensions.
arXiv Detail & Related papers (2020-09-28T12:59:27Z)
- Learning nonlinear dynamical systems from a single trajectory [102.60042167341956]
We introduce algorithms for learning nonlinear dynamical systems of the form $x_{t+1}=\sigma(\Theta^{\star}x_t)+\varepsilon_t$.
We give an algorithm that recovers the weight matrix $\Theta^{\star}$ from a single trajectory with optimal sample complexity and linear running time.
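A simple estimator for this model can be sketched with a GLM-tron-style update (an assumption for illustration; not necessarily the paper's exact algorithm or its guarantees):

```python
import numpy as np

def learn_dynamics(xs, sigma=np.tanh, eta=0.5, iters=1000):
    """GLM-tron-style estimator for x_{t+1} = sigma(Theta x_t) + eps_t
    from a single trajectory xs of shape (T, d). The update drops sigma',
    which is valid for monotone increasing link functions."""
    d = xs.shape[1]
    Theta = np.zeros((d, d))
    X, Y = xs[:-1], xs[1:]
    for _ in range(iters):
        resid = Y - sigma(X @ Theta.T)         # one-step prediction errors
        Theta += (eta / len(X)) * resid.T @ X  # averaged correlation update
    return Theta
```

The update correlates residuals with states rather than using the true gradient, a standard trick that avoids dependence on $\sigma'$ for monotone links such as $\tanh$ or ReLU.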
arXiv Detail & Related papers (2020-04-30T10:42:48Z)
- Private Learning of Halfspaces: Simplifying the Construction and Reducing the Sample Complexity [63.29100726064574]
We present a differentially private learner for halfspaces over a finite grid $G$ in $\mathbb{R}^d$ with sample complexity $\approx d^{2.5}\cdot 2^{\log^*|G|}$.
The building block for our learner is a new differentially private algorithm for approximately solving the linear feasibility problem.
arXiv Detail & Related papers (2020-04-16T16:12:10Z)
- Solving the Robust Matrix Completion Problem via a System of Nonlinear Equations [28.83358353043287]
We consider the problem of robust matrix completion, which aims to recover a low rank matrix $L_*$ and a sparse matrix $S_*$ from incomplete observations of their sum $M=L_*+S_*\in\mathbb{R}^{m\times n}$.
The algorithm is highly parallelizable and suitable for large scale problems.
Numerical simulations show that the simple method works as expected and is comparable with state-of-the-art methods.
arXiv Detail & Related papers (2020-03-24T17:28:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.