Benchmarking Quantum and Classical Algorithms for the 1D Burgers Equation: QTN, HSE, and PINN
- URL: http://arxiv.org/abs/2602.04239v1
- Date: Wed, 04 Feb 2026 05:57:27 GMT
- Title: Benchmarking Quantum and Classical Algorithms for the 1D Burgers Equation: QTN, HSE, and PINN
- Authors: Vanshaj Kerni, Abdelrahman E. Ahmed, Syed Ali Asghar,
- Abstract summary: We present a comparative benchmark of Quantum Tensor Networks (QTN), the Hydrodynamic Schrödinger Equation (HSE), and Physics-Informed Neural Networks (PINN) for simulating the 1D Burgers' equation. We analyse solution accuracy, runtime scaling, and resource overhead across grid resolutions ranging from $N=4$ to $N=128$.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We present a comparative benchmark of Quantum Tensor Networks (QTN), the Hydrodynamic Schrödinger Equation (HSE), and Physics-Informed Neural Networks (PINN) for simulating the 1D Burgers' equation. Evaluating these emerging paradigms against classical GMRES and Spectral baselines, we analyse solution accuracy, runtime scaling, and resource overhead across grid resolutions ranging from $N=4$ to $N=128$. Our results reveal a distinct performance hierarchy. The QTN solver achieves superior precision ($L_2 \sim 10^{-7}$) with remarkable near-constant runtime scaling, effectively leveraging entanglement compression to capture shock fronts. In contrast, while the Finite-Difference HSE implementation remains robust, the Spectral HSE method suffers catastrophic numerical instability at high resolutions, diverging significantly at $N=128$. PINNs demonstrate flexibility as mesh-free solvers but stall at lower accuracy tiers ($L_2 \sim 10^{-1}$), limited by spectral bias compared to grid-based methods. Ultimately, while quantum methods offer novel representational advantages for low-resolution fluid dynamics, this study confirms they currently yield no computational advantage over classical solvers without fault tolerance or significant algorithmic breakthroughs in handling non-linear feedback.
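As a concrete reference point for the benchmark above, the sketch below implements one of the simplest classical baselines for the viscous 1D Burgers' equation $u_t + u u_x = \nu u_{xx}$: a pseudo-spectral (FFT) discretization with explicit RK4 time stepping, in the spirit of the paper's Spectral baseline. The grid size, viscosity, time step, and initial condition are illustrative assumptions, not the paper's settings.

```python
# Minimal pseudo-spectral solver for the 1D viscous Burgers' equation
#   u_t + u * u_x = nu * u_xx,   periodic on [0, 2*pi),
# with classic RK4 time stepping. All parameters are illustrative.
import numpy as np

N, nu, dt, T = 128, 0.05, 1e-4, 1.0
x = 2 * np.pi * np.arange(N) / N
ik = 1j * np.fft.fftfreq(N, d=1.0 / N)   # integer wavenumbers times i
u = np.sin(x)                            # illustrative initial condition

def rhs(u):
    u_hat = np.fft.fft(u)
    u_x = np.real(np.fft.ifft(ik * u_hat))        # spectral first derivative
    u_xx = np.real(np.fft.ifft(ik**2 * u_hat))    # spectral second derivative
    return -u * u_x + nu * u_xx

for _ in range(int(T / dt)):             # classic fourth-order Runge-Kutta
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    u = u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
```

Measuring an $L_2$ error against a finer-grid run of the same scheme is one way to reproduce the kind of accuracy comparison reported in the abstract.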
Related papers
- Layer-wise QUBO-Based Training of CNN Classifiers for Quantum Annealing [0.0]
We propose an iterative framework based on Quadratic Unconstrained Binary Optimization (QUBO) for training the classification head of convolutional neural networks (CNNs). A per-output decomposition splits the $C$-class problem into $C$ independent QUBOs, each with $(d+1)K$ binary variables, where $d$ is the feature dimension and $K$ is the bit precision; a toy sketch of this construction appears after this list. We evaluate the method on six image-classification benchmarks (sklearn digits, MNIST, Fashion-MNIST, CIFAR-10, EMNIST, KMNIST).
arXiv Detail & Related papers (2026-03-03T13:10:36Z) - Quantum-Enhanced Neural Contextual Bandit Algorithms [50.880384999888044]
This paper introduces the Quantum Neural Tangent Kernel-Upper Confidence Bound (QNTK-UCB) algorithm, a novel method that leverages the Quantum Neural Tangent Kernel (QNTK) to address the limitations of its classical counterparts.
arXiv Detail & Related papers (2026-01-06T09:58:14Z) - Gradient Descent as a Perceptron Algorithm: Understanding Dynamics and Implicit Acceleration [67.12978375116599]
We show that the steps of gradient descent (GD) reduce to those of generalized perceptron algorithms. This helps explain the optimization dynamics and the implicit acceleration phenomenon observed in neural networks.
arXiv Detail & Related papers (2025-12-12T14:16:35Z) - Neural PDE Solvers with Physics Constraints: A Comparative Study of PINNs, DRM, and WANs [1.131316248570352]
Partial differential equations (PDEs) underpin models across science and engineering, yet analytical solutions are atypical and classical mesh-based solvers can be costly in high dimensions. This dissertation presents a unified comparison of three mesh-free neural PDE solvers, physics-informed neural networks (PINNs), the deep Ritz method (DRM), and weak adversarial networks (WANs), on Poisson problems (up to 5D) and the time-independent Schrödinger equation in 1D/2D.
arXiv Detail & Related papers (2025-10-09T13:41:51Z) - Hybrid Quantum-Classical Neural Networks for Few-Shot Credit Risk Assessment [52.05742536403784]
This work tackles the challenge of few-shot credit risk assessment by designing and implementing a novel hybrid quantum-classical workflow, in which a Quantum Neural Network (QNN) is trained via the parameter-shift rule (illustrated in a sketch after this list). On a real-world credit dataset of 279 samples, the QNN achieved a robust average AUC of 0.852 ± 0.027 in simulations and an AUC of 0.88 in the hardware experiment.
arXiv Detail & Related papers (2025-09-17T08:36:05Z) - Convolution-weighting method for the physics-informed neural network: A Primal-Dual Optimization Perspective [14.65008276932511]
Physics-informed neural networks (PINNs) are extensively employed to solve partial differential equations (PDEs), yet they are typically optimized over a finite set of points, which makes their convergence and accuracy hard to guarantee. We propose a new weighting scheme that adaptively shifts the loss weights from isolated points to their continuous neighborhood regions.
arXiv Detail & Related papers (2025-06-24T17:13:51Z) - On Quantum BSDE Solver for High-Dimensional Parabolic PDEs [8.072353085704627]
This study employs a pure Variational Quantum Circuit (VQC) as the core solver, without trainable classical neural networks. We benchmark VQC-based and classical deep neural network (DNN) solvers on two canonical PDEs as representatives. The VQC achieves lower variance and improved accuracy in most cases, particularly in highly nonlinear regimes.
arXiv Detail & Related papers (2025-06-17T15:10:42Z) - Practical Application of the Quantum Carleman Lattice Boltzmann Method in Industrial CFD Simulations [44.99833362998488]
This work presents a practical numerical assessment of a hybrid quantum-classical approach to CFD based on the Lattice Boltzmann Method (LBM) and Carleman linearization (a toy example of which is sketched after this list). We evaluate this method on three benchmark cases featuring different boundary conditions: periodic, bounce-back, and moving wall. Our results confirm the validity of the approach, achieving median error fidelities on the order of $10^{-3}$ and success probabilities sufficient for practical quantum state sampling.
arXiv Detail & Related papers (2025-04-17T15:41:48Z) - Enhancing GNNs Performance on Combinatorial Optimization by Recurrent Feature Update [0.09986418756990156]
We introduce a novel algorithm, denoted hereafter as QRF-GNN, leveraging the power of GNNs to efficiently solve combinatorial optimization (CO) problems.
It relies on unsupervised learning by minimizing a loss function derived from a QUBO relaxation.
Experimental results show that QRF-GNN drastically surpasses existing learning-based approaches and is comparable to conventional state-of-the-art methods.
arXiv Detail & Related papers (2024-07-23T13:34:35Z) - A Deep Unrolling Model with Hybrid Optimization Structure for Hyperspectral Image Deconvolution [50.13564338607482]
We propose a novel optimization framework for the hyperspectral deconvolution problem, called DeepMix. It consists of three distinct modules: a data consistency module, a module that enforces the effect of the handcrafted regularizers, and a denoising module. This work proposes a context-aware denoising module designed to sustain the advancements achieved by the cooperative efforts of the other modules.
arXiv Detail & Related papers (2023-06-10T08:25:16Z) - Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been demonstrated to be effective in solving forward and inverse differential equation problems.
However, PINNs suffer training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process; a toy implicit-update step is sketched after this list.
arXiv Detail & Related papers (2023-03-03T08:17:47Z) - NAG-GS: Semi-Implicit, Accelerated and Robust Stochastic Optimizer [45.47667026025716]
We propose a novel, robust and accelerated stochastic iteration that relies on two key elements.
The convergence and stability of the obtained method, referred to as NAG-GS, are first studied extensively.
We show that NAG-GS is competitive with state-of-the-art methods such as momentum SGD with weight decay and AdamW for the training of machine learning models.
arXiv Detail & Related papers (2022-09-29T16:54:53Z) - Widening and Squeezing: Towards Accurate and Efficient QNNs [125.172220129257]
Quantization neural networks (QNNs) are very attractive to industry because of their extremely cheap computation and storage overhead, but their performance is still worse than that of networks with full-precision parameters.
Most existing methods aim to enhance the performance of QNNs, especially binary neural networks, by exploiting more effective training techniques.
We address this problem by projecting features in the original full-precision networks to high-dimensional quantization features.
arXiv Detail & Related papers (2020-02-03T04:11:13Z)
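As referenced in the layer-wise QUBO entry above, the following is a hypothetical sketch of a per-output QUBO: a single class head fitted by least squares, with each of the $(d+1)$ weights ($d$ features plus a bias) encoded in $K$ signed fixed-point bits, giving $(d+1)K$ binary variables. The data, bit coding, and scale are illustrative assumptions, not the paper's implementation.

```python
# Build one per-output QUBO for a least-squares head ||X w - y||^2, with
# each of the (d+1) weights encoded in K signed fixed-point bits.
import numpy as np

rng = np.random.default_rng(0)
n, d, K = 32, 4, 3
X = np.hstack([rng.normal(size=(n, d)), np.ones((n, 1))])  # bias column
y = rng.normal(size=n)

# Bit-to-weight map w = B @ b: bit weights 1, 2, ..., -2^(K-1), scaled
# (a two's-complement-style signed coding; the scale is an assumption).
coeffs = 0.25 * np.array([2.0**j for j in range(K - 1)] + [-2.0**(K - 1)])
B = np.kron(np.eye(d + 1), coeffs)       # shape (d+1, (d+1)*K)

A = B.T @ (X.T @ X) @ B                  # quadratic couplings
lin = -2.0 * B.T @ (X.T @ y)             # linear terms
Q = A + np.diag(lin)                     # b_i^2 = b_i for bits, so the
                                         # linear part folds onto the diagonal
print(Q.shape)                           # ((d+1)*K, (d+1)*K) = (15, 15)
```

The $C$-class problem would repeat this construction once per output, which is exactly the independence the per-output decomposition exploits.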
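For the few-shot credit risk entry, the parameter-shift rule used to train the QNN can be checked on a single qubit: with $f(\theta) = \langle 0|R_Y(\theta)^\dagger Z R_Y(\theta)|0\rangle = \cos\theta$, the exact gradient is $\tfrac{1}{2}\left[f(\theta + \pi/2) - f(\theta - \pi/2)\right]$. This is a generic one-gate illustration in plain NumPy, not the authors' circuit.

```python
# Parameter-shift rule on one qubit: gradient of <Z> after an RY rotation.
import numpy as np

Z = np.diag([1.0, -1.0])

def ry(theta):
    # RY(theta) = exp(-i * theta * Y / 2) is real-valued
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    return np.array([[c, -s], [s, c]])

def f(theta):
    psi = ry(theta) @ np.array([1.0, 0.0])   # state RY(theta)|0>
    return psi @ Z @ psi                     # <Z> = cos(theta)

theta = 0.7
shift_grad = 0.5 * (f(theta + np.pi / 2) - f(theta - np.pi / 2))
print(shift_grad, -np.sin(theta))            # both print -sin(0.7)
```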
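For the quantum Carleman-LBM entry, the Carleman linearization named in its title can be demonstrated on a scalar toy problem: the nonlinear ODE $u' = -au - bu^2$ becomes linear in the moments $y_m = u^m$, because $y_m' = -am\,y_m - bm\,y_{m+1}$; truncating at order $M$ gives a finite linear system $y' = Cy$. The coefficients and truncation order below are illustrative and unrelated to the paper's LBM setup.

```python
# Truncated Carleman linearization of u' = -a*u - b*u^2.
import numpy as np

a, b, M = 1.0, 0.3, 8                    # illustrative coefficients, order
C = np.zeros((M, M))
for m in range(1, M + 1):
    C[m - 1, m - 1] = -a * m             # -a*m * y_m
    if m < M:
        C[m - 1, m] = -b * m             # -b*m * y_{m+1} (dropped at m = M)

u0, dt, T = 0.8, 1e-3, 2.0
y = u0 ** np.arange(1, M + 1)            # initial moments y_m = u0^m
for _ in range(int(T / dt)):
    y = y + dt * (C @ y)                 # forward Euler on the linear system

u_exact = a * u0 / ((a + b * u0) * np.exp(a * T) - b * u0)  # closed form
print(y[0], u_exact)                     # truncated Carleman tracks the exact u
```

The truncation error of such schemes is what ties them to weak nonlinearity, echoing the main paper's caveat about non-linear feedback.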
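Finally, for the ISGD entry, an implicit gradient step solves $\theta_{k+1} = \theta_k - \eta\,\nabla L(\theta_{k+1})$, evaluating the gradient at the new iterate rather than the current one, which is what buys stability at large step sizes. The toy below solves that implicit equation with Newton's method on a quadratic loss chosen so that explicit GD would diverge; the loss and step size are illustrative assumptions.

```python
# One implicit gradient step theta_new = theta - eta * grad(theta_new),
# solved by Newton's method on F(x) = x - theta + eta * grad(x).
a, eta = 10.0, 0.5                       # eta * a > 2: explicit GD diverges
loss_grad = lambda t: a * t              # gradient of L(t) = 0.5 * a * t**2

def implicit_step(theta, eta, iters=20):
    x, h = theta, 1e-6
    for _ in range(iters):
        F = x - theta + eta * loss_grad(x)
        dF = 1.0 + eta * (loss_grad(x + h) - loss_grad(x - h)) / (2 * h)  # F'(x)
        x -= F / dF
    return x

theta = 1.0
print(implicit_step(theta, eta))         # theta / (1 + eta*a) = 1/6: stable
print(theta - eta * loss_grad(theta))    # explicit step overshoots to -4.0
```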
This list is automatically generated from the titles and abstracts of the papers in this site.