QCPINN: Quantum-Classical Physics-Informed Neural Networks for Solving PDEs
- URL: http://arxiv.org/abs/2503.16678v4
- Date: Thu, 10 Apr 2025 20:26:10 GMT
- Title: QCPINN: Quantum-Classical Physics-Informed Neural Networks for Solving PDEs
- Authors: Afrah Farea, Saiful Khan, Mustafa Serdar Celebi
- Abstract summary: Physics-informed neural networks (PINNs) have emerged as promising methods for solving partial differential equations (PDEs). We present a quantum-classical physics-informed neural network (QCPINN) that combines quantum and classical components. QCPINN achieves stable convergence and comparable accuracy, while requiring approximately 10% of the trainable parameters used in classical approaches.
- Score: 0.70224924046445
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Physics-informed neural networks (PINNs) have emerged as promising methods for solving partial differential equations (PDEs) by embedding physical laws within neural architectures. However, these classical approaches often require a large number of parameters to achieve reasonable accuracy, particularly for complex PDEs. In this paper, we present a quantum-classical physics-informed neural network (QCPINN) that combines quantum and classical components, allowing us to solve PDEs with significantly fewer parameters while maintaining comparable accuracy and convergence to classical PINNs. We systematically evaluated two quantum circuit architectures across various configurations on five benchmark PDEs to identify optimal QCPINN designs. Our results demonstrate that the QCPINN achieves stable convergence and comparable accuracy, while requiring approximately 10% of the trainable parameters used in classical approaches. It also results in a 40% reduction in the relative L2 error for the convection-diffusion equation. These findings demonstrate the potential of parameter efficiency as a measurable quantum advantage in physics-informed machine learning, significantly reducing model complexity while preserving solution quality. This approach presents a promising solution to the computational challenges associated with solving PDEs.
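As a rough illustration of the hybrid pattern described above (not the authors' specific circuit design), the sketch below wires a small PennyLane variational circuit between two classical PyTorch layers and forms a physics loss from autograd derivatives; the qubit count, circuit layout, and the diffusion-type residual are assumptions chosen for brevity.

```python
# Minimal hybrid quantum-classical PINN sketch (illustrative, not the paper's exact circuit).
import torch
import torch.nn as nn
import pennylane as qml

n_qubits, q_depth = 4, 3
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def circuit(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))          # encode classical features as rotation angles
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))   # trainable entangling layers
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

q_layer = qml.qnn.TorchLayer(circuit, {"weights": (q_depth, n_qubits)})

model = nn.Sequential(
    nn.Linear(2, n_qubits), nn.Tanh(),   # classical pre-net: (x, t) -> qubit angles
    q_layer,                             # quantum core with few trainable parameters
    nn.Linear(n_qubits, 1),              # classical post-net: expectation values -> u(x, t)
)

def pde_residual(xt, nu=0.01):
    """Residual of u_t - nu * u_xx = 0, an illustrative physics-loss term."""
    xt = xt.clone().requires_grad_(True)
    u = model(xt)
    grads = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = grads[:, 0:1], grads[:, 1:2]
    u_xx = torch.autograd.grad(u_x, xt, torch.ones_like(u_x), create_graph=True)[0][:, 0:1]
    return u_t - nu * u_xx

loss = pde_residual(torch.rand(64, 2)).pow(2).mean()   # add boundary/initial terms in practice
loss.backward()
```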
Related papers
- PIG: Physics-Informed Gaussians as Adaptive Parametric Mesh Representations [5.4087282763977855]
We propose Physics-Informed Gaussians (PIGs), which combine feature embeddings using Gaussian functions with a lightweight neural network.
Our approach uses trainable parameters for the mean and variance of each Gaussian, allowing for dynamic adjustment of their positions and shapes during training.
Experimental results show the competitive performance of our model across various PDEs, demonstrating its potential as a robust tool for solving complex PDEs.
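A minimal sketch of the idea, assuming trainable Gaussian centers and widths feeding a small MLP; the layer sizes and the way features are combined are illustrative, not PIG's exact design.

```python
# Gaussian feature embeddings with trainable means and (log-)widths, feeding a lightweight net.
import torch
import torch.nn as nn

class GaussianFeatures(nn.Module):
    def __init__(self, n_gaussians: int, dim: int):
        super().__init__()
        self.mu = nn.Parameter(torch.rand(n_gaussians, dim))          # trainable centers
        self.log_sigma = nn.Parameter(torch.zeros(n_gaussians, dim))  # trainable widths

    def forward(self, x):                      # x: (batch, dim)
        diff = x[:, None, :] - self.mu[None]   # (batch, n_gaussians, dim)
        sigma = self.log_sigma.exp()
        return torch.exp(-0.5 * (diff / sigma).pow(2).sum(-1))  # (batch, n_gaussians)

model = nn.Sequential(GaussianFeatures(64, 2), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
u = model(torch.rand(128, 2))   # Gaussian positions/shapes and the MLP are trained end to end
```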
arXiv Detail & Related papers (2024-12-08T16:58:29Z) - SPIKANs: Separable Physics-Informed Kolmogorov-Arnold Networks [0.9999629695552196]
Physics-Informed Neural Networks (PINNs) have emerged as a promising method for solving partial differential equations (PDEs).
We introduce Separable Physics-Informed Kolmogorov-Arnold Networks (SPIKANs).
This novel architecture applies the principle of separation of variables to PIKANs, decomposing the problem such that each dimension is handled by an individual KAN.
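A minimal sketch of the separation-of-variables structure, with plain per-dimension MLPs standing in for the KANs (an assumption made to keep the example short).

```python
# Separation-of-variables ansatz: u(x, y) ~ sum_r f_r(x) * g_r(y),
# with one small 1D network per input dimension (MLPs stand in for KANs here).
import torch
import torch.nn as nn

class SeparableNet(nn.Module):
    def __init__(self, rank: int = 16):
        super().__init__()
        self.fx = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, rank))
        self.gy = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, rank))

    def forward(self, x, y):
        # Each dimension is handled by its own network; outputs are contracted
        # over the rank index to form the multi-dimensional solution.
        return (self.fx(x) * self.gy(y)).sum(-1, keepdim=True)

u = SeparableNet()(torch.rand(256, 1), torch.rand(256, 1))   # shape (256, 1)
```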
arXiv Detail & Related papers (2024-11-09T21:10:23Z) - General-Kindred Physics-Informed Neural Network to the Solutions of Singularly Perturbed Differential Equations [11.121415128908566]
We propose the General-Kindred Physics-Informed Neural Network (GKPINN) for solving Singular Perturbation Differential Equations (SPDEs).
This approach utilizes prior knowledge of the boundary layer from the equation and establishes a novel network to assist the PINN in approximating the boundary layer.
The research findings underscore the exceptional performance of our novel approach, GKPINN, which delivers a remarkable enhancement in reducing the $L_2$ error by two to four orders of magnitude compared to the established PINN methodology.
arXiv Detail & Related papers (2024-08-27T02:03:22Z) - Parameterized Physics-informed Neural Networks for Parameterized PDEs [24.926311700375948]
In this paper, we propose a novel extension, parameterized physics-informed neural networks (P$^2$INNs).
P$^2$INNs enable modeling the solutions of parameterized partial differential equations (PDEs) via explicitly encoding a latent representation of the PDE parameters.
We demonstrate that P$^2$INNs outperform the baselines both in accuracy and parameter efficiency on benchmark 1D and 2D parameterized PDEs.
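A hedged sketch of the underlying idea, a PINN conditioned on a learned latent code for the PDE parameters; the encoder/solver split and the layer sizes below are assumptions, not the P$^2$INN architecture itself.

```python
# Parameter-conditioned PINN sketch: an encoder maps PDE parameters (e.g., a viscosity
# coefficient) to a latent code that conditions the solution network.
import torch
import torch.nn as nn

class ParamPINN(nn.Module):
    def __init__(self, latent: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, latent))
        self.solver = nn.Sequential(nn.Linear(2 + latent, 64), nn.Tanh(), nn.Linear(64, 1))

    def forward(self, xt, mu):
        z = self.encoder(mu)                           # latent representation of PDE parameters
        return self.solver(torch.cat([xt, z], dim=-1)) # one network covers a family of PDEs

u = ParamPINN()(torch.rand(128, 2), torch.rand(128, 1))   # (x, t) plus a parameter per point
```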
arXiv Detail & Related papers (2024-08-18T11:58:22Z) - A Hybrid Quantum-Classical Physics-Informed Neural Network Architecture for Solving Quantum Optimal Control Problems [1.4811951486536687]
The study showcases an innovative approach to optimizing quantum state manipulations.
The proposed hybrid model effectively applies machine learning techniques to solve optimal control problems.
This is illustrated through the design and implementation of a hybrid PINN network to solve a quantum state transition problem.
arXiv Detail & Related papers (2024-04-23T13:22:22Z) - Variational quantum eigensolver with linear depth problem-inspired ansatz for solving portfolio optimization in finance [7.501820750179541]
This paper introduces the variational quantum eigensolver (VQE) to solve portfolio optimization problems in finance.
We implement the HDC experiments on the superconducting quantum computer Wu Kong with up to 55 qubits.
The HDC scheme shows great potential for achieving quantum advantage in the NISQ era.
arXiv Detail & Related papers (2024-03-07T07:45:47Z) - Towards Efficient Quantum Hybrid Diffusion Models [68.43405413443175]
We propose a new methodology to design quantum hybrid diffusion models.
We propose two possible hybridization schemes combining quantum computing's superior generalization with classical networks' modularity.
arXiv Detail & Related papers (2024-02-25T16:57:51Z) - Efficient Neural PDE-Solvers using Quantization Aware Training [71.0934372968972]
We show that quantization can successfully lower the computational cost of inference while maintaining performance.
Our results on four standard PDE datasets and three network architectures show that quantization-aware training works across settings and three orders of FLOPs magnitudes.
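A minimal illustration of the quantization-aware-training idea using a hand-rolled fake-quantization module with a straight-through estimator; production pipelines would typically rely on torch.ao.quantization instead, and this is not the paper's setup.

```python
# Fake quantization with a straight-through estimator (illustrative only).
import torch
import torch.nn as nn

class FakeQuant(nn.Module):
    def __init__(self, bits: int = 8):
        super().__init__()
        self.qmax = 2 ** (bits - 1) - 1    # e.g., 127 for int8

    def forward(self, x):
        scale = x.detach().abs().max().clamp(min=1e-8) / self.qmax
        q = (x / scale).round().clamp(-self.qmax - 1, self.qmax) * scale
        return x + (q - x).detach()        # forward: quantized values; backward: identity

solver = nn.Sequential(nn.Linear(2, 64), FakeQuant(), nn.Tanh(), nn.Linear(64, 1))
loss = solver(torch.rand(32, 2)).pow(2).mean()
loss.backward()                            # gradients flow through the straight-through path
```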
arXiv Detail & Related papers (2023-08-14T09:21:19Z) - Quantum Fourier Networks for Solving Parametric PDEs [4.409836695738518]
Recently, a deep learning architecture called the Fourier Neural Operator (FNO) has proved capable of learning the solutions of entire PDE families for arbitrary initial conditions given as input.
We propose quantum algorithms inspired by the classical FNO, which result in time complexity logarithmic in the number of evaluations.
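For context, the classical FNO building block that the quantum proposal starts from is a spectral convolution that mixes a truncated set of Fourier modes; the sketch below shows only that classical layer (the quantum algorithms are not reproduced).

```python
# Classical 1D spectral convolution in the style of an FNO layer (illustrative).
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    def __init__(self, channels: int, modes: int):
        super().__init__()
        self.modes = modes
        self.weight = nn.Parameter(
            torch.randn(channels, channels, modes, dtype=torch.cfloat) / channels
        )

    def forward(self, x):                 # x: (batch, channels, grid)
        x_ft = torch.fft.rfft(x)          # transform to frequency space
        out_ft = torch.zeros_like(x_ft)
        out_ft[:, :, : self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, : self.modes], self.weight
        )                                 # mix only the lowest Fourier modes
        return torch.fft.irfft(out_ft, n=x.size(-1))

y = SpectralConv1d(channels=8, modes=12)(torch.rand(4, 8, 64))   # shape (4, 8, 64)
```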
arXiv Detail & Related papers (2023-06-27T12:21:02Z) - Mixed formulation of physics-informed neural networks for thermo-mechanically coupled systems and heterogeneous domains [0.0]
Physics-informed neural networks (PINNs) are a new tool for solving boundary value problems.
Recent investigations have shown that when designing loss functions for many engineering problems, using first-order derivatives and combining equations from both strong and weak forms can lead to much better accuracy.
In this work, we propose applying the mixed formulation to solve multi-physical problems, specifically a stationary thermo-mechanically coupled system of equations.
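A minimal sketch of the mixed (first-order) formulation on a scalar model problem: the network predicts both the field u and its flux q, so the loss needs only first derivatives; the thermo-mechanically coupled system itself is not reproduced here.

```python
# Mixed-formulation loss sketch: enforce q = grad(u) and a first-order balance law.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 3))  # outputs (u, q_x, q_y)

def mixed_residuals(xy, source=1.0):
    xy = xy.clone().requires_grad_(True)
    out = net(xy)
    u, q = out[:, 0:1], out[:, 1:3]
    grad_u = torch.autograd.grad(u, xy, torch.ones_like(u), create_graph=True)[0]
    div_q = sum(
        torch.autograd.grad(q[:, i:i+1], xy, torch.ones_like(u), create_graph=True)[0][:, i:i+1]
        for i in range(2)
    )
    return q - grad_u, div_q + source     # constitutive and balance residuals

r_const, r_bal = mixed_residuals(torch.rand(128, 2))
loss = r_const.pow(2).mean() + r_bal.pow(2).mean()
loss.backward()
```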
arXiv Detail & Related papers (2023-02-09T21:56:59Z) - Quantum HyperNetworks: Training Binary Neural Networks in Quantum Superposition [16.1356415877484]
We introduce quantum hypernetworks as a mechanism to train binary neural networks on quantum computers.
We show that our approach effectively finds optimal parameters, hyperparameters and architectural choices with high probability on classification problems.
Our unified approach provides an immense scope for other applications in the field of machine learning.
arXiv Detail & Related papers (2023-01-19T20:06:48Z) - Tunable Complexity Benchmarks for Evaluating Physics-Informed Neural Networks on Coupled Ordinary Differential Equations [64.78260098263489]
In this work, we assess the ability of physics-informed neural networks (PINNs) to solve increasingly complex coupled ordinary differential equations (ODEs).
We show that PINNs eventually fail to produce correct solutions to these benchmarks as their complexity increases.
We identify several reasons why this may be the case, including insufficient network capacity, poor conditioning of the ODEs, and high local curvature, as measured by the Laplacian of the PINN loss.
arXiv Detail & Related papers (2022-10-14T15:01:32Z) - Decomposition of Matrix Product States into Shallow Quantum Circuits [62.5210028594015]
Tensor network (TN) algorithms can be mapped to parametrized quantum circuits (PQCs).
We propose a new protocol for approximating TN states using realistic quantum circuits.
Our results reveal one particular protocol, involving sequential growth and optimization of the quantum circuit, to outperform all other methods.
arXiv Detail & Related papers (2022-09-01T17:08:41Z) - Synergy Between Quantum Circuits and Tensor Networks: Short-cutting the Race to Practical Quantum Advantage [43.3054117987806]
We introduce a scalable procedure for harnessing classical computing resources to provide pre-optimized initializations for quantum circuits.
We show this method significantly improves the trainability and performance of PQCs on a variety of problems.
By demonstrating a means of boosting limited quantum resources using classical computers, our approach illustrates the promise of this synergy between quantum and quantum-inspired models in quantum computing.
arXiv Detail & Related papers (2022-08-29T15:24:03Z) - Auto-PINN: Understanding and Optimizing Physics-Informed Neural
Architecture [77.59766598165551]
Physics-informed neural networks (PINNs) are revolutionizing science and engineering practice by bringing the power of deep learning to bear on scientific computation.
Here, we propose Auto-PINN, which applies Neural Architecture Search (NAS) techniques to PINN design.
A comprehensive set of pre-experiments using standard PDE benchmarks allows us to probe the structure-performance relationship in PINNs.
arXiv Detail & Related papers (2022-05-27T03:24:31Z) - Revisiting PINNs: Generative Adversarial Physics-informed Neural
Networks and Point-weighting Method [70.19159220248805]
Physics-informed neural networks (PINNs) provide a deep learning framework for numerically solving partial differential equations (PDEs).
We propose the generative adversarial physics-informed neural network (GA-PINN), which integrates the generative adversarial (GA) mechanism with the structure of PINNs.
Inspired by the weighting strategy of the Adaboost method, we then introduce a point-weighting (PW) method to improve the training efficiency of PINNs.
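A hedged sketch of residual-driven point weighting, i.e. Adaboost-style upweighting of hard collocation points; the specific GA-PINN update rule may differ.

```python
# Collocation points with large PDE residuals receive larger loss weights (illustrative).
import torch

def update_weights(weights, residuals, eta: float = 0.5):
    # Upweight hard points (large residual), then renormalize to a distribution.
    w = weights * torch.exp(eta * residuals.abs() / (residuals.abs().mean() + 1e-8))
    return w / w.sum()

n = 256
weights = torch.full((n,), 1.0 / n)
residuals = torch.rand(n)                            # stand-in for per-point PDE residuals
weights = update_weights(weights, residuals)
weighted_loss = (weights * residuals.pow(2)).sum()   # replaces the usual mean-squared residual
```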
arXiv Detail & Related papers (2022-05-18T06:50:44Z) - Learning Physics-Informed Neural Networks without Stacked
Back-propagation [82.26566759276105]
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution by a Gaussian-smoothed model and show that, using Stein's identity, the second-order derivatives can be calculated efficiently without back-propagation.
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
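A Monte Carlo sketch of the mechanism: for a Gaussian-smoothed model u_sigma(x) = E[u(x + sigma*eps)], Stein's identity gives the gradient as E[u(x + sigma*eps) * eps] / sigma and the Hessian as E[u(x + sigma*eps) * (eps eps^T - I)] / sigma^2, so second derivatives need no nested back-propagation; sigma and the sample count below are illustrative.

```python
# Derivative estimation for a Gaussian-smoothed network via Stein's identity (illustrative).
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))

def stein_derivatives(x, sigma=0.1, samples=4096):
    eps = torch.randn(samples, x.numel())                 # eps ~ N(0, I)
    u = net(x[None, :] + sigma * eps)                     # (samples, 1)
    grad = (u * eps).mean(0) / sigma                      # gradient estimate, shape (dim,)
    outer = eps[:, :, None] * eps[:, None, :] - torch.eye(x.numel())
    hess = (u[:, :, None] * outer).mean(0) / sigma**2     # Hessian estimate, shape (dim, dim)
    return grad, hess

g, H = stein_derivatives(torch.tensor([0.3, 0.7]))
laplacian = H.diagonal().sum()    # e.g., for a diffusion-equation residual term
```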
arXiv Detail & Related papers (2022-02-18T18:07:54Z) - dNNsolve: an efficient NN-based PDE solver [62.997667081978825]
We introduce dNNsolve, which uses dual Neural Networks to solve ODEs/PDEs.
We show that dNNsolve is capable of solving a broad range of ODEs/PDEs in 1, 2 and 3 spacetime dimensions.
arXiv Detail & Related papers (2021-03-15T19:14:41Z)