On the Design of Expressive and Trainable Pulse-based Quantum Machine Learning Models
- URL: http://arxiv.org/abs/2508.05559v1
- Date: Thu, 07 Aug 2025 16:40:09 GMT
- Title: On the Design of Expressive and Trainable Pulse-based Quantum Machine Learning Models
- Authors: Han-Xiao Tao, Xin Wang, Re-Bing Wu
- Abstract summary: Pulse-based Quantum Machine Learning (QML) has emerged as a novel paradigm in quantum artificial intelligence. For practical applications, pulse-based models must be both expressive and trainable. This paper investigates the requirements for pulse-based QML models to be expressive while preserving trainability.
- Score: 4.852613028421959
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Pulse-based Quantum Machine Learning (QML) has emerged as a novel paradigm in quantum artificial intelligence due to its exceptional hardware efficiency. For practical applications, pulse-based models must be both expressive and trainable. Previous studies suggest that pulse-based models under dynamic symmetry can be effectively trained, thanks to a favorable loss landscape that has no barren plateaus. However, the resulting uncontrollability may compromise expressivity when the model is inadequately designed. This paper investigates the requirements for pulse-based QML models to be expressive while preserving trainability. We present a necessary condition pertaining to the system's initial state, the measurement observable, and the underlying dynamical symmetry Lie algebra, supported by numerical simulations. Our findings establish a framework for designing practical pulse-based QML models that balance expressivity and trainability.
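The necessary condition described above connects the initial state and measured observable to the dynamical symmetry Lie algebra of the system. As a hedged, minimal numerical sketch (a generic diagnostic, not the paper's construction), one can estimate the dimension of the dynamical Lie algebra generated by a model's drift and control Hamiltonians by taking nested commutators until the span stops growing; a dimension below that of the full algebra signals a dynamical symmetry restricting the reachable set:

```python
# Illustrative sketch: dimension of the dynamical Lie algebra generated
# by anti-Hermitian generators iH0, iH1, ... via nested commutators.
# Saturating dim su(2^n) indicates full controllability; a proper
# subalgebra indicates a dynamical symmetry. Names and the single-qubit
# example are hypothetical conveniences, not taken from the paper.
import numpy as np

def lie_algebra_dim(generators, tol=1e-10, max_iter=20):
    # Maintain a linearly independent basis of the generated algebra.
    basis = [G / np.linalg.norm(G) for G in generators]
    for _ in range(max_iter):
        new = []
        for A in basis:
            for B in basis:
                C = A @ B - B @ A  # commutator [A, B]
                stacked = np.vstack([M.reshape(1, -1)
                                     for M in basis + new + [C]])
                # Keep C only if it enlarges the span.
                if np.linalg.matrix_rank(stacked, tol) > len(basis) + len(new):
                    new.append(C / np.linalg.norm(C))
        if not new:
            break  # closure reached
        basis.extend(new)
    return len(basis)

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
# Drift Z plus control X generates all of su(2), dimension 3.
print(lie_algebra_dim([1j * Z, 1j * X]))
```

With a single generator the loop terminates immediately (every commutator with itself vanishes), returning dimension 1, which corresponds to an uncontrollable one-parameter symmetry.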
Related papers
- Higher order quantum reservoir computing for non-intrusive reduced-order models [0.0]
The quantum reservoir computing (QRC) technique is a hybrid quantum-classical framework employing an ensemble of interconnected small quantum systems.
We show that QRC is able to predict complex nonlinear dynamical systems in a stable and accurate manner.
arXiv Detail & Related papers (2024-07-31T13:37:04Z) - On the Role of Controllability in Pulse-based Quantum Machine Learning Models [0.0]
We show that the trade-off is closely related to the controllability of the underlying pulse-based models.
We show that increasing dimensionality enhances expressivity while avoiding barren plateaus, provided the model is designed with limited controllability on a submanifold.
arXiv Detail & Related papers (2024-05-15T07:02:41Z) - Unleashing the Expressive Power of Pulse-Based Quantum Neural Networks [0.46085106405479537]
Quantum machine learning (QML) based on Noisy Intermediate-Scale Quantum (NISQ) devices hinges on the optimal utilization of limited quantum resources.
Gate-based QML models are user-friendly for software engineers, whereas pulse-based models enable the construction of "infinitely" deep quantum neural networks within the same time.
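The "infinitely deep" phrasing can be made concrete with a hedged toy sketch (illustrative only, not the paper's model): a pulse-based model evolves a state under piecewise-constant control amplitudes, U = Π_k exp(-i (H0 + u_k Hc) Δt), and reads out an observable expectation; refining Δt deepens the effective circuit without adding gates.

```python
# Hypothetical single-qubit pulse-based model: the trainable parameters
# are pulse amplitudes u_k applied over time slices of length dt.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def expm_herm(H, dt):
    """exp(-i H dt) for Hermitian H via its eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * dt)) @ V.conj().T

def pulse_model(pulses, H0=Z, Hc=X, O=Z, dt=0.1):
    psi = np.array([1, 0], dtype=complex)      # initial state |0>
    for u in pulses:                           # piecewise-constant drive
        psi = expm_herm(H0 + u * Hc, dt) @ psi
    return float(np.real(psi.conj() @ O @ psi))  # <psi|O|psi>

# Undriven evolution: |0> remains a Z eigenstate, expectation stays +1.
print(pulse_model(np.zeros(10)))
```

Shrinking `dt` while lengthening `pulses` gives an arbitrarily fine time discretization of the same continuous-time evolution, which is the sense in which pulse-based depth is "free".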
arXiv Detail & Related papers (2024-02-05T10:47:46Z) - Exploring Model Transferability through the Lens of Potential Energy [78.60851825944212]
Transfer learning has become crucial in computer vision tasks due to the vast availability of pre-trained deep learning models.
Existing methods for measuring the transferability of pre-trained models rely on statistical correlations between encoded static features and task labels.
We present an insightful physics-inspired approach named PED to address these challenges.
arXiv Detail & Related papers (2023-08-29T07:15:57Z) - End-to-End Reinforcement Learning of Koopman Models for Economic Nonlinear Model Predictive Control [45.84205238554709]
We present a method for reinforcement learning of Koopman surrogate models for optimal performance as part of (e)NMPC.
We show that the end-to-end trained models outperform those trained using system identification in (e)NMPC.
arXiv Detail & Related papers (2023-08-03T10:21:53Z) - Stabilizing Machine Learning Prediction of Dynamics: Noise and
Noise-inspired Regularization [58.720142291102135]
Recent work has shown that machine learning (ML) models can be trained to accurately forecast the dynamics of chaotic dynamical systems.
In the absence of mitigating techniques, this approach can result in artificially rapid error growth, leading to inaccurate predictions and/or climate instability.
We introduce Linearized Multi-Noise Training (LMNT), a regularization technique that deterministically approximates the effect of many small, independent noise realizations added to the model input during training.
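The core idea, that a deterministic penalty can stand in for many random input-noise draws, can be sketched generically (this is a textbook second-order equivalence used for illustration, not the LMNT algorithm itself): for small input noise eps ~ N(0, s²I), the expected increase in squared-error loss is governed by the model's Jacobian, which for a linear map f(x) = Wx is exactly s²·||W||_F².

```python
# Hedged illustration: Monte-Carlo input-noise loss vs. its deterministic
# equivalent for a linear model f(x) = W x. With target y = W x, the
# expected noisy loss E||W(x + eps) - y||^2 equals s^2 * ||W||_F^2.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4))
x = rng.standard_normal(4)
y = W @ x            # perfect target, so the noise-free loss is zero
s = 1e-2             # input-noise standard deviation

# Empirical average over many small independent noise realizations.
noisy = np.mean([np.sum((W @ (x + s * rng.standard_normal(4)) - y) ** 2)
                 for _ in range(50_000)])

# Deterministic replacement: no sampling required.
det = s ** 2 * np.sum(W ** 2)
print(noisy, det)    # the two agree to within Monte-Carlo error
```

For a nonlinear model the same reasoning applies with W replaced by the local Jacobian, which is why a linearized penalty can approximate the regularizing effect of training-time noise deterministically.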
arXiv Detail & Related papers (2022-11-09T23:40:52Z) - Real-time Neural-MPC: Deep Learning Model Predictive Control for
Quadrotors and Agile Robotic Platforms [59.03426963238452]
We present Real-time Neural MPC, a framework to efficiently integrate large, complex neural network architectures as dynamics models within a model-predictive control pipeline.
We show the feasibility of our framework on real-world problems by reducing the positional tracking error by up to 82% when compared to state-of-the-art MPC approaches without neural network dynamics.
arXiv Detail & Related papers (2022-03-15T09:38:15Z) - Learning Stochastic Dynamics with Statistics-Informed Neural Network [0.4297070083645049]
We introduce a machine-learning framework named statistics-informed neural network (SINN) for learning dynamics from data.
We devise mechanisms for training the neural network model to reproduce the correct statistical behavior of a target process.
We show that the obtained reduced-order model can be trained on temporally coarse-grained data and hence is well suited for rare-event simulations.
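One way to picture a statistics-informed training signal (a generic hedged sketch, not the SINN architecture) is a loss that compares summary statistics, such as the autocorrelation function, of model-generated trajectories against those of the data, so the model is pushed to match statistical behavior rather than individual sample paths:

```python
# Illustrative statistics-matching loss: mean-squared difference between
# the normalized autocorrelation functions of two trajectories.
import numpy as np

def autocorr(x, max_lag):
    """Normalized autocorrelation of a 1-D trajectory up to max_lag."""
    x = x - x.mean()
    return np.array([np.mean(x[:len(x) - k] * x[k:])
                     for k in range(max_lag)]) / np.var(x)

def stats_loss(model_traj, data_traj, max_lag=20):
    return np.mean((autocorr(model_traj, max_lag)
                    - autocorr(data_traj, max_lag)) ** 2)

rng = np.random.default_rng(1)
traj = rng.standard_normal(5000)
# Identical trajectories have identical statistics: loss is exactly zero.
print(stats_loss(traj, traj))
```

Because the loss depends only on aggregate statistics, it remains meaningful on temporally coarse-grained data, consistent with the reduced-order-model use case described above.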
arXiv Detail & Related papers (2022-02-24T18:21:01Z) - Learning continuous models for continuous physics [94.42705784823997]
We develop a test based on numerical analysis theory to validate machine learning models for science and engineering applications.
Our results illustrate how principled numerical analysis methods can be coupled with existing ML training/testing methodologies to validate models for science and engineering applications.
arXiv Detail & Related papers (2022-02-17T07:56:46Z) - Deep Bayesian Active Learning for Accelerating Stochastic Simulation [74.58219903138301]
Interactive Neural Process (INP) is a deep active learning framework for simulations.
For active learning, we propose a novel acquisition function, Latent Information Gain (LIG), calculated in the latent space of NP based models.
The results demonstrate STNP outperforms the baselines in the learning setting and LIG achieves the state-of-the-art for active learning.
arXiv Detail & Related papers (2021-06-05T01:31:51Z) - Physics-Integrated Variational Autoencoders for Robust and Interpretable
Generative Modeling [86.9726984929758]
We focus on the integration of incomplete physics models into deep generative models.
We propose a VAE architecture in which a part of the latent space is grounded by physics.
We demonstrate generative performance improvements over a set of synthetic and real-world datasets.
arXiv Detail & Related papers (2021-02-25T20:28:52Z) - Prediction with Approximated Gaussian Process Dynamical Models [7.678864239473703]
We present approximated GPDMs, which are Markovian, and analyze their control-theoretic properties.
The outcomes are illustrated with numerical examples that show the power of the approximated models.
arXiv Detail & Related papers (2020-06-25T16:51:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.