Quantum Reservoir Computing with Neutral Atoms on a Small, Complex, Medical Dataset
- URL: http://arxiv.org/abs/2602.14641v1
- Date: Mon, 16 Feb 2026 11:03:31 GMT
- Title: Quantum Reservoir Computing with Neutral Atoms on a Small, Complex, Medical Dataset
- Authors: Luke Antoncich, Yuben Moodley, Ugo Varetto, Jingbo Wang, Jonathan Wurtz, Jing Chen, Pascal Jahan Elahi, Casey R. Myers,
- Abstract summary: We investigate quantum reservoir computing (QRC) using both noiseless emulation and hardware execution on the Rydberg processor *Aquila*. We find that models trained on emulated quantum features achieve mean test accuracies comparable to those trained on classical features, but have higher training accuracies and greater variability over data splits, consistent with overfitting. This combination of improved accuracy and increased stability is suggestive of a regularising effect induced by hardware execution.
- Score: 5.786250122999547
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Biomarker-based prediction of clinical outcomes is challenging due to nonlinear relationships, correlated features, and the limited size of many medical datasets. Classical machine-learning methods can struggle under these conditions, motivating the search for alternatives. In this work, we investigate quantum reservoir computing (QRC), using both noiseless emulation and hardware execution on the neutral-atom Rydberg processor \textit{Aquila}. We evaluate performance with six classical machine-learning models and use SHAP to generate feature subsets. We find that models trained on emulated quantum features achieve mean test accuracies comparable to those trained on classical features, but have higher training accuracies and greater variability over data splits, consistent with overfitting. When comparing hardware execution of QRC to noiseless emulation, the models are more robust over different data splits and often exhibit statistically significant improvements in mean test accuracy. This combination of improved accuracy and increased stability is suggestive of a regularising effect induced by hardware execution. To investigate the origin of this behaviour, we examine the statistical differences between hardware and emulated quantum feature distributions. We find that hardware execution applies a structured, time-dependent transformation characterised by compression toward the mean and a progressive reduction in mutual information relative to emulation.
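The pipeline the abstract describes, quantum reservoir features fed to a classical readout, can be illustrated with a toy, fully-emulated sketch. Everything below is a hypothetical stand-in: a random Hermitian matrix plays the role of the Rydberg reservoir dynamics, single-qubit Z expectation values are the extracted features, and a ridge-regularised linear readout stands in for the six classical models. It does not reproduce the paper's Aquila setup, dataset, or SHAP analysis.

```python
# Toy quantum-reservoir-computing feature pipeline (hypothetical stand-in,
# not the paper's Aquila hardware or model).
import numpy as np

rng = np.random.default_rng(0)
n_qubits = 4                      # toy reservoir size
dim = 2 ** n_qubits

# Fixed random Hermitian "reservoir Hamiltonian" (stand-in for Rydberg dynamics)
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2
E, V = np.linalg.eigh(H)          # precompute for fast evolution

# Z observable on each qubit, built via Kronecker products
Z = np.diag([1.0, -1.0])
I2 = np.eye(2)
def z_on(q):
    op = np.array([[1.0]])
    for k in range(n_qubits):
        op = np.kron(op, Z if k == q else I2)
    return op
observables = [z_on(q) for q in range(n_qubits)]

def quantum_features(x):
    """Encode scalar input x as an evolution time; return <Z_q> features."""
    psi0 = np.zeros(dim, complex)
    psi0[0] = 1.0
    psi = V @ (np.exp(-1j * E * x) * (V.conj().T @ psi0))
    return np.array([np.real(psi.conj() @ O @ psi) for O in observables])

# Toy dataset: binary labels from a nonlinear function of one "biomarker"
X = rng.uniform(0, 2, size=60)
y = (np.sin(3 * X) > 0).astype(float)
F = np.stack([quantum_features(x) for x in X])        # reservoir features

# Ridge-regularised linear readout (classical model on quantum features)
Fb = np.hstack([F, np.ones((len(F), 1))])
w = np.linalg.solve(Fb.T @ Fb + 1e-3 * np.eye(Fb.shape[1]), Fb.T @ y)
pred = (Fb @ w > 0.5).astype(float)
print("train accuracy:", (pred == y).mean())
```

The nonlinearity comes entirely from the fixed reservoir dynamics; only the linear readout is trained, which is what makes the reservoir-computing approach attractive for small datasets.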
Related papers
- Equivariant Evidential Deep Learning for Interatomic Potentials [55.6997213490859]
Uncertainty quantification is critical for assessing the reliability of machine learning interatomic potentials in molecular dynamics simulations. Existing UQ approaches for MLIPs are often limited by high computational cost or suboptimal performance. We propose *Equivariant Evidential Deep Learning for Interatomic Potentials* (e2IP), a backbone-agnostic framework that models atomic forces and their uncertainty jointly.
arXiv Detail & Related papers (2026-02-11T02:00:25Z) - Deep Learning Approaches to Quantum Error Mitigation [0.2504492508782606]
We present a systematic investigation of deep learning methods applied to quantum error mitigation of noisy output probability distributions. We identify sequence-to-sequence, attention-based models as the most effective on our datasets. Across several different circuit depths, our approach outperforms other baseline error mitigation techniques.
arXiv Detail & Related papers (2026-01-20T18:40:22Z) - Assessing the performance of correlation-based multi-fidelity neural emulators [0.0]
This study investigates the performance of multi-fidelity neural emulators: neural networks designed to learn the input-to-output mapping by integrating limited high-fidelity data with abundant low-fidelity model solutions.
arXiv Detail & Related papers (2025-12-02T15:31:21Z) - Data-Efficient Quantum Noise Modeling via Machine Learning [0.3279176777295314]
We introduce a data-efficient, machine learning-based framework to construct accurate, parameterized noise models for superconducting quantum processors. We show that a model trained exclusively on small-scale circuits accurately predicts the behavior of larger validation circuits.
arXiv Detail & Related papers (2025-09-16T10:30:28Z) - An Efficient Quantum Classifier Based on Hamiltonian Representations [50.467930253994155]
Quantum machine learning (QML) is a discipline that seeks to transfer the advantages of quantum computing to data-driven tasks. We propose an efficient approach that circumvents the costs associated with data encoding by mapping inputs to a finite set of Pauli strings. We evaluate our approach on text and image classification tasks, against well-established classical and quantum models.
arXiv Detail & Related papers (2025-04-13T11:49:53Z) - Robust Confinement State Classification with Uncertainty Quantification through Ensembled Data-Driven Methods [39.27649013012046]
We develop methods for confinement state classification with uncertainty quantification and model robustness. We focus on off-line analysis for TCV discharges, distinguishing L-mode, H-mode, and an in-between dithering phase (D). A dataset of 302 TCV discharges is fully labeled and will be publicly released.
arXiv Detail & Related papers (2025-02-24T18:25:22Z) - Modeling Quantum Machine Learning for Genomic Data Analysis [12.248184406275405]
Quantum Machine Learning (QML) continues to evolve, unlocking new opportunities for diverse applications. We investigate and evaluate the applicability of QML models for binary classification of genome sequence data by employing various feature mapping techniques. We present an open-source, independent Qiskit-based implementation to conduct experiments on a benchmark genomic dataset.
arXiv Detail & Related papers (2025-01-14T15:14:26Z) - A Hybrid Framework for Statistical Feature Selection and Image-Based Noise-Defect Detection [55.2480439325792]
This paper presents a hybrid framework that integrates statistical feature selection and classification techniques to improve defect detection accuracy. Around 55 distinguishing features are extracted from industrial images and analyzed using statistical methods. By integrating these methods with flexible machine learning applications, the proposed framework improves detection accuracy and reduces false positives and misclassifications.
arXiv Detail & Related papers (2024-12-11T22:12:21Z) - Convolutional Monge Mapping Normalization for learning on sleep data [63.22081662149488]
We propose a new method called Convolutional Monge Mapping Normalization (CMMN)
CMMN consists in filtering the signals in order to adapt their power spectrum density (PSD) to a Wasserstein barycenter estimated on training data.
Numerical experiments on sleep EEG data show that CMMN leads to significant and consistent performance gains independent from the neural network architecture.
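The CMMN operation described above can be sketched with a minimal numpy periodogram version. This is a toy illustration under assumptions: it uses the Gaussian-case Wasserstein barycenter of PSDs (the square of the mean amplitude spectrum) and a raw periodogram, ignoring the Welch estimation and filter-length choices a real implementation would make.

```python
# Toy CMMN-style mapping: filter a signal so its periodogram matches the
# Wasserstein barycenter of the training PSDs (assumptions noted above).
import numpy as np

rng = np.random.default_rng(0)

def psd(x):
    # Simple periodogram estimate of the power spectral density.
    return np.abs(np.fft.rfft(x)) ** 2 / len(x)

# Toy "training" signals with deliberately different power levels
signals = [rng.normal(size=256) * s for s in (0.5, 1.0, 2.0)]
psds = np.stack([psd(x) for x in signals])

# Barycenter of the PSDs: square of the mean amplitude spectrum
bary = np.mean(np.sqrt(psds), axis=0) ** 2

def cmmn_map(x):
    """Filter x in the frequency domain so its periodogram matches bary."""
    X = np.fft.rfft(x)
    h = np.sqrt(bary / np.maximum(psd(x), 1e-12))   # real frequency response
    return np.fft.irfft(h * X, n=len(x))

mapped = cmmn_map(signals[2])
```

Because the filter rescales each frequency bin by `sqrt(bary / psd)`, the mapped signal's periodogram equals the barycenter exactly in this toy setting; on held-out data the match is only approximate, which is the domain-adaptation effect the paper exploits.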
arXiv Detail & Related papers (2023-05-30T08:24:01Z) - Importance sampling for stochastic quantum simulations [68.8204255655161]
We introduce the qDrift protocol, which builds random product formulas by sampling from the Hamiltonian according to the coefficients.
We show that the simulation cost can be reduced while achieving the same accuracy, by considering the individual simulation cost during the sampling stage.
Results are confirmed by numerical simulations performed on a lattice nuclear effective field theory.
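The sampling idea can be illustrated with a tiny dense-matrix sketch of plain qDrift: terms of a hypothetical two-qubit Hamiltonian are drawn with probability proportional to their coefficients, and each draw contributes a short evolution of fixed duration. This is the baseline protocol only, without the cost-aware sampling refinement the paper studies.

```python
# Plain qDrift sketch on a hypothetical two-qubit Hamiltonian H = sum_j h_j P_j.
import numpy as np

rng = np.random.default_rng(1)

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)
terms = [
    (0.8, np.kron(Z, Z)),   # (coefficient h_j, Pauli string P_j)
    (0.5, np.kron(X, I2)),
    (0.3, np.kron(I2, X)),
]
coeffs = np.array([c for c, _ in terms])
lam = coeffs.sum()              # lambda = sum_j h_j
probs = coeffs / lam            # qDrift sampling distribution

def expm_herm(M, t):
    """exp(-i t M) for Hermitian M via eigendecomposition."""
    E, V = np.linalg.eigh(M)
    return V @ np.diag(np.exp(-1j * t * E)) @ V.conj().T

def qdrift_unitary(t, n_samples):
    """Random product formula: each step applies exp(-i (lam t / N) P_j)."""
    U = np.eye(4, dtype=complex)
    tau = lam * t / n_samples
    for _ in range(n_samples):
        j = rng.choice(len(terms), p=probs)
        U = expm_herm(terms[j][1], tau) @ U
    return U

H = sum(c * P for c, P in terms)
U_exact = expm_herm(H, 0.1)
U_qdrift = qdrift_unitary(0.1, 200)
err = np.linalg.norm(U_exact - U_qdrift, 2)   # spectral-norm deviation
```

Each step's duration is `lam * t / N` regardless of which term is drawn; the coefficient magnitudes enter only through the sampling probabilities, which is the importance-sampling trick that keeps the expected evolution correct.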
arXiv Detail & Related papers (2022-12-12T15:06:32Z) - Multi-fidelity Hierarchical Neural Processes [79.0284780825048]
Multi-fidelity surrogate modeling reduces the computational cost by fusing different simulation outputs.
We propose Multi-fidelity Hierarchical Neural Processes (MF-HNP), a unified neural latent variable model for multi-fidelity surrogate modeling.
We evaluate MF-HNP on epidemiology and climate modeling tasks, achieving competitive performance in terms of accuracy and uncertainty estimation.
arXiv Detail & Related papers (2022-06-10T04:54:13Z) - ClusterQ: Semantic Feature Distribution Alignment for Data-Free Quantization [111.12063632743013]
We propose a new and effective data-free quantization method termed ClusterQ.
To obtain high inter-class separability of semantic features, we cluster and align the feature distribution statistics.
We also incorporate the intra-class variance to solve class-wise mode collapse.
arXiv Detail & Related papers (2022-04-30T06:58:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.