Quantum Dimension Reduction of Hidden Markov Models
- URL: http://arxiv.org/abs/2601.16126v1
- Date: Thu, 22 Jan 2026 17:27:52 GMT
- Title: Quantum Dimension Reduction of Hidden Markov Models
- Authors: Rishi Sundar, Thomas Elliott
- Abstract summary: We introduce a pipeline by which any finite, ergodic HMM can be compressed. We demonstrate the method on both a simple toy model and a speech-derived HMM trained from data.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hidden Markov models (HMMs) are ubiquitous in time-series modelling, with applications ranging from chemical reaction modelling to speech recognition. These HMMs are often large, with high-dimensional memories. A recently proposed application of quantum technologies is to execute quantum analogues of HMMs. Such quantum HMMs (QHMMs) are strictly more expressive than their classical counterparts, enabling the construction of more parsimonious models of stochastic processes. However, state-of-the-art techniques for QHMM compression, based on tensor networks, are only applicable to a restricted subset of HMMs, where the transitions are deterministic. In this work we introduce a pipeline by which \emph{any} finite, ergodic HMM can be compressed in this manner, providing a route for effective quantum dimension reduction of general HMMs. We demonstrate the method on both a simple toy model and a speech-derived HMM trained from data, obtaining favourable memory-accuracy trade-offs compared to classical compression approaches.
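For orientation, the sketch below is a minimal illustration of the quantity such compression targets, not the paper's pipeline: it computes the stationary distribution of a made-up two-state HMM and its classical memory cost, taken here as the Shannon entropy of the hidden-state distribution that a classical implementation carries between steps. Quantum encodings of the memory states can often run the same process using less memory than this classical figure.

```python
import numpy as np

# Hypothetical two-state HMM over the alphabet {0, 1}.
# T[x][j, i] = p(emit x, go to state j | currently in state i),
# so T[0] + T[1] is a column-stochastic transition matrix.
p = 0.3
T = {
    0: np.array([[1 - p, 0.0],
                 [0.0,   0.5]]),
    1: np.array([[0.0,   0.5],
                 [p,     0.0]]),
}
M = T[0] + T[1]

# Stationary distribution: the eigenvector of M with eigenvalue 1.
evals, evecs = np.linalg.eig(M)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
pi /= pi.sum()

# Classical memory cost in bits: Shannon entropy of the stationary
# hidden-state distribution (an illustrative measure, not necessarily
# the paper's trade-off metric).
classical_bits = -float(np.sum(pi * np.log2(pi)))
print("stationary distribution:", pi)
print(f"classical memory cost: {classical_bits:.3f} bits")
```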
Related papers
- A new quantum machine learning algorithm: split hidden quantum Markov model inspired by quantum conditional master equation [14.262911696419934]
We introduce the split HQMM (SHQMM) for implementing the hidden quantum Markov process.
Experimental results suggest our model outperforms previous models in terms of scope of applications and robustness.
arXiv Detail & Related papers (2023-07-17T16:55:26Z)
- Hard-normal Example-aware Template Mutual Matching for Industrial Anomaly Detection [78.734927709231]
Anomaly detectors are widely used in industrial manufacturing to detect and localize unknown defects in query images.
These detectors are trained on anomaly-free samples and have successfully distinguished anomalies from most normal samples.
However, hard-normal examples are scattered and far apart from most normal samples, and thus they are often mistaken for anomalies by existing methods.
arXiv Detail & Related papers (2023-03-28T17:54:56Z)
- A Framework for Demonstrating Practical Quantum Advantage: Racing Quantum against Classical Generative Models [62.997667081978825]
We build on a previously proposed framework for evaluating the generalization performance of generative models.
We establish the first comparative race towards practical quantum advantage (PQA) between classical and quantum generative models.
Our results suggest that QCBMs are more efficient in the data-limited regime than the other state-of-the-art classical generative models.
arXiv Detail & Related papers (2023-03-27T22:48:28Z)
- Implementation and Learning of Quantum Hidden Markov Models [0.0]
We use the theory of quantum channels and open quantum systems to provide an efficient unitary characterization of a class of generators known as quantum hidden Markov models (QHMMs).
We prove that QHMMs are more compact and more expressive definitions of process languages than the equivalent classical hidden Markov models (HMMs).
We propose two practical learning algorithms for QHMMs.
arXiv Detail & Related papers (2022-12-07T17:25:02Z)
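To make the QHMM formalism in the entry above concrete, here is a generic sampling sketch in which a quantum memory state evolves under Kraus operators and symbol probabilities come from the Born rule. The two-dimensional Kraus operators are invented for illustration; this is not the paper's unitary characterization or either of its learning algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-dimensional QHMM over the alphabet {0, 1}:
# Kraus operators K[x] with sum_x K[x]^dagger K[x] = identity.
theta, phi = 0.3, 1.1
K = {
    0: np.array([[np.cos(theta), 0.0],
                 [0.0,           np.cos(phi)]]),
    1: np.array([[0.0,           np.sin(phi)],
                 [np.sin(theta), 0.0]]),
}
assert np.allclose(sum(k.conj().T @ k for k in K.values()), np.eye(2))

def sample(rho, length):
    """Emit x with p(x) = Tr(K_x rho K_x^dagger), then update the
    memory state rho -> K_x rho K_x^dagger / p(x)."""
    out = []
    for _ in range(length):
        probs = [np.real(np.trace(K[x] @ rho @ K[x].conj().T)) for x in (0, 1)]
        x = int(rng.choice(2, p=probs))
        rho = K[x] @ rho @ K[x].conj().T / probs[x]
        out.append(x)
    return out

print(sample(np.eye(2) / 2, 20))   # maximally mixed initial memory state
```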
- Learning Hidden Markov Models When the Locations of Missing Observations are Unknown [54.40592050737724]
We consider the general problem of learning an HMM from data with unknown missing observation locations.
We provide reconstruction algorithms that do not require any assumptions about the structure of the underlying chain.
We show that under proper specifications one can reconstruct the process dynamics as well as if the missing observation positions were known.
arXiv Detail & Related papers (2022-03-12T22:40:43Z)
- Mixed Precision Low-bit Quantization of Neural Network Language Models for Speech Recognition [67.95996816744251]
State-of-the-art language models (LMs) represented by long short-term memory recurrent neural networks (LSTM-RNNs) and Transformers are becoming increasingly complex and expensive for practical applications.
Current quantization methods are based on uniform precision and fail to account for the varying sensitivity of different parts of LMs to quantization errors.
Novel mixed precision neural network LM quantization methods are proposed in this paper.
arXiv Detail & Related papers (2021-11-29T12:24:02Z)
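As a generic illustration of the mixed-precision idea in the entry above, the sketch below assigns each part of a model the smallest bit-width whose sensitivity-weighted quantization error stays under a budget. The layer names, weight statistics, sensitivity scores, and budget are all invented; the paper's actual sensitivity measure and bit-allocation scheme may differ.

```python
import numpy as np

def quantize(w, bits):
    """Uniform symmetric quantization of a weight tensor to `bits` bits."""
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

rng = np.random.default_rng(0)

# Invented per-layer weights and sensitivity scores (how strongly the
# task error is assumed to react to quantization noise in that part).
layers = {"embedding":   (rng.normal(0, 0.02, (5000, 32)), 1.0),
          "lstm_gates":  (rng.normal(0, 0.10, (512, 512)), 30.0),
          "output_proj": (rng.normal(0, 0.05, (32, 5000)), 5.0)}

budget = 1e-3   # made-up tolerance on sensitivity-weighted relative error
for name, (w, sensitivity) in layers.items():
    for bits in (2, 4, 8, 16):
        rel_err = np.mean((w - quantize(w, bits)) ** 2) / np.mean(w ** 2)
        if sensitivity * rel_err < budget:
            break
    print(f"{name:12s} -> {bits}-bit (weighted error {sensitivity * rel_err:.1e})")
```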
- Memory compression and thermal efficiency of quantum implementations of non-deterministic hidden Markov models [0.0]
We provide a systematic prescription for constructing quantum implementations of non-deterministic HMMs.
We show that our implementations will both mitigate some of the thermal dissipation associated with classical implementations and achieve an advantage in memory compression.
arXiv Detail & Related papers (2021-05-13T13:32:25Z)
- Error mitigation and quantum-assisted simulation in the error corrected regime [77.34726150561087]
A standard approach to quantum computing is based on the idea of promoting a classically simulable and fault-tolerant set of operations to universal quantum computation.
We show how the addition of noisy magic resources allows one to boost classical quasiprobability simulations of a quantum circuit.
arXiv Detail & Related papers (2021-03-12T20:58:41Z)
- Robust Classification using Hidden Markov Models and Mixtures of Normalizing Flows [25.543231171094384]
We use a generative model that combines the state transitions of a hidden Markov model (HMM) with neural-network-based probability distributions for the observations emitted from the HMM's hidden states.
We verify the improved robustness of NMM-HMM classifiers in an application to speech recognition.
arXiv Detail & Related papers (2021-02-15T00:40:30Z)
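A minimal sketch of the likelihood evaluation that underlies such HMM-based classifiers: the forward algorithm in log space with per-state emission densities. The transition matrix, initial distribution, and Gaussian emissions below are made up, with the Gaussians standing in for the mixtures of normalizing flows used in the paper.

```python
import numpy as np

# Invented two-state HMM parameters.
A = np.array([[0.9, 0.1],        # A[i, j] = p(next state j | state i)
              [0.2, 0.8]])
pi0 = np.array([0.5, 0.5])       # initial state distribution
means, stds = np.array([-1.0, 2.0]), np.array([0.7, 1.2])

def emission_logpdf(x):
    """log p(x | state) for each state; a normalizing flow would replace this."""
    return -0.5 * ((x - means) / stds) ** 2 - np.log(stds * np.sqrt(2 * np.pi))

def log_likelihood(xs):
    """Forward algorithm in log space: returns log p(x_1, ..., x_T)."""
    log_alpha = np.log(pi0) + emission_logpdf(xs[0])
    for x in xs[1:]:
        m = log_alpha.max()                       # log-sum-exp stabilizer
        log_alpha = m + np.log(np.exp(log_alpha - m) @ A) + emission_logpdf(x)
    m = log_alpha.max()
    return m + np.log(np.sum(np.exp(log_alpha - m)))

print(log_likelihood(np.array([-0.8, -1.2, 1.9, 2.3])))
```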
- Scaling Hidden Markov Language Models [118.55908381553056]
This work revisits the challenge of scaling HMMs to language modeling datasets.
We propose methods for scaling HMMs to massive state spaces while maintaining efficient exact inference, a compact parameterization, and effective regularization.
arXiv Detail & Related papers (2020-11-09T18:51:55Z)
- Q-GADMM: Quantized Group ADMM for Communication Efficient Decentralized Machine Learning [66.18202188565922]
We propose a communication-efficient decentralized machine learning (ML) algorithm, coined quantized group ADMM (Q-GADMM).
We develop a novel quantization method to adaptively adjust model quantization levels and their probabilities, while proving the convergence of Q-GADMM for convex functions.
arXiv Detail & Related papers (2019-10-23T10:47:06Z)
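The unbiased stochastic rounding below is a common choice for the kind of quantizer used to cut communication in such decentralized schemes; it is a generic illustration, not Q-GADMM's adaptive adjustment of quantization levels and probabilities, and the bit-width and example update are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_quantize(delta, bits):
    """Unbiased stochastic quantization of a model-update vector: each entry
    is rounded up or down to the nearest point of a (2**bits)-point grid with
    probabilities chosen so that the expectation equals the input."""
    levels = 2 ** bits - 1
    scale = np.max(np.abs(delta)) or 1.0
    normalized = (delta / scale + 1.0) / 2.0 * levels   # map to [0, levels]
    lower = np.floor(normalized)
    q = lower + (rng.random(delta.shape) < normalized - lower)
    return (q / levels * 2.0 - 1.0) * scale             # map back and rescale

# Example: a worker quantizes its local update before communicating it.
update = rng.normal(size=8)
print(update)
print(stochastic_quantize(update, bits=3))
```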
This list is automatically generated from the titles and abstracts of the papers on this site.