Learning Circular Hidden Quantum Markov Models: A Tensor Network
Approach
- URL: http://arxiv.org/abs/2111.01536v1
- Date: Fri, 29 Oct 2021 23:09:31 GMT
- Title: Learning Circular Hidden Quantum Markov Models: A Tensor Network
Approach
- Authors: Mohammad Ali Javidian, Vaneet Aggarwal, Zubin Jacob
- Abstract summary: We show that c-HQMMs are equivalent to a constrained tensor network.
This equivalence enables us to provide an efficient learning model for c-HQMMs.
The proposed learning approach is evaluated on six real datasets.
- Score: 34.77250498401055
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose circular Hidden Quantum Markov Models (c-HQMMs),
which can be applied for modeling temporal data in quantum datasets (with
classical datasets as a special case). We show that c-HQMMs are equivalent to a
constrained tensor network (more precisely, circular Local Purified State with
positive-semidefinite decomposition) model. This equivalence enables us to
provide an efficient learning model for c-HQMMs. The proposed learning approach
is evaluated on six real datasets and demonstrates the advantage of c-HQMMs on
multiple datasets as compared to HQMMs, circular HMMs, and HMMs.
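To make the tensor-network equivalence concrete, here is a minimal numpy sketch, not the authors' code, of how a circular Local Purified State assigns (unnormalized) probabilities to observation sequences: each symbol contributes a completely positive transfer matrix built from Kraus-like tensors, and the circular structure closes the contraction with a trace. The dimensions, tensor values, and function names are toy assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
D, n_obs, n_kraus = 3, 2, 2  # bond dimension, alphabet size, Kraus rank (toy)

# Random Kraus-like tensors, one bundle per observation symbol.
K = rng.normal(size=(n_obs, n_kraus, D, D)) \
    + 1j * rng.normal(size=(n_obs, n_kraus, D, D))

def transfer(y):
    """Transfer matrix E_y = sum_k K_{y,k} (x) conj(K_{y,k}): a CP map."""
    return sum(np.kron(K[y, k], K[y, k].conj()) for k in range(n_kraus))

def prob(seq):
    """Unnormalized probability: trace of the ring of transfer matrices."""
    E = np.eye(D * D, dtype=complex)
    for y in seq:
        E = transfer(y) @ E
    return np.trace(E).real  # circular boundary condition = trace

# Normalizing over all length-3 sequences yields a valid distribution.
seqs = [(a, b, c) for a in range(n_obs) for b in range(n_obs)
        for c in range(n_obs)]
Z = sum(prob(s) for s in seqs)
print([round(prob(s) / Z, 4) for s in seqs])
```

Because each transfer matrix is a sum of K ⊗ conj(K) terms, every traced ring is a sum of squared trace magnitudes, which is where the positive-semidefinite structure mentioned in the abstract guarantees nonnegative probabilities.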
Related papers
- A new quantum machine learning algorithm: split hidden quantum Markov model inspired by quantum conditional master equation [14.262911696419934]
We introduce the split HQMM (SHQMM) for implementing the hidden quantum Markov process.
Experimental results suggest our model outperforms previous models in terms of scope of applications and robustness.
arXiv Detail & Related papers (2023-07-17T16:55:26Z)
- Learning Hidden Markov Models Using Conditional Samples [72.20944611510198]
This paper is concerned with the computational complexity of learning Hidden Markov Models (HMMs).
We consider an interactive access model in which the algorithm can query for samples from the conditional distributions of the HMM.
Specifically, we obtain efficient algorithms for learning HMMs in settings where we have query access to the exact conditional probabilities.
arXiv Detail & Related papers (2023-02-28T16:53:41Z)
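As a concrete reading of this access model, the sketch below (our illustration with a toy HMM, not the paper's algorithm) shows what an exact conditional-probability oracle can answer: given a prefix of observations, it returns the distribution of the next observation by filtering the belief state.

```python
import numpy as np

# Toy HMM: T[i, j] = P(next state j | state i), O[i, y] = P(obs y | state i).
T = np.array([[0.9, 0.1], [0.2, 0.8]])
O = np.array([[0.7, 0.3], [0.1, 0.9]])
pi = np.array([0.5, 0.5])

def next_obs_distribution(prefix):
    """Exact P(y_{t+1} = . | y_1..y_t) via the forward recursion."""
    belief = pi.copy()
    for y in prefix:
        belief = belief * O[:, y]       # condition on the observation
        belief = belief / belief.sum()  # normalize the posterior
        belief = belief @ T             # propagate one step
    return belief @ O                   # marginal over the next observation

print(next_obs_distribution([0, 0, 1]))  # a distribution summing to 1
```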
- A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations on toy and real-world datasets using the Qiskit quantum computing SDK.
arXiv Detail & Related papers (2022-11-23T18:25:32Z)
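For intuition, here is a plain-numpy sketch of single-qubit data re-uploading (the paper works in Qiskit; this state-vector version and its untrained toy parameters are our simplification): the input x is re-encoded by a parameterized rotation at every layer, which is what lets one qubit express nonlinear decision boundaries.

```python
import numpy as np

def ry(theta):
    """Single-qubit Y rotation."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def classify(x, weights, biases):
    """Apply L layers of RY(w_l * x + b_l) to |0>, return P(measure |1>)."""
    state = np.array([1.0, 0.0])
    for w, b in zip(weights, biases):
        state = ry(w * x + b) @ state  # data re-uploaded at every layer
    return abs(state[1]) ** 2

# Toy, untrained parameters: three re-uploading layers.
weights, biases = [1.3, -0.7, 2.1], [0.2, 0.5, -0.1]
print(classify(0.8, weights, biases))  # a probability in [0, 1]
```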
- Fuzzy Cognitive Maps and Hidden Markov Models: Comparative Analysis of Efficiency within the Confines of the Time Series Classification Task [0.0]
We explore the application of Hidden Markov Models (HMMs) to time series classification.
We identify four models, HMM NN (one HMM per series), HMM 1C (one HMM per class), FCM NN, and FCM 1C, which are then studied in a series of experiments.
arXiv Detail & Related papers (2022-04-28T12:41:05Z)
- Learning Hidden Markov Models When the Locations of Missing Observations are Unknown [54.40592050737724]
We consider the general problem of learning an HMM from data with unknown missing observation locations.
We provide reconstruction algorithms that do not require any assumptions about the structure of the underlying chain.
We show that under proper specifications one can reconstruct the process dynamics as well as if the positions of the missing observations were known.
arXiv Detail & Related papers (2022-03-12T22:40:43Z)
- Robust Classification using Hidden Markov Models and Mixtures of Normalizing Flows [25.543231171094384]
We use a generative model that combines the state transitions of a hidden Markov model (HMM) with neural-network-based probability distributions for the HMM's hidden states.
We verify the improved robustness of NMM-HMM classifiers in an application to speech recognition.
arXiv Detail & Related papers (2021-02-15T00:40:30Z)
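The sketch below illustrates the combination as we read it (not the authors' NMM-HMM code): HMM transitions are kept, but each hidden state scores observations with a flow-style density, here a simple per-state affine flow whose log-density follows from the change-of-variables formula. All parameter values are toy assumptions.

```python
import numpy as np

T = np.array([[0.8, 0.2], [0.3, 0.7]])  # state transitions (toy values)
pi = np.array([0.6, 0.4])
mu, sigma = np.array([-1.0, 2.0]), np.array([0.5, 1.5])  # per-state flow params

def emission_logpdf(x):
    """log p(x | state) for an affine flow z = (x - mu) / sigma with a
    standard-normal base; -log(sigma) is the change-of-variables term."""
    z = (x - mu) / sigma
    return -0.5 * (z ** 2 + np.log(2 * np.pi)) - np.log(sigma)

def log_likelihood(xs):
    """Forward algorithm with per-step rescaling for numerical stability."""
    alpha = pi * np.exp(emission_logpdf(xs[0]))
    ll = np.log(alpha.sum()); alpha /= alpha.sum()
    for x in xs[1:]:
        alpha = (alpha @ T) * np.exp(emission_logpdf(x))
        ll += np.log(alpha.sum()); alpha /= alpha.sum()
    return ll

print(log_likelihood([0.1, -0.5, 2.3]))
```

In the paper's setting the affine map would be replaced by a trained normalizing flow per state; the HMM machinery around it is unchanged.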
- DenseHMM: Learning Hidden Markov Models by Learning Dense Representations [0.0]
We propose a modification of Hidden Markov Models (HMMs) that allows learning dense representations of both the hidden states and the observables.
Compared to the standard HMM, transition probabilities are not atomic but composed of these representations via kernelization.
Properties of the DenseHMM, such as learned co-occurrences and log-likelihoods, are studied empirically on synthetic and biomedical datasets.
arXiv Detail & Related papers (2020-12-17T17:48:27Z)
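A minimal sketch of how such kernelized transitions might look (the embedding shapes, names, and the softmax-over-inner-products form are our assumptions based on the summary): each hidden state gets dense vectors, and rows of the transition matrix come from similarities between them rather than from free parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, d = 4, 3
U = rng.normal(size=(n_states, d))  # "outgoing" state embeddings (toy)
Z = rng.normal(size=(n_states, d))  # "incoming" state embeddings (toy)

def transition_matrix(U, Z):
    """A[i, j] = softmax_j(<u_i, z_j>): every row is a valid distribution."""
    scores = U @ Z.T
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(scores)
    return e / e.sum(axis=1, keepdims=True)

A = transition_matrix(U, Z)
print(A.sum(axis=1))  # each row sums to 1
```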
- Scaling Hidden Markov Language Models [118.55908381553056]
This work revisits the challenge of scaling HMMs to language modeling datasets.
We propose methods for scaling HMMs to massive state spaces while maintaining efficient exact inference, a compact parameterization, and effective regularization.
arXiv Detail & Related papers (2020-11-09T18:51:55Z)
- Kernel learning approaches for summarising and combining posterior similarity matrices [68.8204255655161]
We build upon the notion of the posterior similarity matrix (PSM) in order to suggest new approaches for summarising the output of MCMC algorithms for Bayesian clustering models.
A key contribution of our work is the observation that PSMs are positive semi-definite, and hence can be used to define probabilistically-motivated kernel matrices.
arXiv Detail & Related papers (2020-09-27T14:16:14Z)
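Since the PSM construction is standard, a short sketch may help (the label draws are toy values, our illustration): entry (i, j) is the fraction of MCMC draws in which items i and j share a cluster, and because each draw contributes a co-membership matrix that is a sum of indicator outer products, the average is positive semi-definite, hence usable as a kernel matrix.

```python
import numpy as np

draws = np.array([   # 4 MCMC draws of cluster labels for 5 items (toy)
    [0, 0, 1, 1, 2],
    [0, 0, 0, 1, 1],
    [1, 0, 1, 2, 2],
    [0, 0, 1, 1, 1],
])

def posterior_similarity(draws):
    """PSM[i, j] = fraction of draws in which items i and j co-cluster."""
    n = draws.shape[1]
    psm = np.zeros((n, n))
    for labels in draws:
        psm += (labels[:, None] == labels[None, :])  # co-membership indicator
    return psm / len(draws)

psm = posterior_similarity(draws)
print(np.linalg.eigvalsh(psm).min() >= -1e-12)  # PSD check: True
```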