Fuzzy Cognitive Maps and Hidden Markov Models: Comparative Analysis of
Efficiency within the Confines of the Time Series Classification Task
- URL: http://arxiv.org/abs/2204.13455v1
- Date: Thu, 28 Apr 2022 12:41:05 GMT
- Title: Fuzzy Cognitive Maps and Hidden Markov Models: Comparative Analysis of
Efficiency within the Confines of the Time Series Classification Task
- Authors: Jakub Michał Bilski and Agnieszka Jastrzębska
- Abstract summary: We explore the application of Hidden Markov Models (HMMs) to time series classification.
Four models are identified: HMM NN (one HMM per series), HMM 1C (one HMM per class), FCM NN, and FCM 1C; these are then studied in a series of experiments.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Time series classification is one of the most popular machine learning tasks.
In this paper, we explore the application of the Hidden Markov Model (HMM) to time
series classification. We distinguish between two modes of HMM application: in the
first, a single model is built for each class; in the second, one HMM is built for
each time series. We then transfer both approaches for
classifier construction to the domain of Fuzzy Cognitive Maps. The four identified
models, HMM NN (one HMM per series), HMM 1C (one HMM per class), FCM NN, and
FCM 1C, are then studied in a series of experiments. We compare the
performance of different models and investigate the impact of their
hyperparameters on the time series classification accuracy. The empirical
evaluation shows a clear advantage of the one-model-per-series approach. The
results show that the choice between HMM and FCM should be dataset-dependent.
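The two classifier-construction schemes above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the `fit`/`log_likelihood` pair below is a hypothetical stand-in that scores a series with a single-Gaussian log-likelihood, whereas the paper fits actual HMMs (or FCMs). Only the NN vs. 1C training/decision structure is taken from the abstract.

```python
import math

# Hypothetical stand-in for an HMM: "fit" estimates a mean/variance over
# all values in the given series, and "log_likelihood" scores a new series
# under that Gaussian. A real study would fit genuine HMMs instead.
def fit(series_list):
    values = [x for s in series_list for x in s]
    mean = sum(values) / len(values)
    var = sum((x - mean) ** 2 for x in values) / len(values) or 1e-9
    return (mean, var)

def log_likelihood(model, series):
    mean, var = model
    return sum(-0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)
               for x in series)

def classify_nn(train, test_series):
    # NN scheme: one model per training series; the test series takes the
    # label of the single model that scores it highest.
    models = [(label, fit([s])) for s, label in train]
    return max(models, key=lambda m: log_likelihood(m[1], test_series))[0]

def classify_1c(train, test_series):
    # 1C scheme: one model per class, fitted on all series of that class;
    # the test series takes the label of the best-scoring class model.
    by_class = {}
    for s, label in train:
        by_class.setdefault(label, []).append(s)
    models = {label: fit(series) for label, series in by_class.items()}
    return max(models, key=lambda lbl: log_likelihood(models[lbl], test_series))
```

For example, with training pairs such as `([0.0, 0.1, 0.0], "low")` and `([1.0, 0.9, 1.0], "high")`, both classifiers assign a test series near 1.0 to `"high"`. The NN scheme trades more models (and inference cost) for per-series specificity, which matches the one-model-per-series advantage the abstract reports.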
Related papers
- Pyramidal Hidden Markov Model For Multivariate Time Series Forecasting [0.0]
The Hidden Markov Model (HMM) can predict the future value of a time series based on its current and previous values.
We propose a Pyramidal Hidden Markov Model (PHMM) that can capture multiple multistep states.
arXiv Detail & Related papers (2023-10-22T16:17:24Z) - Learning Hidden Markov Models Using Conditional Samples [72.20944611510198]
This paper is concerned with the computational complexity of learning the Hidden Markov Model (HMM)
In this paper, we consider an interactive access model, in which the algorithm can query for samples from the conditional distributions of the HMMs.
Specifically, we obtain efficient algorithms for learning HMMs in settings where we have query access to the exact conditional probabilities.
arXiv Detail & Related papers (2023-02-28T16:53:41Z) - Towards Similarity-Aware Time-Series Classification [51.2400839966489]
We study time-series classification (TSC), a fundamental task of time-series data mining.
We propose Similarity-Aware Time-Series Classification (SimTSC), a framework that models similarity information with graph neural networks (GNNs).
arXiv Detail & Related papers (2022-01-05T02:14:57Z) - Learning Circular Hidden Quantum Markov Models: A Tensor Network
Approach [34.77250498401055]
We show that c-HQMMs are equivalent to a constrained tensor network.
This equivalence enables us to provide an efficient learning model for c-HQMMs.
The proposed learning approach is evaluated on six real datasets.
arXiv Detail & Related papers (2021-10-29T23:09:31Z) - Cluster-and-Conquer: A Framework For Time-Series Forecasting [94.63501563413725]
We propose a three-stage framework for forecasting high-dimensional time-series data.
Our framework is highly general, allowing for any time-series forecasting and clustering method to be used in each step.
When instantiated with simple linear autoregressive models, we are able to achieve state-of-the-art results on several benchmark datasets.
arXiv Detail & Related papers (2021-10-26T20:41:19Z) - Normalizing Flow based Hidden Markov Models for Classification of Speech
Phones with Explainability [25.543231171094384]
In pursuit of explainability, we develop generative models for sequential data.
We combine modern neural networks (normalizing flows) and traditional generative models (hidden Markov models - HMMs)
The proposed generative models can compute the likelihood of the data and hence are directly suitable for the maximum-likelihood (ML) classification approach.
arXiv Detail & Related papers (2021-07-01T20:10:55Z) - Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis
of Head and Prompt Tuning [66.44344616836158]
We propose an analysis framework that links the pretraining and downstream tasks with an underlying latent variable generative model of text.
We show that 1) under certain non-degeneracy conditions on the HMM, simple classification heads can solve the downstream task, 2) prompt tuning obtains downstream guarantees with weaker non-degeneracy conditions, and 3) our recovery guarantees for the memory-augmented HMM are stronger than for the vanilla HMM.
arXiv Detail & Related papers (2021-06-17T03:31:47Z) - Equivalence of Segmental and Neural Transducer Modeling: A Proof of
Concept [56.46135010588918]
We prove that the widely used class of RNN-Transducer models and segmental models (direct HMM) are equivalent.
It is shown that blank probabilities translate into segment length probabilities and vice versa.
arXiv Detail & Related papers (2021-04-13T11:20:48Z) - Robust Classification using Hidden Markov Models and Mixtures of
Normalizing Flows [25.543231171094384]
We use a generative model that combines the state transitions of a hidden Markov model (HMM) and the neural network based probability distributions for the hidden states of the HMM.
We verify the improved robustness of NMM-HMM classifiers in an application to speech recognition.
arXiv Detail & Related papers (2021-02-15T00:40:30Z) - Indoor Group Activity Recognition using Multi-Layered HMMs [0.0]
Group Activities (GA) based on imagery data processing have significant applications in surveillance systems.
We propose Ontology GAR with a proper inference model that is capable of identifying and classifying a sequence of events in group activities.
A multi-layered Hidden Markov Model (HMM) is proposed to recognize different levels of abstract observations.
arXiv Detail & Related papers (2021-01-23T22:02:12Z) - Scaling Hidden Markov Language Models [118.55908381553056]
This work revisits the challenge of scaling HMMs to language modeling datasets.
We propose methods for scaling HMMs to massive state spaces while maintaining efficient exact inference, a compact parameterization, and effective regularization.
arXiv Detail & Related papers (2020-11-09T18:51:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.