Few-shot time series segmentation using prototype-defined infinite
hidden Markov models
- URL: http://arxiv.org/abs/2102.03885v1
- Date: Sun, 7 Feb 2021 19:02:33 GMT
- Title: Few-shot time series segmentation using prototype-defined infinite
hidden Markov models
- Authors: Yazan Qarout and Yordan P. Raykov and Max A. Little
- Abstract summary: We propose a framework for interpretable, few-shot analysis of non-stationary sequential data based on flexible graphical models.
We show that RBF networks can be efficiently specified via prototypes, allowing us to express complex non-stationary patterns.
The utility of the framework is demonstrated on biomedical signal processing applications such as automated seizure detection from EEG data.
- Score: 3.527894538672585
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a robust framework for interpretable, few-shot analysis of
non-stationary sequential data based on flexible graphical models to express
the structured distribution of sequential events, using prototype radial basis
function (RBF) neural network emissions. A motivational link is demonstrated
between prototypical neural network architectures for few-shot learning and the
proposed RBF network infinite hidden Markov model (RBF-iHMM). We show that RBF
networks can be efficiently specified via prototypes, allowing us to express
complex non-stationary patterns, while hidden Markov models are used to infer
principled high-level Markov dynamics. The utility of the framework is
demonstrated on biomedical signal processing applications such as automated
seizure detection from EEG data where RBF networks achieve state-of-the-art
performance using a fraction of the data needed to train long short-term memory
variational autoencoders.
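As a concrete illustration of how prototypes specify the emission model, the following minimal NumPy sketch (not the authors' code; the function names, the shared bandwidth sigma, and the equal-weight kernel mixture are assumptions) shows a per-state RBF emission built from a handful of labelled prototype segments:

    import numpy as np

    def rbf_activations(x, prototypes, sigma=1.0):
        # Squared distances from observation x (shape (D,)) to each prototype centre (shape (K, D)).
        sq_dist = np.sum((prototypes - x) ** 2, axis=1)
        return np.exp(-sq_dist / (2.0 * sigma ** 2))

    def emission_log_likelihoods(x, state_prototypes, sigma=1.0):
        # One unnormalised log-likelihood per hidden state; every state is defined only
        # by the few prototype segments labelled for it (the few-shot specification).
        logls = []
        for prototypes in state_prototypes:            # one (K_s, D) array per state
            act = rbf_activations(x, prototypes, sigma)
            # Equally weighted mixture of Gaussian kernels centred on the prototypes
            # (normalising constants omitted; values are comparable across states
            # because sigma is shared).
            logls.append(np.log(np.mean(act) + 1e-12))
        return np.array(logls)

    # Toy usage: two hidden states, each specified by two 3-dimensional prototypes.
    state_prototypes = [np.random.randn(2, 3), np.random.randn(2, 3) + 5.0]
    x_t = np.random.randn(3)
    print(emission_log_likelihoods(x_t, state_prototypes))

In the full RBF-iHMM these per-state log-likelihoods would feed the usual HMM message passing; the sketch only illustrates the few-shot, prototype-based specification of the emissions.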
Related papers
- DPFAGA-Dynamic Power Flow Analysis and Fault Characteristics: A Graph Attention Neural Network [0.19439126568870457]
We propose a joint graph attention neural network (GAT) and clustering with adaptive neighbors (CAN) framework for dynamic power flow analysis and fault characterization.
We then evaluate the proposed framework on a smart-grid use case and provide a fair comparison with existing methods.
arXiv Detail & Related papers (2025-03-19T03:17:11Z)
- Scalable Weibull Graph Attention Autoencoder for Modeling Document Networks [50.42343781348247]
We develop a graph Poisson factor analysis (GPFA) which provides analytic conditional posteriors to improve the inference accuracy.
We also extend GPFA to a multi-stochastic-layer version named graph Poisson gamma belief network (GPGBN) to capture the hierarchical document relationships at multiple semantic levels.
Our models can extract high-quality hierarchical latent document representations and achieve promising performance on various graph analytic tasks.
arXiv Detail & Related papers (2024-10-13T02:22:14Z)
- Steering Masked Discrete Diffusion Models via Discrete Denoising Posterior Prediction [88.65168366064061]
We introduce Discrete Denoising Posterior Prediction (DDPP), a novel framework that casts the task of steering pre-trained MDMs as a problem of probabilistic inference.
Our framework leads to a family of three novel objectives that are all simulation-free, and thus scalable.
We substantiate our designs via wet-lab validation, where we observe transient expression of reward-optimized protein sequences.
arXiv Detail & Related papers (2024-10-10T17:18:30Z)
- Spatial Temporal Graph Convolution with Graph Structure Self-learning for Early MCI Detection [9.11430195887347]
We propose a spatial temporal graph convolutional network with a novel graph structure self-learning mechanism for EMCI detection.
Results on the Alzheimer's Disease Neuroimaging Initiative database show that our method outperforms state-of-the-art approaches.
arXiv Detail & Related papers (2022-11-11T12:29:00Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
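As a loose sketch of the contrastive scoring idea described above (not the paper's architecture; the bilinear discriminator and all names are assumptions), a node can be scored by how much more it agrees with its own local subgraph than with a randomly sampled one:

    import numpy as np

    def discriminator(node_emb, ctx_emb, W):
        # Bilinear agreement score in (0, 1) between a node embedding and a subgraph context embedding.
        return 1.0 / (1.0 + np.exp(-(node_emb @ W @ ctx_emb)))

    def anomaly_score(node_emb, local_ctx, random_ctx, W):
        # A normal node should agree with its own neighbourhood far more than with a random one;
        # the score rises when that gap shrinks or inverts.
        pos = discriminator(node_emb, local_ctx, W)    # node vs. its sampled local subgraph
        neg = discriminator(node_emb, random_ctx, W)   # node vs. a randomly sampled subgraph
        return neg - pos

    # Toy usage with 8-dimensional embeddings (in practice these would come from a GNN encoder).
    rng = np.random.default_rng(0)
    W = rng.normal(size=(8, 8))
    node, local_ctx, rand_ctx = rng.normal(size=(3, 8))
    print(anomaly_score(node, local_ctx, rand_ctx, W))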
- Neural BRDF Representation and Importance Sampling [79.84316447473873]
We present a compact neural network-based representation of reflectance BRDF data.
We encode BRDFs as lightweight networks, and propose a training scheme with adaptive angular sampling.
We evaluate encoding results on isotropic and anisotropic BRDFs from multiple real-world datasets.
arXiv Detail & Related papers (2021-02-11T12:00:24Z)
- Mutually exciting point process graphs for modelling dynamic networks [0.0]
A new class of models for dynamic networks is proposed, called mutually exciting point process graphs (MEG).
MEG is a scalable network-wide statistical model for point processes with dyadic marks, which can be used for anomaly detection.
The model is tested on simulated graphs and real world computer network datasets, demonstrating excellent performance.
arXiv Detail & Related papers (2021-02-11T10:14:55Z)
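The "mutually exciting" dynamics referenced above are Hawkes-type: the event rate on an edge is temporarily raised by recent events. A generic exponential-kernel intensity, not MEG's exact parameterization, can be sketched as follows (the baseline mu, excitation weight alpha, and decay beta are illustrative parameters):

    import numpy as np

    def exciting_intensity(t, event_times, mu=0.1, alpha=0.5, beta=1.0):
        # Conditional intensity at time t: a constant baseline plus
        # exponentially decaying boosts from every past event.
        past = np.asarray([s for s in event_times if s < t])
        return mu + alpha * np.sum(np.exp(-beta * (t - past)))

    # Toy usage: events observed at these times on one edge of the graph.
    events = [0.5, 0.9, 2.3]
    print(exciting_intensity(3.0, events))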
- Ensembles of Spiking Neural Networks [0.3007949058551534]
This paper demonstrates how to construct ensembles of spiking neural networks producing state-of-the-art results.
We achieve classification accuracies of 98.71%, 100.0%, and 99.09% on the MNIST, NMNIST, and DVS Gesture datasets, respectively.
We formalize spiking neural networks as GLM predictors, identifying a suitable representation for their target domain.
arXiv Detail & Related papers (2020-10-15T17:45:18Z)
- Risk-Averse MPC via Visual-Inertial Input and Recurrent Networks for Online Collision Avoidance [95.86944752753564]
We propose an online path planning architecture that extends the model predictive control (MPC) formulation to consider future location uncertainties.
Our algorithm combines an object detection pipeline with a recurrent neural network (RNN) which infers the covariance of state estimates.
The robustness of our method is validated on complex quadruped robot dynamics, and the approach can be applied to most robotic platforms.
arXiv Detail & Related papers (2020-07-28T07:34:30Z)
- Graph Neural Networks for Leveraging Industrial Equipment Structure: An application to Remaining Useful Life Estimation [21.297461316329453]
We propose to capture the structure of a complex equipment in the form of a graph, and use graph neural networks (GNNs) to model multi-sensor time-series data.
We observe that the proposed GNN-based RUL estimation model compares favorably to several strong baselines from the literature, such as those based on RNNs and CNNs.
arXiv Detail & Related papers (2020-06-30T06:38:08Z)
- Deep Autoencoding Topic Model with Scalable Hybrid Bayesian Inference [55.35176938713946]
We develop deep autoencoding topic model (DATM) that uses a hierarchy of gamma distributions to construct its multi-stochastic-layer generative network.
We propose a Weibull upward-downward variational encoder that deterministically propagates information upward via a deep neural network, followed by a downward generative model.
The efficacy and scalability of our models are demonstrated on both unsupervised and supervised learning tasks on big corpora.
arXiv Detail & Related papers (2020-06-15T22:22:56Z)
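The Weibull choice in the encoder above is commonly paired with a simple reparameterization trick: a Weibull(k, lam) draw is a deterministic, differentiable transform of a uniform sample, which is what lets gradients pass through a variational encoder. A minimal sketch (parameter names are illustrative, not the paper's notation):

    import numpy as np

    def sample_weibull(k, lam, rng):
        # Reparameterized Weibull(k, lam) sample: lam * (-log(1 - u)) ** (1 / k), u ~ Uniform(0, 1).
        # The transform is differentiable in k and lam, so it suits variational encoders.
        u = rng.uniform(size=np.shape(lam))
        return lam * (-np.log1p(-u)) ** (1.0 / k)

    rng = np.random.default_rng(1)
    print(sample_weibull(k=2.0, lam=1.5, rng=rng))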
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.