Semi-supervised sequence classification through change point detection
- URL: http://arxiv.org/abs/2009.11829v2
- Date: Tue, 6 Oct 2020 15:50:39 GMT
- Title: Semi-supervised sequence classification through change point detection
- Authors: Nauman Ahad, Mark A. Davenport
- Abstract summary: We propose a novel framework for semi-supervised learning in settings where class labels are scarce but unlabeled sequence data is plentiful.
In an unsupervised manner, change point detection methods can be used to identify points within a sequence corresponding to likely class changes.
We show that change points provide examples of similar/dissimilar pairs of sequences which, when coupled with labeled data, can be used in a semi-supervised classification setting.
- Score: 22.14406516031776
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sequential sensor data is generated in a wide variety of practical
applications. A fundamental challenge involves learning effective classifiers
for such sequential data. While deep learning has led to impressive performance
gains in recent years in domains such as speech, this has relied on the
availability of large datasets of sequences with high-quality labels. In many
applications, however, the associated class labels are often extremely limited,
with precise labeling/segmentation being too expensive to perform at a high
volume. However, large amounts of unlabeled data may still be available. In
this paper we propose a novel framework for semi-supervised learning in such
contexts. In an unsupervised manner, change point detection methods can be used
to identify points within a sequence corresponding to likely class changes. We
show that change points provide examples of similar/dissimilar pairs of
sequences which, when coupled with labeled data, can be used in a semi-supervised
classification setting. Leveraging the change points and labeled data, we form
examples of similar/dissimilar sequences to train a neural network to learn
improved representations for classification. We provide extensive synthetic
simulations and show that the learned representations are superior to those
learned through an autoencoder, and we obtain improved results on both simulated
and real-world human activity recognition datasets.
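The pipeline the abstract describes can be sketched end to end: run an off-the-shelf change point detector over unlabeled sequences, treat windows drawn from within one segment as similar and windows straddling a change point as dissimilar, and train an encoder with a contrastive loss. The sketch below uses the `ruptures` package for detection; the encoder architecture, window length, and penalty value are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of the proposed pipeline; `pip install ruptures torch` is assumed, and
# the architecture/hyperparameters are illustrative, not the paper's exact setup.
import numpy as np
import ruptures as rpt
import torch
import torch.nn as nn

def detect_change_points(signal, penalty=10.0):
    """Unsupervised change point detection (PELT with an RBF cost)."""
    return rpt.Pelt(model="rbf").fit(signal).predict(pen=penalty)

def make_pairs(signal, breakpoints, window=50):
    """Similar pairs: two windows inside one segment.
    Dissimilar pairs: windows from adjacent segments (straddling a change point)."""
    starts = [0] + breakpoints[:-1]
    segments = [signal[s:e] for s, e in zip(starts, breakpoints)]
    similar = [(seg[:window], seg[-window:]) for seg in segments
               if len(seg) >= 2 * window]
    dissimilar = [(a[-window:], b[:window]) for a, b in zip(segments, segments[1:])
                  if len(a) >= window and len(b) >= window]
    return similar, dissimilar

class Encoder(nn.Module):
    """Toy window encoder standing in for the paper's representation network."""
    def __init__(self, window, dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(window, 64), nn.ReLU(),
                                 nn.Linear(64, dim))
    def forward(self, x):
        return self.net(x)

def contrastive_loss(z1, z2, same, margin=1.0):
    """Pull similar pairs together, push dissimilar pairs beyond a margin."""
    d = torch.norm(z1 - z2, dim=1)
    return torch.where(same, d ** 2, torch.clamp(margin - d, min=0) ** 2).mean()

# Tiny demo: a sequence with a mean shift at t=200.
signal = np.concatenate([np.random.randn(200), np.random.randn(200) + 3.0])
bkps = detect_change_points(signal)
sim, dis = make_pairs(signal, bkps, window=50)
pairs = sim + dis
a = torch.tensor(np.stack([p[0] for p in pairs]), dtype=torch.float32)
b = torch.tensor(np.stack([p[1] for p in pairs]), dtype=torch.float32)
same = torch.tensor([True] * len(sim) + [False] * len(dis))
enc = Encoder(window=50)
loss = contrastive_loss(enc(a), enc(b), same)
loss.backward()
```

Labeled sequences can then contribute additional pairs (same class as similar, different class as dissimilar) before a standard classifier is fit on the learned representations.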
Related papers
- Class-Level Logit Perturbation [0.0]
Feature perturbation and label perturbation have been proven to be useful in various deep learning approaches.
New methodologies are proposed to explicitly learn to perturb logits for both single-label and multi-label classification tasks.
As it only perturbs on logit, it can be used as a plug-in to fuse with any existing classification algorithms.
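As a rough illustration of the plug-in idea, a minimal sketch might add a learnable per-class offset to the logits before the usual loss; the paper's actual method learns the perturbation with its own objectives, so the details below are assumptions.

```python
# Sketch of class-level logit perturbation as a plug-in: a learnable per-class
# offset added to the logits before any existing loss. Illustrative only.
import torch
import torch.nn as nn

class LogitPerturbation(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.delta = nn.Parameter(torch.zeros(num_classes))  # one offset per class

    def forward(self, logits):
        return logits + self.delta  # perturbed logits feed the existing loss

model = nn.Linear(16, 5)                   # any existing classifier head
perturb = LogitPerturbation(num_classes=5)
x, y = torch.randn(8, 16), torch.randint(0, 5, (8,))
loss = nn.functional.cross_entropy(perturb(model(x)), y)
loss.backward()
```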
arXiv Detail & Related papers (2022-09-13T00:49:32Z)
- Evolving Multi-Label Fuzzy Classifier [5.53329677986653]
Multi-label classification has attracted much attention in the machine learning community to address the problem of assigning single samples to more than one class at the same time.
We propose an evolving multi-label fuzzy classifier (EFC-ML) which is able to self-adapt and self-evolve its structure with new incoming multi-label samples in an incremental, single-pass manner.
arXiv Detail & Related papers (2022-03-29T08:01:03Z)
- Learning with Neighbor Consistency for Noisy Labels [69.83857578836769]
We present a method for learning from noisy labels that leverages similarities between training examples in feature space.
We evaluate our method on datasets exhibiting both synthetic (CIFAR-10, CIFAR-100) and realistic (mini-WebVision, Clothing1M, mini-ImageNet-Red) noise.
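A minimal sketch of a neighbor-consistency regularizer consistent with this summary: pull each sample's prediction toward a similarity-weighted average of its nearest neighbors' predictions in feature space. The weighting scheme, temperature, and number of neighbors are illustrative assumptions.

```python
# Sketch: regularize each sample's prediction toward those of its feature-space
# neighbors. Weighting and hyperparameters are assumptions for illustration.
import torch
import torch.nn.functional as F

def neighbor_consistency_loss(features, logits, k=5, temp=0.1):
    feats = F.normalize(features, dim=1)
    sims = feats @ feats.t()                    # cosine similarities
    sims.fill_diagonal_(float("-inf"))          # exclude self-matches
    vals, idx = sims.topk(k, dim=1)             # k nearest neighbors per sample
    weights = F.softmax(vals / temp, dim=1)     # (N, k) neighbor weights
    probs = F.softmax(logits, dim=1)            # (N, C) predictions
    neighbor_probs = (weights.unsqueeze(-1) * probs[idx]).sum(dim=1)
    return F.kl_div(probs.log(), neighbor_probs.detach(), reduction="batchmean")

feats, logits = torch.randn(32, 64), torch.randn(32, 10)
reg = neighbor_consistency_loss(feats, logits)  # added to the usual CE loss
```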
arXiv Detail & Related papers (2022-02-04T15:46:27Z)
- CvS: Classification via Segmentation For Small Datasets [52.821178654631254]
This paper presents CvS, a cost-effective classifier for small datasets that derives the classification labels from predicting the segmentation maps.
We evaluate the effectiveness of our framework on diverse problems, showing that CvS achieves substantially higher classification accuracy than previous methods when given only a handful of examples.
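One simple way to derive a classification label from a predicted segmentation map, in the spirit of this summary, is a majority vote over non-background pixels; the segmentation network itself and CvS's training details are omitted, and treating class 0 as background is an assumption.

```python
# Sketch: turn a per-pixel segmentation prediction into a single class label by
# majority vote over foreground pixels. Class 0 is assumed to be background.
import torch

def label_from_segmentation(seg_logits):
    """seg_logits: (C, H, W) per-pixel class scores."""
    pixel_classes = seg_logits.argmax(dim=0)       # (H, W) predicted class per pixel
    counts = torch.bincount(pixel_classes.flatten(),
                            minlength=seg_logits.shape[0])
    counts[0] = 0                                  # ignore background pixels
    return int(counts.argmax())                    # most frequent foreground class

seg = torch.randn(6, 64, 64)    # 5 foreground classes + background
print(label_from_segmentation(seg))
```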
arXiv Detail & Related papers (2021-10-29T18:41:15Z)
- Model-Change Active Learning in Graph-Based Semi-Supervised Learning [5.174023161939957]
"Model Change" active learning quantifies the resulting change by introducing the additional label(s)
We consider a family of convex loss functions for which the acquisition function can be efficiently approximated using the Laplace approximation of the posterior distribution.
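A rough sketch of the model-change idea: score each unlabeled point by how much adding it, with its predicted label, would move the current parameters. A one-step gradient norm for a logistic model stands in here for the paper's Laplace-approximated acquisition function.

```python
# Sketch: rank unlabeled points by an approximate "model change" score, here
# the gradient norm a pseudo-labeled point would induce at the current
# solution. A cheap stand-in for the Laplace-approximated acquisition.
import numpy as np

def model_change_scores(w, X_unlabeled):
    """Logistic model: score ~ ||gradient of loss at (x, predicted y)|| = |p - y| * ||x||."""
    p = 1.0 / (1.0 + np.exp(-X_unlabeled @ w))    # predicted P(y=1 | x)
    y_hat = (p >= 0.5).astype(float)              # pseudo-label
    return np.abs(p - y_hat) * np.linalg.norm(X_unlabeled, axis=1)

w = np.zeros(10)
X_pool = np.random.randn(100, 10)
query = int(np.argmax(model_change_scores(w, X_pool)))  # point to label next
```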
arXiv Detail & Related papers (2021-10-14T21:47:10Z)
- Semi-supervised Long-tailed Recognition using Alternate Sampling [95.93760490301395]
The main challenges in long-tailed recognition come from the imbalanced data distribution and sample scarcity in its tail classes.
We propose a new recognition setting, namely semi-supervised long-tailed recognition.
We demonstrate significant accuracy improvements over other competitive methods on two datasets.
arXiv Detail & Related papers (2021-05-01T00:43:38Z)
- PseudoSeg: Designing Pseudo Labels for Semantic Segmentation [78.35515004654553]
We present a re-design of pseudo-labeling to generate structured pseudo labels for training with unlabeled or weakly-labeled data.
We demonstrate the effectiveness of the proposed pseudo-labeling strategy in both low-data and high-data regimes.
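A minimal sketch of confidence-thresholded pseudo-labeling for segmentation, the generic mechanism this summary builds on: per-pixel teacher predictions become training targets where confidence is high and are masked out elsewhere. PseudoSeg's calibration and fusion steps are omitted, and the 0.9 threshold is an illustrative assumption.

```python
# Sketch: per-pixel pseudo-labels from a teacher, masked by confidence.
# PseudoSeg's specific calibration/fusion design is not reproduced here.
import torch
import torch.nn.functional as F

def pseudo_label_loss(student_logits, teacher_logits, threshold=0.9):
    probs = F.softmax(teacher_logits, dim=1)     # (N, C, H, W)
    conf, target = probs.max(dim=1)              # confidence and label per pixel
    mask = (conf >= threshold).float()           # keep only confident pixels
    loss = F.cross_entropy(student_logits, target, reduction="none")
    return (loss * mask).sum() / mask.sum().clamp(min=1.0)

s, t = torch.randn(2, 21, 32, 32), torch.randn(2, 21, 32, 32)
print(pseudo_label_loss(s, t * 3))  # sharper teacher -> more confident pixels
```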
arXiv Detail & Related papers (2020-10-19T17:59:30Z)
- Few-shot Learning for Multi-label Intent Detection [59.66787898744991]
State-of-the-art work estimates label-instance relevance scores and uses a threshold to select multiple associated intent labels.
Experiments on two datasets show that the proposed model significantly outperforms strong baselines in both one-shot and five-shot settings.
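The threshold-based selection step the summary mentions is simple to sketch; the scores, intent names, and threshold below are illustrative, and the paper's contribution lies in how the scores and threshold are learned in the few-shot regime.

```python
# Sketch: select all intents whose relevance score clears a threshold.
# Scores, labels, and the 0.5 threshold are illustrative assumptions.
import torch

scores = torch.tensor([0.81, 0.12, 0.55, 0.07, 0.63])  # relevance per intent
labels = ["book_flight", "cancel", "add_meal", "greet", "seat_upgrade"]
threshold = 0.5
selected = [l for l, s in zip(labels, scores) if s >= threshold]
print(selected)  # ['book_flight', 'add_meal', 'seat_upgrade']
```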
arXiv Detail & Related papers (2020-10-11T14:42:18Z)
- Multi-label Stream Classification with Self-Organizing Maps [2.055054374525828]
We propose an online incremental method based on self-organizing maps for multi-label stream classification with infinitely delayed labels.
In the classification phase, we use a k-nearest neighbors strategy to compute the winning neurons in the maps.
We predict labels for each instance using the Bayes rule and the outputs of each neuron, updating the prior and conditional probabilities of the classes as the stream evolves.
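A sketch of the prediction step described above: find the k winning (nearest) neurons for an instance, then score each label from per-neuron label statistics in a simple Bayes-style estimate. Map construction and the incremental updates are omitted, and the data below is synthetic.

```python
# Sketch: k winning neurons + per-neuron label statistics -> multi-label
# prediction. A simplified stand-in for the paper's Bayes-rule machinery.
import numpy as np

def predict_labels(x, prototypes, label_counts, wins, k=3, threshold=0.5):
    """prototypes: (M, D) neuron weights; label_counts: (M, L) per-neuron label
    counts; wins: (M,) times each neuron won."""
    d = np.linalg.norm(prototypes - x, axis=1)
    nearest = np.argsort(d)[:k]                       # k winning neurons
    cond = label_counts[nearest] / np.maximum(wins[nearest, None], 1)
    return (cond.mean(axis=0) >= threshold).astype(int)  # one bit per label

protos = np.random.rand(20, 8)
counts = np.random.randint(0, 10, size=(20, 4)).astype(float)
wins = np.random.randint(10, 20, size=20).astype(float)
print(predict_labels(np.random.rand(8), protos, counts, wins))
```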
arXiv Detail & Related papers (2020-04-20T15:52:38Z)
- Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
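One plausible reading of the auxiliary objective, sketched under stated assumptions: encourage the input-space gradient of the network's output to align with the direction from an example to its counterfactual. This illustrates the idea rather than reproducing the paper's exact loss.

```python
# Sketch: align the input-space gradient with the example-to-counterfactual
# direction via cosine distance. An illustration, not the paper's exact loss.
import torch
import torch.nn.functional as F

def gradient_supervision_loss(model, x, x_cf):
    x = x.clone().requires_grad_(True)
    out = model(x).sum()
    grad = torch.autograd.grad(out, x, create_graph=True)[0]
    direction = x_cf - x.detach()                 # toward the counterfactual
    cos = F.cosine_similarity(grad.flatten(1), direction.flatten(1), dim=1)
    return (1.0 - cos).mean()                     # align gradient with direction

model = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.ReLU(),
                            torch.nn.Linear(16, 1))
x, x_cf = torch.randn(4, 8), torch.randn(4, 8)
print(gradient_supervision_loss(model, x, x_cf))
```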
arXiv Detail & Related papers (2020-04-20T02:47:49Z)