Interpretable Time-series Classification on Few-shot Samples
- URL: http://arxiv.org/abs/2006.02031v2
- Date: Thu, 2 Jul 2020 06:49:53 GMT
- Title: Interpretable Time-series Classification on Few-shot Samples
- Authors: Wensi Tang, Lu Liu, Guodong Long
- Abstract summary: This paper proposes an interpretable neural-based framework, namely Dual Prototypical Shapelet Networks (DPSN), for few-shot time-series classification.
DPSN interprets the model from dual granularity: 1) global overview using representative time series samples, and 2) local highlights using discriminative shapelets.
We have derived 18 few-shot TSC datasets from public benchmark datasets and evaluated the proposed method by comparing with baselines.
- Score: 27.05851877375113
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent few-shot learning works focus on training a model with prior
meta-knowledge to fast adapt to new tasks with unseen classes and samples.
However, conventional time-series classification algorithms fail to tackle the
few-shot scenario. Existing few-shot learning methods are proposed to tackle
image or text data, and most of them are neural-based models that lack
interpretability. This paper proposes an interpretable neural-based framework,
namely \textit{Dual Prototypical Shapelet Networks (DPSN)} for few-shot
time-series classification, which not only trains a neural network-based model
but also interprets the model from dual granularity: 1) global overview using
representative time series samples, and 2) local highlights using
discriminative shapelets. In particular, the generated dual prototypical
shapelets consist of representative samples that can mostly demonstrate the
overall shapes of all samples in the class and discriminative partial-length
shapelets that can be used to distinguish different classes. We have derived 18
few-shot TSC datasets from public benchmark datasets and evaluated the proposed
method by comparing with baselines. The DPSN framework outperforms
state-of-the-art time-series classification methods, especially when training
with limited amounts of data. Several case studies are given to
demonstrate the interpretability of our model.
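The abstract's core idea is that a class can be interpreted through a representative sample (global prototype) and a discriminative shapelet (local highlight), and an unseen series can be classified by how closely any of its subsequences matches each class's shapelet. The following is a minimal sketch of that matching step, not the authors' DPSN implementation; the function names, toy shapelets, and query series are hypothetical, and real shapelets would be learned by the network.

```python
import numpy as np

def shapelet_distance(series, shapelet):
    """Minimum Euclidean distance between the shapelet and any
    equal-length subsequence of the series (sliding window)."""
    m = len(shapelet)
    return min(
        np.linalg.norm(series[i:i + m] - shapelet)
        for i in range(len(series) - m + 1)
    )

def classify(series, class_shapelets):
    """Assign the class whose discriminative shapelet best matches
    some subsequence of the query series."""
    return min(class_shapelets,
               key=lambda c: shapelet_distance(series, class_shapelets[c]))

# Toy example: one class characterized by a spike, one by a flat segment.
shapelets = {
    "spike": np.array([0.0, 1.0, 0.0]),
    "flat": np.array([0.0, 0.0, 0.0]),
}
query = np.array([0.1, 0.0, 0.9, 0.1, 0.0])
print(classify(query, shapelets))  # the spike shapelet matches best
```

Because the decision reduces to "which shapelet matched, and where," the prediction can be explained by pointing at the matching subsequence, which is the local-highlight interpretability the abstract describes.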
Related papers
- Reinforcing Pre-trained Models Using Counterfactual Images [54.26310919385808]
This paper proposes a novel framework to reinforce classification models using language-guided generated counterfactual images.
We identify model weaknesses by testing the model using the counterfactual image dataset.
We employ the counterfactual images as an augmented dataset to fine-tune and reinforce the classification model.
arXiv Detail & Related papers (2024-06-19T08:07:14Z)
- Pre-Trained Vision-Language Models as Partial Annotators [40.89255396643592]
Pre-trained vision-language models learn from massive data to model unified representations of images and natural languages.
In this paper, we investigate a novel "pre-trained annotating - weakly-supervised learning" paradigm for pre-trained model application and experiment on image classification tasks.
arXiv Detail & Related papers (2024-05-23T17:17:27Z)
- Liberating Seen Classes: Boosting Few-Shot and Zero-Shot Text Classification via Anchor Generation and Classification Reframing [38.84431954053434]
Few-shot and zero-shot text classification aim to recognize samples from novel classes with limited labeled samples or no labeled samples at all.
We propose a simple and effective strategy for few-shot and zero-shot text classification.
arXiv Detail & Related papers (2024-05-06T15:38:32Z)
- Dual-View Data Hallucination with Semantic Relation Guidance for Few-Shot Image Recognition [49.26065739704278]
We propose a framework that exploits semantic relations to guide dual-view data hallucination for few-shot image recognition.
An instance-view data hallucination module hallucinates each sample of a novel class to generate new data.
A prototype-view data hallucination module exploits a semantic-aware measure to estimate the prototype of a novel class.
arXiv Detail & Related papers (2024-01-13T12:32:29Z)
- Generating Representative Samples for Few-Shot Classification [8.62483598990205]
Few-shot learning aims to learn new categories with a few visual samples per class.
Few-shot class representations are often biased due to data scarcity.
We generate visual samples based on semantic embeddings using a conditional variational autoencoder model.
arXiv Detail & Related papers (2022-05-05T20:58:33Z)
- Region Comparison Network for Interpretable Few-shot Image Classification [97.97902360117368]
Few-shot image classification has been proposed to effectively use only a limited number of labeled examples to train models for new classes.
We propose a metric learning based method named Region Comparison Network (RCN), which is able to reveal how few-shot learning works.
We also present a new way to generalize the interpretability from the level of tasks to categories.
arXiv Detail & Related papers (2020-09-08T07:29:05Z)
- Few-Shot Learning with Intra-Class Knowledge Transfer [100.87659529592223]
We consider the few-shot classification task with an unbalanced dataset.
Recent works have proposed to solve this task by augmenting the training data of the few-shot classes using generative models.
We propose to leverage the intra-class knowledge from the neighbor many-shot classes with the intuition that neighbor classes share similar statistical information.
arXiv Detail & Related papers (2020-08-22T18:15:38Z)
- Few-shot Classification via Adaptive Attention [93.06105498633492]
We propose a novel few-shot learning method via optimizing and fast adapting the query sample representation based on very few reference samples.
As demonstrated experimentally, the proposed model achieves state-of-the-art classification results on various benchmark few-shot classification and fine-grained recognition datasets.
arXiv Detail & Related papers (2020-08-06T05:52:59Z)
- Explanation-Guided Training for Cross-Domain Few-Shot Classification [96.12873073444091]
Cross-domain few-shot classification task (CD-FSC) combines few-shot classification with the requirement to generalize across domains represented by datasets.
We introduce a novel training approach for existing FSC models.
We show that explanation-guided training effectively improves the model generalization.
arXiv Detail & Related papers (2020-07-17T07:28:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.