Learning Discriminative Prototypes with Dynamic Time Warping
- URL: http://arxiv.org/abs/2103.09458v1
- Date: Wed, 17 Mar 2021 06:11:11 GMT
- Title: Learning Discriminative Prototypes with Dynamic Time Warping
- Authors: Xiaobin Chang, Frederick Tung, Greg Mori
- Abstract summary: We propose Discriminative Prototype DTW (DP-DTW), a novel method to learn class-specific discriminative prototypes for temporal recognition tasks.
DP-DTW shows superior performance compared to conventional DTWs on time series classification benchmarks.
- Score: 49.03785686097989
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Dynamic Time Warping (DTW) is widely used for temporal data processing.
However, existing methods can neither learn the discriminative prototypes of
different classes nor exploit such prototypes for further analysis. We propose
Discriminative Prototype DTW (DP-DTW), a novel method to learn class-specific
discriminative prototypes for temporal recognition tasks. DP-DTW shows superior
performance compared to conventional DTWs on time series classification
benchmarks. Combined with end-to-end deep learning, DP-DTW can handle
challenging weakly supervised action segmentation problems and achieves state
of the art results on standard benchmarks. Moreover, detailed reasoning on the
input video is enabled by the learned action prototypes. Specifically, an
action-based video summarization can be obtained by aligning the input sequence
with action prototypes.
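To make the prototype-matching idea concrete, the following is a minimal sketch of classic DTW distance computed by dynamic programming, paired with nearest-prototype classification. This illustrates the conventional baseline the abstract compares against, not the learned DP-DTW method itself; the sequences and class names are hypothetical.

```python
import numpy as np

def dtw_distance(x, y):
    """Classic DTW distance between two 1-D sequences via dynamic programming."""
    n, m = len(x), len(y)
    # D[i, j] = cost of the best warping path aligning x[:i] with y[:j]
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # Extend the cheapest of the three predecessor alignments
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(query, prototypes):
    """Assign the query to the class whose prototype is nearest under DTW."""
    return min(prototypes, key=lambda c: dtw_distance(query, prototypes[c]))

# Hypothetical per-class prototypes; DP-DTW would instead learn these end-to-end.
prototypes = {
    "rise": np.array([0.0, 1.0, 2.0, 3.0]),
    "fall": np.array([3.0, 2.0, 1.0, 0.0]),
}
print(classify(np.array([0.1, 1.2, 1.9, 3.1]), prototypes))  # prints "rise"
```

DTW's warping path also yields an alignment between the query and the winning prototype, which is the mechanism the abstract leverages for action-based video summarization.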
Related papers
- Deep Attentive Time Warping [22.411355064531143]
We propose a neural network model for task-adaptive time warping.
We use the attention model, called the bipartite attention model, to develop an explicit time warping mechanism.
Unlike other learnable models using DTW for warping, our model predicts all local correspondences between two time series.
arXiv Detail & Related papers (2023-09-13T04:49:49Z)
- A Prototypical Semantic Decoupling Method via Joint Contrastive Learning for Few-Shot Name Entity Recognition [24.916377682689955]
Few-shot named entity recognition (NER) aims at identifying named entities based on only a few labeled instances.
We propose a Prototypical Semantic Decoupling method via joint Contrastive learning (PSDC) for few-shot NER.
Experimental results on two few-shot NER benchmarks demonstrate that PSDC consistently outperforms the previous SOTA methods in terms of overall performance.
arXiv Detail & Related papers (2023-02-27T09:20:00Z)
- Multimodal Prototype-Enhanced Network for Few-Shot Action Recognition [40.329190454146996]
MultimOdal PRototype-ENhanced Network (MORN) uses semantic information of label texts as multimodal information to enhance prototypes.
We conduct extensive experiments on four popular few-shot action recognition datasets.
arXiv Detail & Related papers (2022-12-09T14:24:39Z)
- Automatically Discovering Novel Visual Categories with Self-supervised Prototype Learning [68.63910949916209]
This paper tackles the problem of novel category discovery (NCD), which aims to discriminate unknown categories in large-scale image collections.
We propose a novel adaptive prototype learning method consisting of two main stages: prototypical representation learning and prototypical self-training.
We conduct extensive experiments on four benchmark datasets and demonstrate the effectiveness and robustness of the proposed method with state-of-the-art performance.
arXiv Detail & Related papers (2022-08-01T16:34:33Z)
- Fine-grained Temporal Contrastive Learning for Weakly-supervised Temporal Action Localization [87.47977407022492]
This paper argues that learning by contextually comparing sequence-to-sequence distinctions offers an essential inductive bias in weakly-supervised action localization.
Under a differentiable dynamic programming formulation, two complementary contrastive objectives are designed, including Fine-grained Sequence Distance (FSD) contrasting and Longest Common Subsequence (LCS) contrasting.
Our method achieves state-of-the-art performance on two popular benchmarks.
arXiv Detail & Related papers (2022-03-31T05:13:50Z)
- Dual Prototypical Contrastive Learning for Few-shot Semantic Segmentation [55.339405417090084]
We propose a dual prototypical contrastive learning approach tailored to the few-shot semantic segmentation (FSS) task.
The main idea is to make the prototypes more discriminative by increasing inter-class distance while reducing intra-class distance in the prototype feature space.
We demonstrate that the proposed dual contrastive learning approach outperforms state-of-the-art FSS methods on PASCAL-5i and COCO-20i datasets.
arXiv Detail & Related papers (2021-11-09T08:14:50Z)
- Few-shot Action Recognition with Prototype-centered Attentive Learning [88.10852114988829]
The Prototype-centered Attentive Learning (PAL) model is composed of two novel components.
First, a prototype-centered contrastive learning loss is introduced to complement the conventional query-centered learning objective.
Second, PAL integrates an attentive hybrid learning mechanism that can minimize the negative impacts of outliers.
arXiv Detail & Related papers (2021-01-20T11:48:12Z)
- Prototypical Contrast and Reverse Prediction: Unsupervised Skeleton Based Action Recognition [12.463955174384457]
We propose a novel framework named Prototypical Contrast and Reverse Prediction (PCRP).
PCRP creates reverse sequential prediction to learn low-level information and high-level patterns.
It also devises action prototypes to implicitly encode semantic similarity shared among sequences.
arXiv Detail & Related papers (2020-11-14T08:04:23Z)
- Prototypical Contrastive Learning of Unsupervised Representations [171.3046900127166]
Prototypical Contrastive Learning (PCL) is an unsupervised representation learning method.
PCL implicitly encodes semantic structures of the data into the learned embedding space.
PCL outperforms state-of-the-art instance-wise contrastive learning methods on multiple benchmarks.
arXiv Detail & Related papers (2020-05-11T09:53:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.