Unsupervised Discriminative Embedding for Sub-Action Learning in Complex
Activities
- URL: http://arxiv.org/abs/2105.00067v1
- Date: Fri, 30 Apr 2021 20:07:27 GMT
- Title: Unsupervised Discriminative Embedding for Sub-Action Learning in Complex
Activities
- Authors: Sirnam Swetha, Hilde Kuehne, Yogesh S Rawat, Mubarak Shah
- Abstract summary: This paper proposes a novel approach for unsupervised sub-action learning in complex activities.
The proposed method maps both visual and temporal representations to a latent space where the sub-actions are learnt discriminatively.
We show that the proposed combination of visual-temporal embedding and discriminative latent concepts allows learning robust action representations in an unsupervised setting.
- Score: 54.615003524001686
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Action recognition and detection in the context of long untrimmed video
sequences has seen increased attention from the research community. However,
annotation of complex activities is usually time-consuming and challenging in
practice. Therefore, recent works started to tackle the problem of unsupervised
learning of sub-actions in complex activities. This paper proposes a novel
approach for unsupervised sub-action learning in complex activities. The
proposed method maps both visual and temporal representations to a latent space
where the sub-actions are learnt discriminatively in an end-to-end fashion. To
this end, we propose to learn sub-actions as latent concepts and a novel
discriminative latent concept learning (DLCL) module aids in learning
sub-actions. The proposed DLCL module builds on the idea of latent concepts to
learn compact representations in the latent embedding space in an unsupervised
way. The result is a set of latent vectors that can be interpreted as cluster
centers in the embedding space. The latent space itself is formed by a joint
visual and temporal embedding capturing the visual similarity and temporal
ordering of the data. Our joint learning with the discriminative latent concept
module is novel in that it eliminates the need for explicit clustering. We validate
our approach on three benchmark datasets and show that the proposed combination
of visual-temporal embedding and discriminative latent concepts allows learning
robust action representations in an unsupervised setting.
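As a rough illustration of the idea described above, the sketch below implements a generic latent-concept module in PyTorch: a set of learnable vectors acts as cluster centres in the embedding space, frame embeddings are softly assigned to concepts via cosine similarity, and an entropy-based objective encourages confident yet balanced assignments. The module name, dimensions, and the loss are illustrative assumptions, not the paper's actual DLCL formulation.

```python
# Minimal sketch (not the authors' code): learnable latent concepts acting as
# cluster centres over joint visual-temporal embeddings. The loss is a generic
# discriminative surrogate (confident per-frame assignments + balanced concept
# usage), standing in for the DLCL objective described in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LatentConceptModule(nn.Module):
    def __init__(self, feat_dim=64, n_concepts=10, temperature=0.1):
        super().__init__()
        # K learnable latent vectors, interpretable as cluster centres.
        self.concepts = nn.Parameter(torch.randn(n_concepts, feat_dim))
        self.temperature = temperature

    def forward(self, z):
        # z: (N, feat_dim) frame embeddings from a visual-temporal encoder.
        z = F.normalize(z, dim=-1)
        c = F.normalize(self.concepts, dim=-1)
        logits = z @ c.t() / self.temperature        # (N, K) similarities
        return logits

    def loss(self, logits):
        p = logits.softmax(dim=-1)                   # soft concept assignments
        # Encourage confident (low-entropy) per-frame assignments ...
        per_frame_entropy = -(p * p.clamp_min(1e-8).log()).sum(-1).mean()
        # ... while keeping overall concept usage balanced to avoid collapse.
        marginal = p.mean(dim=0)
        balance = (marginal * marginal.clamp_min(1e-8).log()).sum()
        return per_frame_entropy + balance


# Toy usage with random tensors standing in for the joint visual-temporal
# features; in practice z would come from the learned embedding network.
if __name__ == "__main__":
    module = LatentConceptModule(feat_dim=64, n_concepts=10)
    z = torch.randn(256, 64)                         # hypothetical embeddings
    logits = module(z)
    print(module.loss(logits))                       # scalar training signal
```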
Related papers
- Learning in Hybrid Active Inference Models [0.8749675983608172]
We present a novel hierarchical hybrid active inference agent in which a high-level discrete active inference planner sits above a low-level continuous active inference controller.
We make use of recent work in recurrent switching linear dynamical systems which implement end-to-end learning of meaningful discrete representations.
We apply our model to the sparse Continuous Mountain Car task, demonstrating fast system identification via enhanced exploration and successful planning.
arXiv Detail & Related papers (2024-09-02T08:41:45Z) - An Information Compensation Framework for Zero-Shot Skeleton-based Action Recognition [49.45660055499103]
Zero-shot human skeleton-based action recognition aims to construct a model that can recognize actions outside the categories seen during training.
Previous research has focused on aligning sequences' visual and semantic spatial distributions.
We introduce a new loss function sampling method to obtain a tight and robust representation.
arXiv Detail & Related papers (2024-06-02T06:53:01Z) - Understanding Distributed Representations of Concepts in Deep Neural
Networks without Supervision [25.449397570387802]
We propose an unsupervised method for discovering distributed representations of concepts by selecting a principal subset of neurons.
Our empirical findings demonstrate that instances with similar neuron activation states tend to share coherent concepts.
It can be utilized to identify unlabeled subclasses within data and to detect the causes of misclassifications.
arXiv Detail & Related papers (2023-12-28T07:33:51Z) - SCD-Net: Spatiotemporal Clues Disentanglement Network for
Self-supervised Skeleton-based Action Recognition [39.99711066167837]
This paper introduces a contrastive learning framework, the Spatiotemporal Clues Disentanglement Network (SCD-Net).
Specifically, we integrate the sequences with a feature extractor to derive explicit clues from the spatial and temporal domains, respectively.
We conduct evaluations on the NTU-RGB+D (60 & 120) and PKU-MMD (I & II) datasets, covering downstream tasks such as action recognition, action retrieval, and transfer learning.
arXiv Detail & Related papers (2023-09-11T21:32:13Z) - Cross-Video Contextual Knowledge Exploration and Exploitation for
Ambiguity Reduction in Weakly Supervised Temporal Action Localization [23.94629999419033]
Weakly supervised temporal action localization (WSTAL) aims to localize actions in untrimmed videos using video-level labels.
Our work addresses this from a novel perspective, by exploring and exploiting the cross-video contextual knowledge within the dataset.
Our method outperforms the state-of-the-art methods, and can be easily plugged into other WSTAL methods.
arXiv Detail & Related papers (2023-08-24T07:19:59Z) - Semi-supervised learning made simple with self-supervised clustering [65.98152950607707]
Self-supervised learning models have been shown to learn rich visual representations without requiring human annotations.
We propose a conceptually simple yet empirically powerful approach to turn clustering-based self-supervised methods into semi-supervised learners.
arXiv Detail & Related papers (2023-06-13T01:09:18Z) - Self-Regulated Learning for Egocentric Video Activity Anticipation [147.9783215348252]
Self-Regulated Learning (SRL) aims to regulate the intermediate representation consecutively to produce a representation that emphasizes the novel information in the frame at the current time-stamp.
SRL sharply outperforms the existing state-of-the-art in most cases on two egocentric video datasets and two third-person video datasets.
arXiv Detail & Related papers (2021-11-23T03:29:18Z) - Co$^2$L: Contrastive Continual Learning [69.46643497220586]
Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that can be transferred better to unseen tasks.
We propose a rehearsal-based continual learning algorithm that focuses on continually learning and maintaining transferable representations.
arXiv Detail & Related papers (2021-06-28T06:14:38Z) - Deep Clustering by Semantic Contrastive Learning [67.28140787010447]
We introduce a novel variant called Semantic Contrastive Learning (SCL).
It explores the characteristics of both conventional contrastive learning and deep clustering.
It can amplify the strengths of contrastive learning and deep clustering in a unified approach.
arXiv Detail & Related papers (2021-03-03T20:20:48Z)