Learning Class Regularized Features for Action Recognition
- URL: http://arxiv.org/abs/2002.02651v1
- Date: Fri, 7 Feb 2020 07:27:49 GMT
- Title: Learning Class Regularized Features for Action Recognition
- Authors: Alexandros Stergiou, Ronald Poppe, and Remco C. Veltkamp
- Abstract summary: We introduce a novel method named Class Regularization that performs class-based regularization of layer activations.
We show that using Class Regularization blocks in state-of-the-art CNN architectures for action recognition leads to systematic improvement gains of 1.8%, 1.2% and 1.4% on the Kinetics, UCF-101 and HMDB-51 datasets, respectively.
- Score: 68.90994813947405
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training Deep Convolutional Neural Networks (CNNs) is based on the notion of
using multiple kernels and non-linearities in their subsequent activations to
extract useful features. The kernels are used as general feature extractors
without specific correspondence to the target class. As a result, the extracted
features do not correspond to specific classes. Subtle differences between
similar classes are modeled in the same way as large differences between
dissimilar classes. To overcome the class-agnostic use of kernels in CNNs, we
introduce a novel method named Class Regularization that performs class-based
regularization of layer activations. We demonstrate that this not only improves
feature search during training, but also allows an explicit assignment of
features per class during each stage of the feature extraction process. We show
that using Class Regularization blocks in state-of-the-art CNN architectures
for action recognition leads to systematic improvement gains of 1.8%, 1.2% and
1.4% on the Kinetics, UCF-101 and HMDB-51 datasets, respectively.
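To make the idea above concrete, here is a minimal PyTorch-style sketch of what a class-based regularization block could look like: pooled layer activations are mapped to per-class affinities, and those affinities are fed back to re-scale the feature maps, tying intermediate features to classes. This is an illustrative reconstruction under assumptions, not the authors' released code; the layer names (`to_classes`, `to_channels`) and the sigmoid re-scaling are stand-ins (the paper relates activations to the network's class weights, which a self-contained example cannot share).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassRegularizationBlock(nn.Module):
    """Illustrative class-based regularization of layer activations:
    pooled features are related to class vectors, and the resulting
    class affinities are fed back to re-scale the feature maps."""
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        # Stand-in projections; in the paper the class correspondence comes
        # from the prediction layer, here a linear layer keeps this runnable.
        self.to_classes = nn.Linear(in_channels, num_classes)
        self.to_channels = nn.Linear(num_classes, in_channels)

    def forward(self, x):                       # x: (B, C, T, H, W) video features
        pooled = x.mean(dim=(2, 3, 4))          # spatio-temporal average pool -> (B, C)
        affinity = F.softmax(self.to_classes(pooled), dim=1)     # class affinities (B, K)
        scale = torch.sigmoid(self.to_channels(affinity))        # back to channel space (B, C)
        return x * scale.view(x.size(0), -1, 1, 1, 1)            # class-conditioned re-scaling

# Toy usage inside a 3D-CNN stage (e.g. 400 classes, as in Kinetics-400).
block = ClassRegularizationBlock(in_channels=256, num_classes=400)
out = block(torch.randn(2, 256, 8, 14, 14))
print(out.shape)  # torch.Size([2, 256, 8, 14, 14])
```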
Related papers
- DICS: Find Domain-Invariant and Class-Specific Features for Out-of-Distribution Generalization [26.382349137191547]
In vision tasks, both domain-related and class-shared features act as confounders that hinder generalization.
We propose a DICS model to extract Domain-Invariant and Class-Specific features.
DICS effectively identifies the key features of each class in target domains.
arXiv Detail & Related papers (2024-09-13T06:20:21Z)
- Enhancing Visual Continual Learning with Language-Guided Supervision [76.38481740848434]
Continual learning aims to empower models to learn new tasks without forgetting previously acquired knowledge.
We argue that the scarce semantic information conveyed by the one-hot labels hampers the effective knowledge transfer across tasks.
Specifically, we use PLMs to generate semantic targets for each class, which are frozen and serve as supervision signals.
arXiv Detail & Related papers (2024-03-24T12:41:58Z)
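A minimal sketch of the language-guided supervision idea summarized in the entry above: class names are embedded once by a frozen pretrained language model and the vision features are trained to match the target of their ground-truth class rather than a one-hot label. `encode_class_names` is a hypothetical stand-in for any frozen text encoder, and the temperature value is an illustrative choice, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def encode_class_names(class_names):
    # Hypothetical stand-in for a frozen pretrained language model:
    # returns one fixed, normalized embedding per class name.
    torch.manual_seed(0)
    return F.normalize(torch.randn(len(class_names), 512), dim=-1)

class_names = ["brushing teeth", "playing guitar", "surfing"]
semantic_targets = encode_class_names(class_names)   # frozen semantic targets, (K, D)

vision_head = nn.Linear(2048, 512)                   # projects backbone features to target space

def language_guided_loss(backbone_feats, labels, temperature=0.07):
    z = F.normalize(vision_head(backbone_feats), dim=-1)     # (B, D)
    logits = z @ semantic_targets.t() / temperature          # similarity to each class target
    return F.cross_entropy(logits, labels)                   # pull features towards own class target

loss = language_guided_loss(torch.randn(4, 2048), torch.tensor([0, 2, 1, 0]))
print(loss.item())
```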
- Dynamic Perceiver for Efficient Visual Recognition [87.08210214417309]
We propose Dynamic Perceiver (Dyn-Perceiver) to decouple the feature extraction procedure and the early classification task.
A feature branch serves to extract image features, while a classification branch processes a latent code assigned for classification tasks.
Early exits are placed exclusively within the classification branch, thus eliminating the need for linear separability in low-level features.
arXiv Detail & Related papers (2023-06-20T03:00:22Z)
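A rough sketch, under assumptions, of the two-branch design with early exits described in the entry above: a convolutional feature branch extracts image features stage by stage, a latent classification branch attends to them via cross-attention, and exit classifiers are attached only to the latent branch. This illustrates the idea, not the actual Dyn-Perceiver architecture; module sizes and the confidence-based exit rule are made up for the example.

```python
import torch
import torch.nn as nn

class TwoBranchEarlyExit(nn.Module):
    """Illustrative two-branch model: feature branch + latent classification
    branch, with early exits only on the classification branch."""
    def __init__(self, num_classes=10, latent_dim=128, num_latents=8):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Sequential(nn.Conv2d(c_in, c_out, 3, stride=2, padding=1),
                          nn.BatchNorm2d(c_out), nn.ReLU())
            for c_in, c_out in [(3, 32), (32, 64), (64, 128)]
        ])
        self.latent = nn.Parameter(torch.randn(num_latents, latent_dim))
        self.cross_attn = nn.ModuleList([
            nn.MultiheadAttention(latent_dim, num_heads=4, batch_first=True)
            for _ in range(3)
        ])
        self.proj = nn.ModuleList([nn.Linear(c, latent_dim) for c in (32, 64, 128)])
        self.exits = nn.ModuleList([nn.Linear(latent_dim, num_classes) for _ in range(3)])

    def forward(self, x, exit_threshold=None):
        z = self.latent.unsqueeze(0).expand(x.size(0), -1, -1)
        outputs = []
        for stage, attn, proj, head in zip(self.stages, self.cross_attn, self.proj, self.exits):
            x = stage(x)                                 # feature branch
            tokens = proj(x.flatten(2).transpose(1, 2))  # B x HW x D image tokens
            z, _ = attn(z, tokens, tokens)               # latent branch queries the features
            logits = head(z.mean(dim=1))                 # exit head on the classification branch
            outputs.append(logits)
            if exit_threshold is not None:
                if bool((logits.softmax(-1).max(-1).values > exit_threshold).all()):
                    break                                # confident enough: exit early
        return outputs

model = TwoBranchEarlyExit()
preds = model(torch.randn(2, 3, 64, 64), exit_threshold=0.9)
print(len(preds), preds[-1].shape)
```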
- AttriCLIP: A Non-Incremental Learner for Incremental Knowledge Learning [53.32576252950481]
Continual learning aims to enable a model to incrementally learn knowledge from sequentially arrived data.
In this paper, we propose a non-incremental learner, named AttriCLIP, to incrementally extract knowledge of new classes or tasks.
arXiv Detail & Related papers (2023-05-19T07:39:17Z)
- Class-Specific Attention (CSA) for Time-Series Classification [8.390973438687777]
We propose a novel class-specific attention (CSA) module to capture significant class-specific features and improve the overall classification performance of time series.
An NN model embedded with the CSA module improves on the base model in most cases, with accuracy gains of up to 42%.
Our statistical analysis shows that an NN model embedded with the CSA module outperforms the base NN model on 67% of MTS and 80% of UTS test cases.
arXiv Detail & Related papers (2022-11-19T07:51:51Z)
- FRANS: Automatic Feature Extraction for Time Series Forecasting [2.3226893628361682]
We develop an autonomous Feature Retrieving Autoregressive Network for Static features that does not require domain knowledge.
Our results show that our features improve accuracy in most situations.
arXiv Detail & Related papers (2022-09-15T03:14:59Z)
- Exploring Category-correlated Feature for Few-shot Image Classification [27.13708881431794]
We present a simple yet effective feature rectification method by exploring the category correlation between novel and base classes as the prior knowledge.
The proposed approach consistently obtains considerable performance gains on three widely used benchmarks.
arXiv Detail & Related papers (2021-12-14T08:25:24Z)
- No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data [78.69828864672978]
A central challenge in training classification models in the real-world federated system is learning with non-IID data.
We propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model.
Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks including CIFAR-10, CIFAR-100, and CINIC-10.
arXiv Detail & Related papers (2021-06-09T12:02:29Z)
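A hedged sketch of the classifier-calibration idea described in the entry above, assuming per-class feature means and covariances have already been estimated (in the federated setting, aggregated on the server): virtual features are sampled from the per-class Gaussians and only the classifier is fine-tuned on them. `calibrate_classifier` and the toy statistics are illustrative, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def calibrate_classifier(classifier, class_stats, samples_per_class=200, steps=100):
    # class_stats: {label: (mean, cov)} approximating each class's feature distribution
    feats, labels = [], []
    for label, (mean, cov) in class_stats.items():
        dist = torch.distributions.MultivariateNormal(mean, covariance_matrix=cov)
        feats.append(dist.sample((samples_per_class,)))          # virtual representations
        labels.append(torch.full((samples_per_class,), label, dtype=torch.long))
    feats, labels = torch.cat(feats), torch.cat(labels)
    opt = torch.optim.SGD(classifier.parameters(), lr=0.01)
    for _ in range(steps):                                       # fine-tune classifier only
        opt.zero_grad()
        loss = F.cross_entropy(classifier(feats), labels)
        loss.backward()
        opt.step()
    return classifier

# Toy usage with made-up statistics: 3 classes, 16-d features.
dim, num_classes = 16, 3
stats = {c: (torch.randn(dim), torch.eye(dim)) for c in range(num_classes)}
clf = nn.Linear(dim, num_classes)
calibrate_classifier(clf, stats)
```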
- GAN for Vision, KG for Relation: a Two-stage Deep Network for Zero-shot Action Recognition [33.23662792742078]
We propose a two-stage deep neural network for zero-shot action recognition.
In the sampling stage, we utilize a generative adversarial network (GAN) trained on action features and word vectors of seen classes.
In the classification stage, we construct a knowledge graph based on the relationship between word vectors of action classes and related objects.
arXiv Detail & Related papers (2021-05-25T09:34:42Z)
- Conditional Variational Capsule Network for Open Set Recognition [64.18600886936557]
In open set recognition, a classifier has to detect unknown classes that are not known at training time.
Recently proposed Capsule Networks have shown to outperform alternatives in many fields, particularly in image recognition.
In our proposal, during training, capsule features of the same known class are encouraged to match a pre-defined Gaussian, one for each class.
arXiv Detail & Related papers (2021-04-19T09:39:30Z)
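A hedged sketch of the per-class Gaussian matching idea from the entry above: each known class is assigned a fixed target Gaussian, training pulls same-class features towards its mean, and test samples far from every class mean can be rejected as unknown. The unit covariance, the scale of the target means, and the distance threshold are assumptions made for illustration, not values from the paper.

```python
import torch
import torch.nn.functional as F

num_classes, dim = 5, 64
torch.manual_seed(0)
# Pre-defined target Gaussians N(mu_k, I), one per known class.
class_means = F.normalize(torch.randn(num_classes, dim), dim=-1) * 3.0

def gaussian_match_loss(features, labels):
    # Negative log-likelihood under the class's target Gaussian, up to a constant.
    return 0.5 * ((features - class_means[labels]) ** 2).sum(dim=1).mean()

def is_unknown(features, threshold=100.0):
    # A sample far from every class Gaussian is flagged as unknown (open set).
    d2 = torch.cdist(features, class_means) ** 2
    return d2.min(dim=1).values > threshold

feats = class_means[torch.tensor([0, 1])] + 0.1 * torch.randn(2, dim)
print(gaussian_match_loss(feats, torch.tensor([0, 1])))   # small: features match their class
print(is_unknown(torch.randn(2, dim) * 10))               # True: far from all class means
```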
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.