AggSS: An Aggregated Self-Supervised Approach for Class-Incremental Learning
- URL: http://arxiv.org/abs/2408.04347v1
- Date: Thu, 8 Aug 2024 10:16:02 GMT
- Title: AggSS: An Aggregated Self-Supervised Approach for Class-Incremental Learning
- Authors: Jayateja Kalla, Soma Biswas
- Abstract summary: This paper investigates the impact of self-supervised learning, specifically image rotations, on various class-incremental learning paradigms.
We observe a shift in the deep neural network's attention towards intrinsic object features as it learns through the AggSS strategy.
AggSS serves as a plug-and-play module that can be seamlessly incorporated into any class-incremental learning framework.
- Score: 17.155759991260094
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper investigates the impact of self-supervised learning, specifically image rotations, on various class-incremental learning paradigms. Here, each image with a predefined rotation is considered as a new class for training. At inference, all image rotation predictions are aggregated for the final prediction, a strategy we term Aggregated Self-Supervision (AggSS). We observe a shift in the deep neural network's attention towards intrinsic object features as it learns through the AggSS strategy. This learning approach significantly enhances class-incremental learning by promoting robust feature learning. AggSS serves as a plug-and-play module that can be seamlessly incorporated into any class-incremental learning framework, leveraging its powerful feature learning capabilities to enhance performance across various class-incremental learning approaches. Extensive experiments conducted on the standard incremental learning datasets CIFAR-100 and ImageNet-Subset demonstrate the significant role of AggSS in improving performance within these paradigms.
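The AggSS recipe described above is simple enough to sketch. Below is a minimal PyTorch sketch, assuming the usual NCHW image layout: during training, each of the four rotations of an image defines a new class, and at inference the rotation-specific predictions are pooled into a single class score. Summing softmax scores is our assumption for the aggregation step; the paper may combine predictions differently.

```python
import torch
import torch.nn.functional as F

NUM_ROTATIONS = 4  # 0, 90, 180, 270 degrees

def expand_with_rotations(images, labels):
    """Rotate each image four ways; original class c at rotation r becomes class c * 4 + r."""
    rotated, new_labels = [], []
    for r in range(NUM_ROTATIONS):
        rotated.append(torch.rot90(images, k=r, dims=(2, 3)))  # NCHW batch
        new_labels.append(labels * NUM_ROTATIONS + r)
    return torch.cat(rotated), torch.cat(new_labels)

@torch.no_grad()
def aggregated_predict(model, images):
    """Sum the softmax scores of each class across its four rotated copies."""
    scores = 0.0
    for r in range(NUM_ROTATIONS):
        logits = model(torch.rot90(images, k=r, dims=(2, 3)))  # (N, num_classes * 4)
        scores = scores + F.softmax(logits, dim=1)[:, r::NUM_ROTATIONS]
    return scores.argmax(dim=1)  # final class prediction
```

During training, the classifier head would have num_classes * 4 outputs and be trained with ordinary cross-entropy on the expanded batch returned by expand_with_rotations.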
Related papers
- Point Cloud Understanding via Attention-Driven Contrastive Learning [64.65145700121442]
Transformer-based models have advanced point cloud understanding by leveraging self-attention mechanisms.
PointACL is an attention-driven contrastive learning framework designed to address the limitations of these models.
Our method employs an attention-driven dynamic masking strategy that guides the model to focus on under-attended regions.
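To make the masking idea concrete, here is a hedged PyTorch sketch of attention-driven masking: tokens that receive the least attention are masked out, pushing the model toward those under-attended regions. The scoring rule (head- and query-averaged attention) and the mask ratio are illustrative assumptions, not PointACL's published procedure.

```python
import torch

def mask_under_attended(tokens, attn, mask_ratio=0.3):
    """tokens: (N, T, D) point/patch embeddings; attn: (N, heads, T, T) attention weights."""
    received = attn.mean(dim=1).mean(dim=1)      # attention each token receives, (N, T)
    num_mask = int(mask_ratio * tokens.shape[1])
    idx = received.argsort(dim=1)[:, :num_mask]  # indices of the least-attended tokens
    mask = torch.zeros_like(received, dtype=torch.bool)
    mask.scatter_(1, idx, True)
    return tokens.masked_fill(mask.unsqueeze(-1), 0.0), mask
```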
arXiv Detail & Related papers (2024-11-22T05:41:00Z)
- DSReLU: A Novel Dynamic Slope Function for Superior Model Training [2.2057562301812674]
The rationale behind this approach is to overcome limitations associated with traditional activation functions, such as ReLU.
Evaluated on the Mini-ImageNet, CIFAR-100, and MIT-BIH datasets, our method demonstrated improvements in classification metrics and generalization capabilities.
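The summary does not give DSReLU's exact form, so the following is only an illustrative guess at what a dynamic-slope activation could look like: a leaky-ReLU variant whose negative slope follows a training-progress schedule. The linear schedule and slope values are assumptions, not the paper's definition.

```python
import torch
import torch.nn as nn

class DynamicSlopeReLU(nn.Module):
    """Leaky-ReLU-like activation whose negative slope changes over training."""
    def __init__(self, start_slope=0.3, end_slope=0.01):
        super().__init__()
        self.start, self.end = start_slope, end_slope
        self.progress = 0.0  # fraction of training completed, set by the training loop

    def forward(self, x):
        slope = self.start + (self.end - self.start) * self.progress
        return torch.where(x > 0, x, slope * x)
```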
arXiv Detail & Related papers (2024-08-17T10:01:30Z)
- Enhancing Generative Class Incremental Learning Performance with Model Forgetting Approach [50.36650300087987]
This study presents a novel approach to Generative Class Incremental Learning (GCIL) by introducing the forgetting mechanism.
We find that integrating the forgetting mechanism significantly enhances the model's performance in acquiring new knowledge.
arXiv Detail & Related papers (2024-03-27T05:10:38Z)
- Self-Supervised Representation Learning with Meta Comprehensive Regularization [11.387994024747842]
We introduce a module called CompMod with Meta Comprehensive Regularization (MCR), embedded into existing self-supervised frameworks.
We update our proposed model through a bi-level optimization mechanism, enabling it to capture comprehensive features.
We provide theoretical support for our proposed method from information-theoretic and causal counterfactual perspectives.
arXiv Detail & Related papers (2024-03-03T15:53:48Z)
- Robust Feature Learning and Global Variance-Driven Classifier Alignment for Long-Tail Class Incremental Learning [20.267257778779992]
This paper introduces a two-stage framework designed to enhance long-tail class incremental learning.
We address the challenge posed by the under-representation of tail classes in long-tail class incremental learning.
The proposed framework can seamlessly integrate as a module with any class incremental learning method.
arXiv Detail & Related papers (2023-11-02T13:28:53Z)
- Multi-View Class Incremental Learning [57.14644913531313]
Multi-view learning (MVL) has gained great success in integrating information from multiple perspectives of a dataset to improve downstream task performance.
This paper investigates a novel paradigm called multi-view class incremental learning (MVCIL), where a single model incrementally classifies new classes from a continual stream of views.
arXiv Detail & Related papers (2023-06-16T08:13:41Z)
- Gated Self-supervised Learning For Improving Supervised Learning [1.784933900656067]
We propose a novel approach to self-supervised learning for image classification that uses several localizable augmentations combined with a gating method.
Our approach uses flip and channel-shuffle augmentations in addition to rotation, allowing the model to learn rich features from the data.
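The named augmentations map directly onto pseudo-label generation, sketched below in PyTorch; the gating that combines the resulting auxiliary heads is not described in this summary and is omitted here.

```python
import torch

def augment_with_labels(images):
    """images: (N, C, H, W) -> list of (augmented batch, pseudo-label) pairs."""
    out = [(images, 0)]                                         # identity
    for r in (1, 2, 3):
        out.append((torch.rot90(images, k=r, dims=(2, 3)), r))  # 90/180/270 rotations
    out.append((torch.flip(images, dims=(3,)), 4))              # horizontal flip
    perm = torch.randperm(images.shape[1])
    out.append((images[:, perm], 5))                            # channel shuffle
    return out
```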
arXiv Detail & Related papers (2023-01-14T09:32:12Z)
- Self-Supervised Class Incremental Learning [51.62542103481908]
Existing Class Incremental Learning (CIL) methods are based on a supervised classification framework sensitive to data labels.
When updated with new class data, they suffer from catastrophic forgetting: the model can no longer clearly distinguish old class data from the new.
In this paper, we explore the performance of Self-Supervised representation learning in Class Incremental Learning (SSCIL) for the first time.
arXiv Detail & Related papers (2021-11-18T06:58:19Z)
- Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
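For context, the generic inner/outer loop of optimization-based meta-learning that this summary alludes to looks like the MAML-style sketch below; this is a textbook illustration with illustrative names, not the paper's specific few-iteration method.

```python
import torch
import torch.nn.functional as F

def linear_forward(params, x):
    w, b = params
    return x @ w + b

def inner_adapt(params, x_s, y_s, lr=0.1, steps=3):
    """A few gradient steps on the support set; graph is kept for the meta-update."""
    params = [p.clone() for p in params]
    for _ in range(steps):
        loss = F.cross_entropy(linear_forward(params, x_s), y_s)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        params = [p - lr * g for p, g in zip(params, grads)]
    return params

# One meta-training step: adapt on the support set, evaluate on the query set.
w = torch.randn(8, 5, requires_grad=True)
b = torch.zeros(5, requires_grad=True)
x_s, y_s = torch.randn(10, 8), torch.randint(0, 5, (10,))
x_q, y_q = torch.randn(10, 8), torch.randint(0, 5, (10,))
adapted = inner_adapt([w, b], x_s, y_s)
meta_loss = F.cross_entropy(linear_forward(adapted, x_q), y_q)
meta_loss.backward()  # gradients flow back to the initial w, b
```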
arXiv Detail & Related papers (2020-10-01T15:59:31Z)
- Guided Variational Autoencoder for Disentanglement Learning [79.02010588207416]
We propose an algorithm, guided variational autoencoder (Guided-VAE), that is able to learn a controllable generative model by performing latent representation disentanglement learning.
We design an unsupervised strategy and a supervised strategy in Guided-VAE and observe enhanced modeling and controlling capability over the vanilla VAE.
arXiv Detail & Related papers (2020-04-02T20:49:15Z)