Gated Self-supervised Learning For Improving Supervised Learning
- URL: http://arxiv.org/abs/2301.05865v1
- Date: Sat, 14 Jan 2023 09:32:12 GMT
- Title: Gated Self-supervised Learning For Improving Supervised Learning
- Authors: Erland Hilman Fuadi, Aristo Renaldo Ruslim, Putu Wahyu Kusuma
Wardhana, Novanto Yudistira
- Abstract summary: We propose a novel approach to self-supervised learning for image classification using several localizable augmentations combined with a gating method.
Our approach uses flip and channel-shuffle augmentations in addition to rotation, allowing the model to learn rich features from the data.
- Score: 1.784933900656067
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In past research on self-supervised learning for image classification, the
use of rotation as an augmentation has been common. However, relying solely on
rotation as a self-supervised transformation can limit the ability of the model
to learn rich features from the data. In this paper, we propose a novel
approach to self-supervised learning for image classification that uses several
localizable augmentations combined with a gating method. Our approach adds flip
and channel-shuffle augmentations to the rotation task, allowing the model to
learn richer features from the data. Furthermore, a gated mixture network is
used to weight the contribution of each self-supervised task to the loss
function, allowing the model to focus on the transformations most relevant for
classification.
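A minimal sketch of the gating idea is shown below. This is not the authors' implementation; it assumes a shared backbone, three auxiliary pretext heads (rotation, flip, and channel-shuffle prediction), and a small gating network that produces softmax weights over the three self-supervised losses before they are added to the supervised cross-entropy. All module and variable names are illustrative, and the pretext labels are assumed to come from the data pipeline that applies each transformation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedSelfSupervised(nn.Module):
    """Sketch: supervised classifier with gated self-supervised auxiliary tasks.

    Assumed pretext tasks: 4-way rotation, 2-way flip, and k-way channel-shuffle
    prediction. A gating (mixture) network weights the auxiliary losses.
    """

    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int,
                 num_shuffles: int = 6):
        super().__init__()
        self.backbone = backbone                      # shared feature extractor
        self.cls_head = nn.Linear(feat_dim, num_classes)
        self.rot_head = nn.Linear(feat_dim, 4)        # 0 / 90 / 180 / 270 degrees
        self.flip_head = nn.Linear(feat_dim, 2)       # flipped or not
        self.shuffle_head = nn.Linear(feat_dim, num_shuffles)
        self.gate = nn.Sequential(                    # gating network over SSL losses
            nn.Linear(feat_dim, 3), nn.Softmax(dim=-1)
        )

    def forward(self, x, y, rot_y, flip_y, shuffle_y):
        f = self.backbone(x)                          # (B, feat_dim) features
        sup_loss = F.cross_entropy(self.cls_head(f), y)
        ssl_losses = torch.stack([
            F.cross_entropy(self.rot_head(f), rot_y),
            F.cross_entropy(self.flip_head(f), flip_y),
            F.cross_entropy(self.shuffle_head(f), shuffle_y),
        ])
        gate_w = self.gate(f).mean(dim=0)             # average gate weights over the batch
        return sup_loss + (gate_w * ssl_losses).sum() # gated sum of auxiliary losses
```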
Related papers
- AggSS: An Aggregated Self-Supervised Approach for Class-Incremental Learning [17.155759991260094]
This paper investigates the impact of self-supervised learning, specifically image rotations, on various class-incremental learning paradigms.
We observe a shift in the deep neural network's attention towards intrinsic object features as it learns through the AggSS strategy.
AggSS serves as a plug-and-play module that can be seamlessly incorporated into any class-incremental learning framework.
arXiv Detail & Related papers (2024-08-08T10:16:02Z)
- Enhancing Generative Class Incremental Learning Performance with Model Forgetting Approach [50.36650300087987]
This study presents a novel approach to Generative Class Incremental Learning (GCIL) by introducing the forgetting mechanism.
We have found that integrating the forgetting mechanism significantly enhances the model's performance in acquiring new knowledge.
arXiv Detail & Related papers (2024-03-27T05:10:38Z)
- Class incremental learning with probability dampening and cascaded gated classifier [4.285597067389559]
We propose a novel incremental regularisation approach called Margin Dampening and Cascaded Scaling.
The first combines a soft constraint and a knowledge distillation approach to preserve past knowledge while allowing the model to learn new patterns.
We empirically show that our approach performs well on multiple benchmarks against well-established baselines.
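As a generic illustration of how a knowledge-distillation term is typically combined with the classification loss in incremental learning, consider the sketch below. It is not the specific Margin Dampening / Cascaded Scaling formulation; the temperature, weighting, and the assumption that the frozen old model covers the first `n_old` logits are illustrative.

```python
import torch.nn.functional as F

def incremental_loss(new_logits, old_logits, targets, T: float = 2.0, alpha: float = 0.5):
    """Generic sketch: cross-entropy on current data plus a soft distillation
    constraint that keeps the new model close to the frozen old model on the
    previously seen classes. Not the exact Margin Dampening objective."""
    ce = F.cross_entropy(new_logits, targets)
    n_old = old_logits.size(1)                        # old model covers the first n_old classes
    kd = F.kl_div(
        F.log_softmax(new_logits[:, :n_old] / T, dim=1),
        F.softmax(old_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return (1 - alpha) * ce + alpha * kd
```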
arXiv Detail & Related papers (2024-02-02T09:33:07Z)
- Learning Prompt with Distribution-Based Feature Replay for Few-Shot Class-Incremental Learning [56.29097276129473]
We propose a simple yet effective framework, named Learning Prompt with Distribution-based Feature Replay (LP-DiF)
To prevent the learnable prompt from forgetting old knowledge in the new session, we propose a pseudo-feature replay approach.
When progressing to a new session, pseudo-features are sampled from old-class distributions combined with training images of the current session to optimize the prompt.
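A rough sketch of distribution-based feature replay is given below. It assumes one diagonal Gaussian per old class and a `prompt_logits_fn` that maps features to class logits through the learnable prompt; both names and the single cross-entropy objective are illustrative rather than the LP-DiF code.

```python
import torch
import torch.nn.functional as F

def replay_step(prompt_logits_fn, old_class_stats, cur_feats, cur_labels, n_pseudo: int = 16):
    """Sketch of pseudo-feature replay: sample features from per-class Gaussians
    estimated in earlier sessions and mix them with current-session features."""
    feats, labels = [cur_feats], [cur_labels]
    for cls, (mu, sigma) in old_class_stats.items():  # (mean, std) per old class
        pseudo = mu + sigma * torch.randn(n_pseudo, mu.numel())  # sample pseudo-features
        feats.append(pseudo)
        labels.append(torch.full((n_pseudo,), cls, dtype=torch.long))
    feats = torch.cat(feats)
    labels = torch.cat(labels)
    return F.cross_entropy(prompt_logits_fn(feats), labels)  # optimizes the prompt only
```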
arXiv Detail & Related papers (2024-01-03T07:59:17Z)
- Class-Incremental Learning using Diffusion Model for Distillation and Replay [5.0977390531431634]
Class-incremental learning aims to learn new classes in an incremental fashion without forgetting the previously learned ones.
We propose the use of a pretrained Stable Diffusion model as a source of additional data for class-incremental learning.
arXiv Detail & Related papers (2023-06-30T11:23:49Z)
- Learnability Lock: Authorized Learnability Control Through Adversarial Invertible Transformations [9.868558660605993]
This paper introduces and investigates a new concept called "learnability lock" for controlling the model's learnability on a specific dataset with a special key.
We propose an adversarial invertible transformation, which can be viewed as an image-to-image mapping, that slightly modifies data samples so that they become "unlearnable" by machine learning models with negligible loss of visual features.
This ensures that learnability can be easily restored with a simple inverse transformation while remaining difficult to detect or reverse-engineer.
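The invertibility property can be illustrated with a toy sketch. The actual transformation in the paper is learned adversarially; here a secret key simply defines a small additive pattern so the perturbation stays visually minor, and only the key holder can undo it. Function names and the epsilon budget are assumptions.

```python
import torch

def lock(images: torch.Tensor, key: torch.Tensor, eps: float = 0.03) -> torch.Tensor:
    """Toy sketch of a keyed, invertible image-to-image transform (not the
    learned adversarial map from the paper). `key` has the shape of one image."""
    return images + eps * torch.tanh(key)             # bounded, visually minor perturbation

def unlock(locked: torch.Tensor, key: torch.Tensor, eps: float = 0.03) -> torch.Tensor:
    """Exact inverse of `lock`: learnability is restored only with the key."""
    return locked - eps * torch.tanh(key)
```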
arXiv Detail & Related papers (2022-02-03T17:38:11Z)
- Long-tail Recognition via Compositional Knowledge Transfer [60.03764547406601]
We introduce a novel strategy for long-tail recognition that addresses the tail classes' few-shot problem.
Our objective is to transfer knowledge acquired from information-rich common classes to semantically similar, and yet data-hungry, rare classes.
Experiments show that our approach can achieve significant performance boosts on rare classes while maintaining robust common class performance.
arXiv Detail & Related papers (2021-12-13T15:48:59Z)
- Distill on the Go: Online knowledge distillation in self-supervised learning [1.1470070927586016]
Recent works have shown that wider and deeper models benefit more from self-supervised learning than smaller models.
We propose Distill-on-the-Go (DoGo), a self-supervised learning paradigm using single-stage online knowledge distillation.
Our results show significant performance gain in the presence of noisy and limited labels.
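A minimal sketch of single-stage online distillation in a self-supervised setting is shown below. It assumes a SimCLR-style setup in which the smaller model matches the larger model's softened pairwise similarity distribution while both keep training; the temperature and loss form are illustrative, not the exact DoGo objective.

```python
import torch
import torch.nn.functional as F

def online_distill_loss(z_teacher: torch.Tensor, z_student: torch.Tensor, T: float = 0.1):
    """Sketch: the student mimics the teacher's similarity distribution over the
    batch. No frozen pretrained teacher; both networks train simultaneously."""
    zt = F.normalize(z_teacher, dim=1)
    zs = F.normalize(z_student, dim=1)
    sim_t = F.softmax(zt @ zt.t() / T, dim=1)          # teacher pairwise similarities
    sim_s = F.log_softmax(zs @ zs.t() / T, dim=1)      # student (log) similarities
    return F.kl_div(sim_s, sim_t.detach(), reduction="batchmean")
```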
arXiv Detail & Related papers (2021-04-20T09:59:23Z)
- Attention Model Enhanced Network for Classification of Breast Cancer Image [54.83246945407568]
AMEN is formulated in a multi-branch fashion with a pixel-wise attention model and a classification submodule.
To focus more on subtle detail information, the sample image is enhanced by the pixel-wise attention map generated by the former branch.
Experiments conducted on three benchmark datasets demonstrate the superiority of the proposed method under various scenarios.
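The pixel-wise enhancement step can be sketched as below: one branch predicts a spatial attention map that re-weights the input image before it is sent to the classification branch. The layer sizes and the `1 + att` enhancement rule are assumptions for illustration, not AMEN's exact architecture.

```python
import torch
import torch.nn as nn

class PixelAttentionEnhancer(nn.Module):
    """Sketch of pixel-wise attention enhancement for a multi-branch classifier."""

    def __init__(self):
        super().__init__()
        self.attention = nn.Sequential(                # attention branch
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        att = self.attention(x)                        # (B, 1, H, W) map in [0, 1]
        return x * (1.0 + att)                         # emphasize subtle detail regions
```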
arXiv Detail & Related papers (2020-10-07T08:44:21Z)
- Guided Variational Autoencoder for Disentanglement Learning [79.02010588207416]
We propose an algorithm, guided variational autoencoder (Guided-VAE), that is able to learn a controllable generative model by performing latent representation disentanglement learning.
We design an unsupervised strategy and a supervised strategy in Guided-VAE and observe enhanced modeling and controlling capability over the vanilla VAE.
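One possible reading of the supervised guidance strategy is sketched below: alongside the usual ELBO terms, a small head on a designated latent subspace is trained to predict a labeled factor, encouraging that subspace to encode it. This is only an interpretation of the idea; the split of latent dimensions, the head, and the loss weighting are assumptions, not Guided-VAE's formulation.

```python
import torch
import torch.nn.functional as F

def guided_vae_loss(recon, x, mu, logvar, z, attr_labels, attr_head, beta: float = 1.0):
    """Sketch: reconstruction + KL terms of a VAE plus a supervised guidance term
    that pushes the first few latent dimensions to encode a labeled attribute."""
    recon_loss = F.mse_loss(recon, x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    guide = F.cross_entropy(attr_head(z[:, :4]), attr_labels)  # guided latent subspace
    return recon_loss + beta * kl + guide
```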
arXiv Detail & Related papers (2020-04-02T20:49:15Z)
- PointAugment: an Auto-Augmentation Framework for Point Cloud Classification [105.27565020399]
PointAugment is a new auto-augmentation framework that automatically optimizes and augments point cloud samples to enrich data diversity when training a classification network.
We formulate a learnable point augmentation function with a shape-wise transformation and a point-wise displacement, and carefully design loss functions to adopt the augmented samples.
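A compact sketch of such a learnable augmentor is given below: a shape-wise 3x3 transform predicted per cloud plus a point-wise displacement predicted per point. The network sizes and pooling choice are illustrative assumptions, not the paper's architecture or its adversarial loss design.

```python
import torch
import torch.nn as nn

class PointAugmentor(nn.Module):
    """Sketch of a learnable point-cloud augmentor with a shape-wise transform
    and a point-wise displacement, both predicted from the input points."""

    def __init__(self, hidden: int = 64):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(3, hidden), nn.ReLU())
        self.shape_head = nn.Linear(hidden, 9)         # shape-wise 3x3 transformation
        self.disp_head = nn.Linear(hidden, 3)          # point-wise displacement

    def forward(self, pts: torch.Tensor) -> torch.Tensor:
        h = self.point_mlp(pts)                        # (B, N, hidden) per-point features
        g = h.max(dim=1).values                        # (B, hidden) per-shape global feature
        M = self.shape_head(g).view(-1, 3, 3)          # one transform per cloud
        D = self.disp_head(h)                          # one displacement per point
        return torch.bmm(pts, M.transpose(1, 2)) + D   # augmented point cloud
```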
arXiv Detail & Related papers (2020-02-25T14:25:01Z)