Self-supervision of Feature Transformation for Further Improving Supervised Learning
- URL: http://arxiv.org/abs/2106.04922v1
- Date: Wed, 9 Jun 2021 09:06:33 GMT
- Title: Self-supervision of Feature Transformation for Further Improving Supervised Learning
- Authors: Zilin Ding, Yuhang Yang, Xuan Cheng, Xiaomin Wang, Ming Liu
- Abstract summary: We find that features in CNNs can also be used for self-supervision.
In our task, we discard particular regions of the features and then train the model to distinguish between the resulting transformed features.
The original labels are expanded into joint labels via self-supervision of feature transformations.
- Score: 6.508466234920147
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised learning, which benefits from automatically constructing
labels through pre-designed pretext tasks, has recently been applied to
strengthen supervised learning. Since previous self-supervised pretext tasks
are based on the input, they may incur substantial additional training
overhead. In this paper we find that features in CNNs can also be used for
self-supervision. We therefore design a feature-based pretext task which
requires only a small amount of additional training overhead. In our task we
discard particular regions of the features and then train the model to
distinguish between the resulting transformed features. In order to fully apply
our feature-based pretext task in supervised learning, we also propose a novel
learning framework containing multiple classifiers for further improvement. The
original labels are expanded into joint labels via self-supervision of feature
transformations. With more semantic information provided by our self-supervised
tasks, this approach can train CNNs more effectively. Extensive experiments on
various supervised learning tasks demonstrate the accuracy improvement and wide
applicability of our method.
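A minimal sketch of how such a feature-based pretext task could look in PyTorch is given below. This is an illustration assumed from the abstract, not the authors' released code: an intermediate feature map is transformed by zeroing out one of K predefined channel groups, and the network is trained on joint labels of the form original_label * K + transform_id, so that predicting the joint label solves the original task and identifies the applied feature transformation at the same time. The names FeatureDropPretext and drop_group are hypothetical, and the paper's multi-classifier framework is simplified here to a single joint classifier.

    # Illustrative sketch of a feature-based pretext task with joint labels
    # (an assumption based on the abstract, not the paper's implementation).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FeatureDropPretext(nn.Module):
        def __init__(self, backbone, feat_channels, num_classes, num_transforms=4):
            super().__init__()
            self.backbone = backbone              # images -> feature maps (B, C, H, W)
            self.num_transforms = num_transforms  # K predefined feature transformations
            self.head = nn.Linear(feat_channels, num_classes * num_transforms)

        def drop_group(self, feats, k):
            # Discard (zero out) the k-th channel group of the feature map.
            feats = feats.clone()
            group = feats.size(1) // self.num_transforms
            feats[:, k * group:(k + 1) * group] = 0.0
            return feats

        def forward(self, x, y):
            feats = self.backbone(x)                             # (B, C, H, W)
            k = torch.randint(self.num_transforms, (1,)).item()  # pick one transformation
            feats = self.drop_group(feats, k)
            logits = self.head(F.adaptive_avg_pool2d(feats, 1).flatten(1))
            joint_y = y * self.num_transforms + k                # expand to joint labels
            return F.cross_entropy(logits, joint_y), logits

At test time the original class can be recovered from the joint logits, for example by reshaping them to (num_classes, num_transforms) and marginalizing over the transformation dimension; the paper's multi-classifier framework treats this in more detail.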
Related papers
- DETAIL: Task DEmonsTration Attribution for Interpretable In-context Learning [75.68193159293425]
In-context learning (ICL) allows transformer-based language models to learn a specific task with a few "task demonstrations" without updating their parameters.
We propose an influence function-based attribution technique, DETAIL, that addresses the specific characteristics of ICL.
We experimentally demonstrate the wide applicability of DETAIL by showing that attribution scores obtained on white-box models transfer to black-box models and improve model performance.
arXiv Detail & Related papers (2024-05-22T15:52:52Z)
- Improving Transferability of Representations via Augmentation-Aware Self-Supervision [117.15012005163322]
AugSelf is an auxiliary self-supervised loss that learns the difference of augmentation parameters between two randomly augmented samples.
Our intuition is that AugSelf encourages the model to preserve augmentation-aware information in the learned representations, which could be beneficial for their transferability.
AugSelf can easily be incorporated into recent state-of-the-art representation learning methods with a negligible additional training cost.
arXiv Detail & Related papers (2021-11-18T10:43:50Z)
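As a rough sketch of the AugSelf idea summarized above (an assumption based on the summary, not the authors' implementation): two augmented views are encoded, and a small auxiliary head regresses the difference of their known augmentation parameters (for example, normalized crop boxes) from the pair of embeddings; the resulting regression loss is added to the main self-supervised objective. The name AugParamDiffHead and the parameter layout are hypothetical.

    # Illustrative augmentation-aware auxiliary loss in the spirit of AugSelf
    # (assumed from the summary above, not the paper's code).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AugParamDiffHead(nn.Module):
        def __init__(self, feat_dim, param_dim=4):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(2 * feat_dim, feat_dim), nn.ReLU(),
                nn.Linear(feat_dim, param_dim),
            )

        def forward(self, z1, z2, params1, params2):
            # Predict the difference of augmentation parameters from the two embeddings.
            pred = self.mlp(torch.cat([z1, z2], dim=1))
            return F.mse_loss(pred, params1 - params2)

    # Usage (hypothetical): total_loss = ssl_loss + lam * aug_head(z1, z2, p1, p2)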
- Self-Supervised Visual Representation Learning Using Lightweight Architectures [0.0]
In self-supervised learning, a model is trained to solve a pretext task, using a data set whose annotations are created by a machine.
We critically examine the most notable pretext tasks for extracting features from image data.
We study the performance of various self-supervised techniques while keeping all other parameters uniform.
arXiv Detail & Related papers (2021-10-21T14:13:10Z)
- Combining Probabilistic Logic and Deep Learning for Self-Supervised Learning [10.47937328610174]
Self-supervised learning has emerged as a promising direction to alleviate the supervision bottleneck.
We present deep probabilistic logic (DPL), which offers a unifying framework for task-specific self-supervision.
Next, we present self-supervised self-supervision (S4), which adds to DPL the capability to learn new self-supervision automatically.
arXiv Detail & Related papers (2021-07-27T04:25:56Z)
- Self-supervised Feature Enhancement: Applying Internal Pretext Task to Supervised Learning [6.508466234920147]
We show that feature transformations within CNNs can also be regarded as supervisory signals to construct the self-supervised task.
Specifically, we first transform the internal feature maps by discarding different channels, and then define an additional internal pretext task to identify the discarded channels.
CNNs are trained to predict the joint labels generated by the combination of self-supervised labels and original labels.
arXiv Detail & Related papers (2021-06-09T08:59:35Z)
- Improving Few-Shot Learning with Auxiliary Self-Supervised Pretext Tasks [0.0]
Recent work on few-shot learning shows that the quality of learned representations plays an important role in few-shot classification performance.
On the other hand, the goal of self-supervised learning is to recover useful semantic information of the data without the use of class labels.
We exploit the complementarity of both paradigms via a multi-task framework where we leverage recent self-supervised methods as auxiliary tasks.
arXiv Detail & Related papers (2021-01-24T23:21:43Z)
- Parrot: Data-Driven Behavioral Priors for Reinforcement Learning [79.32403825036792]
We propose a method for pre-training behavioral priors that can capture complex input-output relationships observed in successful trials.
We show how this learned prior can be used for rapidly learning new tasks without impeding the RL agent's ability to try out novel behaviors.
arXiv Detail & Related papers (2020-11-19T18:47:40Z)
- Can Semantic Labels Assist Self-Supervised Visual Representation Learning? [194.1681088693248]
We present a new algorithm named Supervised Contrastive Adjustment in Neighborhood (SCAN).
In a series of downstream tasks, SCAN achieves superior performance compared to previous fully-supervised and self-supervised methods.
Our study reveals that semantic labels are useful in assisting self-supervised methods, opening a new direction for the community.
arXiv Detail & Related papers (2020-11-17T13:25:00Z)
- Uniform Priors for Data-Efficient Transfer [65.086680950871]
We show that features that are most transferable have high uniformity in the embedding space.
We evaluate the regularization on its ability to facilitate adaptation to unseen tasks and data.
arXiv Detail & Related papers (2020-06-30T04:39:36Z)
- How Useful is Self-Supervised Pretraining for Visual Tasks? [133.1984299177874]
We evaluate various self-supervised algorithms across a comprehensive array of synthetic datasets and downstream tasks.
Our experiments offer insights into how the utility of self-supervision changes as the number of available labels grows.
arXiv Detail & Related papers (2020-03-31T16:03:22Z)
- Automatic Shortcut Removal for Self-Supervised Representation Learning [39.636691159890354]
In self-supervised visual representation learning, a feature extractor is trained on a "pretext task" for which labels can be generated cheaply, without human annotation.
Much work has gone into identifying such "shortcut" features and hand-designing schemes to reduce their effect.
We show that this assumption holds across common pretext tasks and datasets by training a "lens" network to make small image changes that maximally reduce performance in the pretext task.
arXiv Detail & Related papers (2020-02-20T16:00:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all of the information above) and is not responsible for any consequences of its use.