Progressive Stage-wise Learning for Unsupervised Feature Representation Enhancement
- URL: http://arxiv.org/abs/2106.05554v2
- Date: Fri, 11 Jun 2021 13:50:38 GMT
- Title: Progressive Stage-wise Learning for Unsupervised Feature Representation Enhancement
- Authors: Zefan Li, Chenxi Liu, Alan Yuille, Bingbing Ni, Wenjun Zhang and Wen Gao
- Abstract summary: We propose the Progressive Stage-wise Learning (PSL) framework for unsupervised learning.
Our experiments show that PSL consistently improves results for the leading unsupervised learning methods.
- Score: 83.49553735348577
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised learning methods have recently shown their competitiveness
against supervised training. Typically, these methods use a single objective to
train the entire network. But one distinct advantage of unsupervised over
supervised learning is that the former possesses more variety and freedom in
designing the objective. In this work, we explore new dimensions of
unsupervised learning by proposing the Progressive Stage-wise Learning (PSL)
framework. For a given unsupervised task, we design multilevel tasks and define
different learning stages for the deep network. Early learning stages are
forced to focus on low-level tasks while late stages are guided to extract
deeper information through harder tasks. We discover that by progressive
stage-wise learning, unsupervised feature representation can be effectively
enhanced. Our extensive experiments show that PSL consistently improves results
for the leading unsupervised learning methods.
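To make the stage-wise idea concrete, here is a minimal PyTorch sketch, not the authors' released code: the network is trained in successive stages, each optimizing a harder pretext objective. The specific tasks, stage lengths, and loss functions below are illustrative placeholders, not the paper's actual multilevel tasks.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Small encoder standing in for the deep network being pre-trained.
encoder = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 64),
)

# Placeholder multilevel pretext objectives, ordered easy -> hard; they
# stand in for e.g. reconstruction, rotation prediction, and instance
# discrimination.
def low_level_loss(feats):
    return feats.pow(2).mean()

def mid_level_loss(feats):
    return (feats - feats.roll(1, dims=0)).pow(2).mean()

def high_level_loss(feats):
    return -F.cosine_similarity(feats, feats.roll(1, dims=0)).mean()

# One stage per task; the (loss, epochs) schedule is an assumption.
stages = [(low_level_loss, 2), (mid_level_loss, 2), (high_level_loss, 2)]

optimizer = torch.optim.SGD(encoder.parameters(), lr=0.01)
loader = [torch.randn(8, 3, 32, 32) for _ in range(4)]  # dummy unlabeled data

for stage_loss, num_epochs in stages:
    for _ in range(num_epochs):
        for batch in loader:
            loss = stage_loss(encoder(batch))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```

In the full framework each stage would also govern how the objective supervises the network; the sketch varies only the loss schedule.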
Related papers
- Revisiting Supervision for Continual Representation Learning [1.0030878538350796]
In this work, we reexamine the role of supervision in continual representation learning.
We show that supervised models, when enhanced with a multi-layer perceptron head, can outperform self-supervised models in continual representation learning.
This highlights the importance of the multi-layer perceptron projector in shaping feature transferability across a sequence of tasks in continual learning.
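For illustration, a minimal sketch of such a multi-layer perceptron projector head is shown below; the layer sizes are assumptions, and the backbone output `h` (not the projection `z`) is what transfers across tasks.

```python
import torch.nn as nn

class ProjectedModel(nn.Module):
    """Backbone plus an MLP projector head; sizes are illustrative."""

    def __init__(self, backbone, feat_dim=512, hidden_dim=2048, out_dim=128):
        super().__init__()
        self.backbone = backbone
        self.projector = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.BatchNorm1d(hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, x):
        h = self.backbone(x)   # representation used for transfer
        z = self.projector(h)  # representation the training loss sees
        return h, z
```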
arXiv Detail & Related papers (2023-11-22T11:24:04Z) - Bootstrap Your Own Skills: Learning to Solve New Tasks with Large
Language Model Guidance [66.615355754712]
BOSS learns to accomplish new tasks by performing "skill bootstrapping".
We demonstrate through experiments in realistic household environments that agents trained with our LLM-guided bootstrapping procedure outperform those trained with naive bootstrapping.
arXiv Detail & Related papers (2023-10-16T02:43:47Z) - Domain-Aware Augmentations for Unsupervised Online General Continual
Learning [7.145581090959242]
This paper proposes a novel approach that enhances memory usage for contrastive learning in Unsupervised Online General Continual Learning (UOGCL).
Our proposed method is simple yet effective and achieves state-of-the-art results compared to other unsupervised approaches in all considered setups.
Our domain-aware augmentation procedure can be adapted to other replay-based methods, making it a promising strategy for continual learning.
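The summary does not spell out the paper's exact augmentation procedure; the sketch below only illustrates the general replay-plus-augmentation pattern for a contrastive update, with hypothetical transforms.

```python
import random
import torch

def augment(x):
    # Hypothetical augmentation; a domain-aware version would pick
    # transforms based on the stream's current domain.
    if random.random() < 0.5:
        x = torch.flip(x, dims=[-1])        # horizontal flip
    return x + 0.01 * torch.randn_like(x)   # light noise

def replay_contrastive_batch(stream_batch, buffer):
    # Mix current stream samples with replayed memory samples, then
    # produce two augmented views for a contrastive objective.
    replayed = random.sample(buffer, k=min(len(buffer), len(stream_batch)))
    batch = torch.stack(list(stream_batch) + replayed)
    return augment(batch), augment(batch)

# Usage with dummy data: `buffer` holds previously seen samples.
buffer = [torch.randn(3, 32, 32) for _ in range(10)]
view1, view2 = replay_contrastive_batch(torch.randn(4, 3, 32, 32), buffer)
```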
arXiv Detail & Related papers (2023-09-13T11:45:21Z) - Learning from Guided Play: A Scheduled Hierarchical Approach for
Improving Exploration in Adversarial Imitation Learning [7.51557557629519]
We present Learning from Guided Play (LfGP), a framework in which we leverage expert demonstrations of multiple auxiliary tasks in addition to a main task.
This affords many benefits: learning efficiency is improved for main tasks with challenging bottleneck transitions, expert data becomes reusable between tasks, and transfer learning through the reuse of learned auxiliary task models becomes possible.
arXiv Detail & Related papers (2021-12-16T14:58:08Z) - Rethinking the Representational Continuity: Towards Unsupervised
Continual Learning [45.440192267157094]
Unsupervised continual learning (UCL) aims to learn a sequence of tasks without forgetting the previously acquired knowledge.
We show that reliance on annotated data is not necessary for continual learning.
We propose Lifelong Unsupervised Mixup (LUMP) to alleviate catastrophic forgetting for unsupervised representations.
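LUMP's core operation is mixup-style interpolation between current-task samples and replayed samples from earlier tasks; a minimal sketch follows, where the Beta coefficient and the one-to-one pairing of batches are illustrative assumptions rather than the paper's exact formulation.

```python
import torch

def lump_mixup(current_batch, buffer_batch, alpha=0.4):
    """Interpolate current-task inputs with replayed inputs, mixup-style.

    `alpha` and the uniform pairing are illustrative choices; see the
    LUMP paper for the exact formulation.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    return lam * current_batch + (1.0 - lam) * buffer_batch

# Usage: feed `mixed` through the unsupervised objective in place of the
# raw current batch, smoothing representations across tasks.
mixed = lump_mixup(torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32))
```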
arXiv Detail & Related papers (2021-10-13T18:38:06Z) - Can Semantic Labels Assist Self-Supervised Visual Representation
Learning? [194.1681088693248]
We present a new algorithm named Supervised Contrastive Adjustment in Neighborhood (SCAN).
In a series of downstream tasks, SCAN achieves superior performance compared to previous fully-supervised and self-supervised methods.
Our study reveals that semantic labels are useful in assisting self-supervised methods, opening a new direction for the community.
arXiv Detail & Related papers (2020-11-17T13:25:00Z) - Bilevel Continual Learning [76.50127663309604]
We present a novel continual learning framework named "Bilevel Continual Learning" (BCL).
Our experiments on continual learning benchmarks demonstrate the efficacy of the proposed BCL compared to many state-of-the-art methods.
arXiv Detail & Related papers (2020-07-30T16:00:23Z) - Self-supervised Knowledge Distillation for Few-shot Learning [123.10294801296926]
Few-shot learning is a promising learning paradigm due to its ability to learn out-of-order distributions quickly with only a few samples.
We propose a simple approach to improve the representation capacity of deep neural networks for few-shot learning tasks.
Our experiments show that, even in the first stage, self-supervision can outperform current state-of-the-art methods.
arXiv Detail & Related papers (2020-06-17T11:27:00Z) - Planning to Explore via Self-Supervised World Models [120.31359262226758]
Plan2Explore is a self-supervised reinforcement learning agent.
We present a new approach to self-supervised exploration and fast adaptation to new tasks.
Without any training supervision or task-specific interaction, Plan2Explore outperforms prior self-supervised exploration methods.
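Plan2Explore's exploration signal comes from the disagreement of an ensemble of learned one-step prediction models; below is a simplified sketch of that intrinsic reward, where the latent/action sizes, ensemble size, and plain-MLP members are simplifying assumptions.

```python
import torch
import torch.nn as nn

# Ensemble of one-step latent predictors; variance across members serves
# as the intrinsic exploration reward, in the spirit of Plan2Explore.
latent_dim, action_dim, ensemble_size = 32, 4, 5
ensemble = nn.ModuleList(
    nn.Sequential(nn.Linear(latent_dim + action_dim, 64), nn.ReLU(),
                  nn.Linear(64, latent_dim))
    for _ in range(ensemble_size)
)

def intrinsic_reward(latent, action):
    inp = torch.cat([latent, action], dim=-1)
    preds = torch.stack([m(inp) for m in ensemble])  # (K, B, latent_dim)
    return preds.var(dim=0).mean(dim=-1)             # per-sample disagreement

reward = intrinsic_reward(torch.randn(8, latent_dim), torch.randn(8, action_dim))
```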
arXiv Detail & Related papers (2020-05-12T17:59:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.