Unsupervisedly Learned Representations: Should the Quest be Over?
- URL: http://arxiv.org/abs/2001.07495v5
- Date: Thu, 26 Sep 2024 11:42:25 GMT
- Title: Unsupervisedly Learned Representations: Should the Quest be Over?
- Authors: Daniel N. Nissani
- Abstract summary: We demonstrate that Reinforcement Learning can learn representations which achieve the same accuracy as that of animals.
The corollary of these observations is that further search for Unsupervised Learning competitive paradigms which may be trained in simulated environments may be futile.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: After four decades of research there still exists a Classification accuracy gap of about 20% between our best Unsupervisedly Learned Representations methods and the accuracy rates achieved by intelligent animals. It thus may well be that we are looking in the wrong direction. A possible solution to this puzzle is presented. We demonstrate that Reinforcement Learning can learn representations which achieve the same accuracy as that of animals. Our main modest contribution lies in the observations that: a. when applied to a real world environment Reinforcement Learning does not require labels, and thus may be legitimately considered as Unsupervised Learning, and b. in contrast, when Reinforcement Learning is applied in a simulated environment it does inherently require labels and should thus generally be considered as Supervised Learning. The corollary of these observations is that further search for Unsupervised Learning competitive paradigms which may be trained in simulated environments may be futile.
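As a rough illustration of the labels argument (a hedged sketch, not code from the paper; SimulatedEnv, RealWorldEnv, and their members are hypothetical), the reward in a simulated classification-style task has to be computed from stored ground-truth labels, whereas a real-world agent only receives whatever reward signal the environment itself emits:

```python
# Illustrative sketch only: contrasts where supervision enters the loop.

class SimulatedEnv:
    """Hypothetical simulated environment: its reward function needs labels."""
    def __init__(self, samples, labels):
        self.samples, self.labels = samples, labels
        self.t = 0

    def reset(self):
        self.t = 0
        return self.samples[self.t]

    def step(self, predicted_class):
        # Supervision enters here: the reward is defined via a stored label.
        reward = 1.0 if predicted_class == self.labels[self.t] else 0.0
        self.t = (self.t + 1) % len(self.samples)
        return self.samples[self.t], reward


class RealWorldEnv:
    """Hypothetical real-world interface: the reward is observed, not computed."""
    def __init__(self, sensors, actuators):
        self.sensors, self.actuators = sensors, actuators

    def step(self, action):
        self.actuators.apply(action)
        # No label is ever consulted; the environment simply responds.
        return self.sensors.read_observation(), self.sensors.read_reward()
```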
Related papers
- Unsupervised Representation Learning in Partially Observable Atari Games [10.299850596045395]
State representation learning aims to capture latent factors of an environment.
Contrastive methods have performed better than generative models in previous state representation learning research.
In this article, we create an unsupervised state representation learning scheme for partially observable states.
arXiv Detail & Related papers (2023-03-13T19:34:10Z) - Contrastive Learning for OOD in Object detection [0.0]
Contrastive learning is commonly applied to self-supervised learning.
The reliance on large batch sizes and memory banks has made contrastive learning difficult and slow to train.
We show that our results are comparable to Supervised Contrastive Learning for image classification and object detection.
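For context, contrastive self-supervised methods of this kind typically optimise an InfoNCE-style objective over pairs of augmented views. The following is a minimal generic sketch of such a loss, not the specific OOD method of this paper; info_nce_loss and the encoder usage in the comment are illustrative assumptions:

```python
import numpy as np

def info_nce_loss(z_a, z_b, temperature=0.1):
    """Generic InfoNCE-style contrastive loss over a batch of paired views.

    z_a, z_b: (N, D) arrays of embeddings of two augmented views of the
    same N images; row i of z_a and row i of z_b form the positive pair.
    """
    # L2-normalise so the dot product is a cosine similarity.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature           # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The positive pairs sit on the diagonal; maximise their log-probability.
    return -np.mean(np.diag(log_probs))

# Usage (illustrative): z_a, z_b = encoder(aug1(x)), encoder(aug2(x))
#                       loss = info_nce_loss(z_a, z_b)
```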
arXiv Detail & Related papers (2022-08-12T01:51:50Z) - Chaos is a Ladder: A New Theoretical Understanding of Contrastive Learning via Augmentation Overlap [64.60460828425502]
We propose a new guarantee on the downstream performance of contrastive learning.
Our new theory hinges on the insight that the support of different intra-class samples will become more overlapped under aggressive data augmentations.
We propose an unsupervised model selection metric ARC that aligns well with downstream accuracy.
arXiv Detail & Related papers (2022-03-25T05:36:26Z) - Autonomous Reinforcement Learning: Formalism and Benchmarking [106.25788536376007]
Real-world embodied learning, such as that performed by humans and animals, is situated in a continual, non-episodic world.
Common benchmark tasks in RL are episodic, with the environment resetting between trials to provide the agent with multiple attempts.
This discrepancy presents a major challenge when attempting to take RL algorithms developed for episodic simulated environments and run them on real-world platforms.
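A minimal sketch of that discrepancy, assuming a Gym-style env.reset()/env.step() interface and placeholder agent methods (act, update), contrasts the episodic benchmark loop with a continual, reset-free one:

```python
def episodic_training(agent, env, episodes=100):
    """Typical benchmark setup: the environment is reset between trials."""
    for _ in range(episodes):
        obs = env.reset()            # free re-attempts, available only in simulation
        done = False
        while not done:
            obs, reward, done, _ = env.step(agent.act(obs))
            agent.update(obs, reward, done)

def non_episodic_training(agent, env, steps=100_000):
    """Real-world setting: one continual stream of experience, no resets."""
    obs = env.reset()                # a single initialisation, then never again
    for _ in range(steps):
        obs, reward, _, _ = env.step(agent.act(obs))
        agent.update(obs, reward, done=False)
```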
arXiv Detail & Related papers (2021-12-17T16:28:06Z) - Learning more skills through optimistic exploration [5.973112138143177]
Unsupervised skill learning objectives allow agents to learn rich repertoires of behavior in the absence of extrinsic rewards.
An inherent exploration problem lingers: when a novel state is actually encountered, the discriminator will not have seen enough training data to produce accurate and confident skill classifications.
We derive an information gain auxiliary objective that involves training an ensemble of discriminators and rewarding the policy for their disagreement.
Our objective directly estimates the uncertainty that comes from the discriminator not having seen enough training examples, thus providing an intrinsic reward more tailored to the true objective.
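A hedged sketch of how such ensemble disagreement can be turned into an intrinsic reward follows; the function name and the information-gain style estimate are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def disagreement_bonus(skill_probs):
    """Intrinsic reward from the disagreement of an ensemble of skill discriminators.

    skill_probs: (K, S) array; row k is discriminator k's predicted
    distribution over S skills for the current state.
    """
    def entropy(p):
        return -np.sum(p * np.log(p + 1e-8), axis=-1)

    mean_prediction = skill_probs.mean(axis=0)
    # High when the ensemble members disagree: the entropy of the averaged
    # prediction exceeds the average entropy of the individual predictions.
    return entropy(mean_prediction) - entropy(skill_probs).mean()

# Usage (illustrative): add disagreement_bonus(probs) to the usual
# skill-discrimination reward, so states the discriminators are unsure
# about are actively sought out.
```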
arXiv Detail & Related papers (2021-07-29T17:58:04Z) - Curious Representation Learning for Embodied Intelligence [81.21764276106924]
Self-supervised representation learning has achieved remarkable success in recent years.
Yet to build truly intelligent agents, we must construct representation learning algorithms that can learn from environments.
We propose a framework, curious representation learning, which jointly learns a reinforcement learning policy and a visual representation model.
arXiv Detail & Related papers (2021-05-03T17:59:20Z) - Can Semantic Labels Assist Self-Supervised Visual Representation Learning? [194.1681088693248]
We present a new algorithm named Supervised Contrastive Adjustment in Neighborhood (SCAN).
In a series of downstream tasks, SCAN achieves superior performance compared to previous fully-supervised and self-supervised methods.
Our study reveals that semantic labels are useful in assisting self-supervised methods, opening a new direction for the community.
arXiv Detail & Related papers (2020-11-17T13:25:00Z) - A Sober Look at the Unsupervised Learning of Disentangled Representations and their Evaluation [63.042651834453544]
We show that the unsupervised learning of disentangled representations is impossible without inductive biases on both the models and the data.
We observe that while the different methods successfully enforce properties "encouraged" by the corresponding losses, well-disentangled models seemingly cannot be identified without supervision.
Our results suggest that future work on disentanglement learning should be explicit about the role of inductive biases and (implicit) supervision.
arXiv Detail & Related papers (2020-10-27T10:17:15Z) - Rethinking Class Relations: Absolute-relative Supervised and Unsupervised Few-shot Learning [157.62595449130973]
We study the fundamental problem of simplistic class modeling in current few-shot learning methods.
We propose a novel Absolute-relative Learning paradigm to fully take advantage of label information to refine the image representations.
arXiv Detail & Related papers (2020-01-12T12:25:46Z)