Laplacian Denoising Autoencoder
- URL: http://arxiv.org/abs/2003.13623v1
- Date: Mon, 30 Mar 2020 16:52:39 GMT
- Title: Laplacian Denoising Autoencoder
- Authors: Jianbo Jiao, Linchao Bao, Yunchao Wei, Shengfeng He, Honghui Shi,
Rynson Lau and Thomas S. Huang
- Abstract summary: We propose to learn data representations with a novel type of denoising autoencoder.
The noisy input data is generated by corrupting latent clean data in the gradient domain.
Experiments on several visual benchmarks demonstrate that better representations can be learned with the proposed approach.
- Score: 114.21219514831343
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While deep neural networks have been shown to perform remarkably well in many
machine learning tasks, labeling a large amount of ground truth data for
supervised training is usually very costly to scale. Therefore, learning robust
representations with unlabeled data is critical in relieving human effort and
vital for many downstream tasks. Recent advances in unsupervised and
self-supervised learning approaches for visual data have benefited greatly from
domain knowledge. Here we are interested in a more generic unsupervised
learning framework that can be easily generalized to other domains. In this
paper, we propose to learn data representations with a novel type of denoising
autoencoder, where the noisy input data is generated by corrupting latent clean
data in the gradient domain. This can be naturally generalized to span multiple
scales with a Laplacian pyramid representation of the input data. In this way,
the agent learns more robust representations that exploit the underlying data
structures across multiple scales. Experiments on several visual benchmarks
demonstrate that better representations can be learned with the proposed
approach, compared to its counterpart with single-scale corruption and other
approaches. Furthermore, we also demonstrate that the learned representations
perform well when transferring to other downstream vision tasks.
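The corruption scheme described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: function names such as `corrupt_in_gradient_domain` are hypothetical, and a simple box-filter downsample stands in for a proper Gaussian pyramid step. The idea is to decompose the input into a Laplacian pyramid, inject noise into the band-pass (gradient-like) levels, and reconstruct the corrupted input for the autoencoder to denoise.

```python
import numpy as np

def _blur_downsample(img):
    # 2x box-filter downsample (a crude stand-in for a Gaussian pyramid step);
    # assumes even spatial dimensions
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def _upsample(img, shape):
    # nearest-neighbor upsample back to `shape`
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels=3):
    # band-pass levels plus a low-frequency residual at the coarsest scale
    pyr, current = [], img
    for _ in range(levels - 1):
        down = _blur_downsample(current)
        pyr.append(current - _upsample(down, current.shape))
        current = down
    pyr.append(current)
    return pyr

def reconstruct(pyr):
    # invert the pyramid: upsample the residual and add back each band-pass level
    img = pyr[-1]
    for lap in reversed(pyr[:-1]):
        img = _upsample(img, lap.shape) + lap
    return img

def corrupt_in_gradient_domain(img, noise_std=0.1, rng=None):
    # add Gaussian noise to the band-pass levels only, leaving the
    # low-frequency residual intact, then reconstruct the noisy input
    rng = rng or np.random.default_rng(0)
    pyr = laplacian_pyramid(img)
    noisy = [lvl + rng.normal(0.0, noise_std, lvl.shape) for lvl in pyr[:-1]]
    return reconstruct(noisy + [pyr[-1]])
```

The multi-scale aspect falls out naturally: noise at coarse pyramid levels perturbs large structures while noise at fine levels perturbs edges and texture, so the denoising objective forces the encoder to model structure across scales.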
Related papers
- Palm up: Playing in the Latent Manifold for Unsupervised Pretraining [31.92145741769497]
We propose an algorithm that exhibits exploratory behavior while utilizing large, diverse datasets.
Our key idea is to leverage deep generative models that are pretrained on static datasets and introduce a dynamic model in the latent space.
We then employ an unsupervised reinforcement learning algorithm to explore in this environment and perform unsupervised representation learning on the collected data.
arXiv Detail & Related papers (2022-10-19T22:26:12Z)
- Evaluating the Label Efficiency of Contrastive Self-Supervised Learning for Multi-Resolution Satellite Imagery [0.0]
Self-supervised learning has been applied in the remote sensing domain to exploit readily-available unlabeled data.
In this paper, we study self-supervised visual representation learning through the lens of label efficiency.
arXiv Detail & Related papers (2022-10-13T06:54:13Z)
- Self-supervised Audiovisual Representation Learning for Remote Sensing Data [96.23611272637943]
We propose a self-supervised approach for pre-training deep neural networks in remote sensing.
This is done in a completely label-free manner by exploiting the correspondence between geo-tagged audio recordings and remote sensing imagery.
We show that our approach outperforms existing pre-training strategies for remote sensing imagery.
arXiv Detail & Related papers (2021-08-02T07:50:50Z)
- Reasoning-Modulated Representations [85.08205744191078]
We study a common setting where our task is not purely opaque.
Our approach paves the way for a new class of data-efficient representation learning.
arXiv Detail & Related papers (2021-07-19T13:57:13Z)
- Pretrained Encoders are All You Need [23.171881382391074]
Self-supervised models have shown successful transfer to diverse settings.
We also explore fine-tuning pretrained representations with self-supervised techniques.
Our results show that pretrained representations are at par with state-of-the-art self-supervised methods trained on domain-specific data.
arXiv Detail & Related papers (2021-06-09T15:27:25Z)
- Curious Representation Learning for Embodied Intelligence [81.21764276106924]
Self-supervised representation learning has achieved remarkable success in recent years.
Yet to build truly intelligent agents, we must construct representation learning algorithms that can learn from environments.
We propose a framework, curious representation learning, which jointly learns a reinforcement learning policy and a visual representation model.
arXiv Detail & Related papers (2021-05-03T17:59:20Z)
- Category-Learning with Context-Augmented Autoencoder [63.05016513788047]
Finding an interpretable non-redundant representation of real-world data is one of the key problems in Machine Learning.
We propose a novel method of using data augmentations when training autoencoders.
We train a Variational Autoencoder in such a way that its transformation outcome is predictable by an auxiliary network.
arXiv Detail & Related papers (2020-10-10T14:04:44Z)
- Sense and Learn: Self-Supervision for Omnipresent Sensors [9.442811508809994]
We present a framework named Sense and Learn for representation or feature learning from raw sensory data.
It consists of several auxiliary tasks that can learn high-level and broadly useful features entirely from unannotated data without any human involvement in the tedious labeling process.
Our methodology achieves results competitive with supervised approaches and, in most cases, closes the gap by fine-tuning a network while learning the downstream tasks.
arXiv Detail & Related papers (2020-09-28T11:57:43Z)
- Relation-Guided Representation Learning [53.60351496449232]
We propose a new representation learning method that explicitly models and leverages sample relations.
Our framework well preserves the relations between samples.
By seeking to embed samples into a subspace, we show that our method can address the large-scale and out-of-sample problems.
arXiv Detail & Related papers (2020-07-11T10:57:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.