A Showcase of the Use of Autoencoders in Feature Learning Applications
- URL: http://arxiv.org/abs/2005.04321v1
- Date: Fri, 8 May 2020 23:56:26 GMT
- Title: A Showcase of the Use of Autoencoders in Feature Learning Applications
- Authors: David Charte, Francisco Charte, María J. del Jesus, Francisco Herrera
- Abstract summary: Autoencoders are techniques for data representation learning based on artificial neural networks.
This work presents these applications and provides details on how autoencoders can perform them, including code samples making use of an R package with an easy-to-use interface for autoencoder design and training.
- Score: 11.329636084818778
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autoencoders are techniques for data representation learning based on
artificial neural networks. Unlike other feature learning methods, which may
focus on finding specific transformations of the feature space, autoencoders
can be adapted to fulfill many purposes, such as data visualization, denoising,
anomaly detection and semantic hashing. This work presents these applications
and details how autoencoders can perform them, including code samples that make
use of \texttt{ruta}, an R package with an easy-to-use interface for
autoencoder design and training. Along the way, explanations of how each
learning task is achieved are provided, with the aim of helping the reader
design their own autoencoders for these or other objectives.
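The paper's code samples use the R package \texttt{ruta}; as a language-neutral illustration of the core idea (an encoder compresses the input to a low-dimensional code, a decoder reconstructs it, and both are trained to minimize reconstruction error), here is a minimal sketch in Python with NumPy. The toy data, layer sizes, and learning rate are invented for illustration; this is a plain linear autoencoder, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples in 4 dimensions that actually lie on a 2-D subspace,
# so a 2-unit bottleneck can in principle reconstruct them perfectly.
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 4))

n_in, n_code = 4, 2                       # input width, bottleneck width
W_enc = rng.normal(scale=0.1, size=(n_in, n_code))
W_dec = rng.normal(scale=0.1, size=(n_code, n_in))

lr = 0.01
for _ in range(3000):
    code = X @ W_enc                      # encoder: compress to the bottleneck
    X_hat = code @ W_dec                  # decoder: reconstruct the input
    err = X_hat - X
    # Gradient descent on the mean squared reconstruction error.
    W_dec -= lr * code.T @ err / len(X)
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)

mse = float(np.mean((X @ W_enc @ W_dec - X) ** 2))
print(f"reconstruction MSE after training: {mse:.4f}")
```

The learned `code` matrix is the kind of low-dimensional representation the applications above build on; nonlinear activations and deeper stacks follow the same train-to-reconstruct pattern.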
Related papers
- Creating a Trajectory for Code Writing: Algorithmic Reasoning Tasks [0.923607423080658]
This paper describes instruments and the machine learning models used for validating them.
We have used the data collected in an introductory programming course in the penultimate week of the semester.
Preliminary research suggests ART type instruments can be combined with specific machine learning models to act as an effective learning trajectory.
arXiv Detail & Related papers (2024-04-03T05:07:01Z) - Fusing Climate Data Products using a Spatially Varying Autoencoder [0.5825410941577593]
This research focuses on creating an identifiable and interpretable autoencoder.
The proposed autoencoder utilizes a Bayesian statistical framework.
We demonstrate the utility of the autoencoder by combining information from multiple precipitation products in High Mountain Asia.
arXiv Detail & Related papers (2024-03-12T17:03:07Z) - Triple-Encoders: Representations That Fire Together, Wire Together [51.15206713482718]
Contrastive Learning is a representation learning method that encodes relative distances between utterances into the embedding space via a bi-encoder.
This study introduces triple-encoders, which efficiently compute distributed utterance mixtures from these independently encoded utterances.
We find that triple-encoders lead to a substantial improvement over bi-encoders, and even to better zero-shot generalization than single-vector representation models.
arXiv Detail & Related papers (2024-02-19T18:06:02Z) - Improving Deep Representation Learning via Auxiliary Learnable Target Coding [69.79343510578877]
This paper introduces a novel learnable target coding as an auxiliary regularization of deep representation learning.
Specifically, a margin-based triplet loss and a correlation consistency loss on the proposed target codes are designed to encourage more discriminative representations.
arXiv Detail & Related papers (2023-05-30T01:38:54Z) - A Study of Autoregressive Decoders for Multi-Tasking in Computer Vision [93.90545426665999]
We take a close look at autoregressive decoders for multi-task learning in multimodal computer vision.
A key finding is that a small decoder learned on top of a frozen pretrained encoder works surprisingly well.
It can be seen as teaching a decoder to interact with a pretrained vision model via natural language.
arXiv Detail & Related papers (2023-03-30T13:42:58Z) - KRNet: Towards Efficient Knowledge Replay [50.315451023983805]
Knowledge replay techniques have been widely used in many tasks such as continual learning and continuous domain adaptation.
We propose a novel and efficient knowledge recording network (KRNet) which directly maps an arbitrary sample identity number to the corresponding datum.
Our KRNet requires significantly less storage cost for the latent codes and can be trained without the encoder sub-network.
arXiv Detail & Related papers (2022-05-23T08:34:17Z) - Video Exploration via Video-Specific Autoencoders [60.256055890647595]
We present video-specific autoencoders that enable human-controllable video exploration.
We observe that a simple autoencoder trained on multiple frames of a specific video enables one to perform a large variety of video processing and editing tasks.
arXiv Detail & Related papers (2021-03-31T17:56:13Z) - Training Stacked Denoising Autoencoders for Representation Learning [0.0]
We implement stacked autoencoders, a class of neural networks that are capable of learning powerful representations of high dimensional data.
We describe gradient descent for unsupervised training of autoencoders, as well as a novel genetic algorithm based approach that makes use of gradient information.
arXiv Detail & Related papers (2021-02-16T08:18:22Z) - Feature Learning for Accelerometer based Gait Recognition [0.0]
Autoencoders are very close to discriminative end-to-end models with regard to their feature learning ability.
Fully convolutional models are able to learn good feature representations, regardless of the training strategy.
arXiv Detail & Related papers (2020-07-31T10:58:01Z) - An analysis on the use of autoencoders for representation learning:
fundamentals, learning task case studies, explainability and challenges [11.329636084818778]
In many machine learning tasks, learning a good representation of the data can be the key to building a well-performing solution.
We present a series of learning tasks: data embedding for visualization, image denoising, semantic hashing, detection of abnormal behaviors and instance generation.
A solution is proposed for each task employing autoencoders as the only learning method.
arXiv Detail & Related papers (2020-05-21T08:41:57Z) - Learning Autoencoders with Relational Regularization [89.53065887608088]
A new framework is proposed for learning autoencoders of data distributions.
We minimize the discrepancy between the model and target distributions with a relational regularization.
We implement the framework with two scalable algorithms, making it applicable for both probabilistic and deterministic autoencoders.
arXiv Detail & Related papers (2020-02-07T17:27:30Z)
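Denoising, one of the applications named in the abstract (and the subject of the stacked denoising autoencoder entry above), amounts to encoding a corrupted input while training against the clean one. A minimal NumPy sketch of that objective follows; the toy data, noise level, and linear layers are all invented for illustration and stand in for the real models in these papers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy clean data lying on a 2-D subspace of a 6-D space.
X = rng.normal(size=(300, 2)) @ rng.normal(size=(2, 6))

n_in, n_code, noise_std, lr = 6, 2, 0.3, 0.01
W_enc = rng.normal(scale=0.1, size=(n_in, n_code))
W_dec = rng.normal(scale=0.1, size=(n_code, n_in))

for _ in range(3000):
    # Denoising objective: corrupt the input, reconstruct the CLEAN target.
    X_noisy = X + rng.normal(scale=noise_std, size=X.shape)
    code = X_noisy @ W_enc
    err = code @ W_dec - X
    W_dec -= lr * code.T @ err / len(X)
    W_enc -= lr * X_noisy.T @ (err @ W_dec.T) / len(X)

# A freshly corrupted input should come out closer to the clean signal
# than it went in.
X_test = X + rng.normal(scale=noise_std, size=X.shape)
mse_in = float(np.mean((X_test - X) ** 2))                 # raw noise level
mse_out = float(np.mean((X_test @ W_enc @ W_dec - X) ** 2))
print(f"noise in: {mse_in:.3f}, after denoising: {mse_out:.3f}")
```

Because the corruption is resampled every step, the network cannot memorize the noise and is pushed toward the underlying signal subspace, which is what makes the same recipe useful for anomaly detection as well.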
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.