Empirical Evaluation and Theoretical Analysis for Representation
Learning: A Survey
- URL: http://arxiv.org/abs/2204.08226v1
- Date: Mon, 18 Apr 2022 09:18:47 GMT
- Title: Empirical Evaluation and Theoretical Analysis for Representation
Learning: A Survey
- Authors: Kento Nozawa, Issei Sato
- Abstract summary: Representation learning enables us to automatically extract generic feature representations from a dataset to solve another machine learning task.
Recently, feature representations extracted by a representation learning algorithm, combined with a simple predictor, have exhibited state-of-the-art performance on several machine learning tasks.
- Score: 25.5633960013493
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Representation learning enables us to automatically extract generic feature
representations from a dataset to solve another machine learning task.
Recently, feature representations extracted by a representation learning
algorithm, combined with a simple predictor, have exhibited state-of-the-art
performance on several machine learning tasks. Despite this remarkable
progress, the flexibility of representation learning means that there are
various ways to evaluate representation learning algorithms depending on the
application. To understand the current state of representation learning, we
review evaluation methods for representation learning algorithms together with
theoretical analyses. On the basis of our evaluation survey, we also discuss
future directions for representation learning. Note that this survey is the
extended version of Nozawa and Sato (2022).
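As a concrete illustration of the "representation learner plus simple predictor" setup the abstract describes, here is a minimal linear-evaluation sketch: freeze an encoder and score a linear probe on its features. The `encode` stand-in is hypothetical; in practice any pretrained network takes its place.

```python
# Minimal sketch of the linear-evaluation protocol: freeze a (pretrained)
# encoder, then fit a simple linear predictor on its features.
import numpy as np
from sklearn.linear_model import LogisticRegression

def encode(x: np.ndarray) -> np.ndarray:
    # Placeholder featurizer; in practice this is a frozen pretrained network.
    rng = np.random.default_rng(0)
    w = rng.standard_normal((x.shape[1], 64))
    return np.maximum(x @ w, 0.0)  # one fixed random ReLU layer as a stand-in

def linear_evaluation(x_train, y_train, x_test, y_test) -> float:
    z_train, z_test = encode(x_train), encode(x_test)  # frozen features
    probe = LogisticRegression(max_iter=1000).fit(z_train, y_train)
    return probe.score(z_test, y_test)  # downstream accuracy as a quality proxy
```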
Related papers
- Provable Representation with Efficient Planning for Partial Observable Reinforcement Learning [74.67655210734338]
In most real-world reinforcement learning applications, state information is only partially observable, which breaks the Markov decision process assumption.
We develop a representation-based perspective that leads to a coherent framework and tractable algorithmic approach for practical reinforcement learning from partial observations.
We empirically demonstrate that the proposed algorithm can surpass state-of-the-art performance with partial observations across various benchmarks.
arXiv Detail & Related papers (2023-11-20T23:56:58Z)
- A Quantitative Approach to Predicting Representational Learning and Performance in Neural Networks [5.544128024203989]
A key property of neural networks is how they learn to represent and manipulate input information in order to solve a task.
We introduce a new pseudo-kernel based tool for analyzing and predicting learned representations.
arXiv Detail & Related papers (2023-07-14T18:39:04Z)
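The entry's pseudo-kernel tool is specific to that paper and is not reproduced here; as a hedged point of reference, the sketch below computes linear CKA, a standard Gram-matrix (kernel-style) similarity between two learned representations of the same inputs, the kind of analysis such tools build on.

```python
# Linear CKA: kernel-style similarity between two representation matrices.
import numpy as np

def linear_cka(x: np.ndarray, y: np.ndarray) -> float:
    """x, y: (n_samples, n_features) activations for the same inputs."""
    x = x - x.mean(axis=0)  # center each feature
    y = y - y.mean(axis=0)
    hsic = np.linalg.norm(y.T @ x, "fro") ** 2
    return hsic / (np.linalg.norm(x.T @ x, "fro") * np.linalg.norm(y.T @ y, "fro"))
```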
- Bootstrapped Representations in Reinforcement Learning [44.49675960752777]
In reinforcement learning (RL), state representations are key to dealing with large or continuous state spaces.
We provide a theoretical characterization of the state representation learnt by temporal difference learning.
We describe the efficacy of these representations for policy evaluation, and use our theoretical analysis to design new auxiliary learning rules.
arXiv Detail & Related papers (2023-06-16T20:14:07Z)
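For concreteness, the sketch below shows TD(0) value estimation with linear function approximation, the setting in which such characterizations of TD-learnt representations are typically stated; `phi_s` and `phi_next` are assumed feature vectors for the current and next state.

```python
# One TD(0) update of linear value weights w: nudge the value estimate of
# the current state toward the bootstrapped target r + gamma * V(s').
import numpy as np

def td0_update(w, phi_s, phi_next, reward, gamma=0.99, lr=0.1):
    """Temporal-difference update with linear function approximation."""
    td_error = reward + gamma * phi_next @ w - phi_s @ w  # bootstrapped target
    return w + lr * td_error * phi_s
```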
- Understanding Self-Predictive Learning for Reinforcement Learning [61.62067048348786]
We study the learning dynamics of self-predictive learning for reinforcement learning.
We propose a novel self-predictive algorithm that learns two representations simultaneously.
arXiv Detail & Related papers (2022-12-06T20:43:37Z)
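The two-representations idea resembles the online/target encoder pairs used in self-predictive objectives; a minimal sketch under that assumption follows. The paper's exact algorithm is not reproduced, and the names and EMA rate are illustrative.

```python
# Self-predictive sketch: an online encoder predicts the output of a slowly
# updated target encoder for the next observation.
import numpy as np

def ema_update(target_params, online_params, tau=0.005):
    """Move the target encoder slowly toward the online encoder."""
    return {k: (1 - tau) * target_params[k] + tau * online_params[k]
            for k in target_params}

def self_predictive_loss(predicted_next_repr, target_next_repr):
    """L2 distance between the online prediction and the target representation."""
    return float(np.mean((predicted_next_repr - target_next_repr) ** 2))
```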
- An Empirical Investigation of Representation Learning for Imitation [76.48784376425911]
Recent work in vision, reinforcement learning, and NLP has shown that auxiliary representation learning objectives can reduce the need for large amounts of expensive, task-specific data.
We propose a modular framework for constructing representation learning algorithms, then use our framework to evaluate the utility of representation learning for imitation.
arXiv Detail & Related papers (2022-05-16T11:23:42Z)
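Illustrative only: one common way to evaluate representation learning for imitation is to freeze a pretrained encoder and fit a behavioral-cloning head on expert data. The `encode` argument and the linear policy head are assumptions, not the paper's modular framework.

```python
# Behavioral cloning on frozen features: fit a simple policy head on expert
# (observation, action) pairs; actions are assumed to be discrete labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

def behavioral_cloning(encode, expert_obs, expert_actions):
    """Fit a policy head on frozen features of expert observations."""
    features = encode(expert_obs)  # frozen pretrained representation
    policy = LogisticRegression(max_iter=1000).fit(features, expert_actions)
    return policy  # policy.predict(encode(obs)) imitates the expert
```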
- Desiderata for Representation Learning: A Causal Perspective [104.3711759578494]
We take a causal perspective on representation learning, formalizing non-spuriousness and efficiency (in supervised representation learning) and disentanglement (in unsupervised representation learning).
This yields computable metrics that can be used to assess the degree to which representations satisfy the desiderata of interest and to learn non-spurious and disentangled representations from single observational datasets.
arXiv Detail & Related papers (2021-09-08T17:33:54Z)
- Co$^2$L: Contrastive Continual Learning [69.46643497220586]
Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that can be transferred better to unseen tasks.
We propose a rehearsal-based continual learning algorithm that focuses on continually learning and maintaining transferable representations.
arXiv Detail & Related papers (2021-06-28T06:14:38Z)
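Co$^2$L builds on contrastive self-supervised learning; below is a minimal NumPy sketch of the generic InfoNCE/NT-Xent loss such methods use. Co$^2$L's asymmetric, rehearsal-aware variant is not reproduced here.

```python
# Generic InfoNCE loss over two views of the same batch: matched rows are
# positives, all other rows in the batch serve as negatives.
import numpy as np

def info_nce(z1, z2, temperature=0.5):
    """z1, z2: (n, d) L2-normalised embeddings of two views of a batch."""
    logits = (z1 @ z2.T) / temperature           # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))   # cross-entropy on positives
```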
- Multivariate Business Process Representation Learning utilizing Gramian Angular Fields and Convolutional Neural Networks [0.0]
Learning meaningful representations of data is an important aspect of machine learning.
For predictive process analytics, it is essential to have all explanatory characteristics of a process instance available.
We propose a novel approach for representation learning of business process instances.
arXiv Detail & Related papers (2021-06-15T10:21:14Z)
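Gramian Angular Fields, named in the entry's title, are a standard transform, so this sketch is close to the usual preprocessing: rescale the series to [-1, 1], map values to angles phi = arccos(x), and form the pairwise matrix cos(phi_i + phi_j), which a CNN can then consume as an image.

```python
# Gramian Angular (Summation) Field: encode a 1-D series as a 2-D image.
import numpy as np

def gramian_angular_field(series: np.ndarray) -> np.ndarray:
    lo, hi = series.min(), series.max()
    x = 2 * (series - lo) / (hi - lo) - 1.0      # min-max scale into [-1, 1]
    phi = np.arccos(np.clip(x, -1.0, 1.0))       # polar-coordinate angles
    return np.cos(phi[:, None] + phi[None, :])   # pairwise angular sums
```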
- Instance-Based Learning of Span Representations: A Case Study through Named Entity Recognition [48.06319154279427]
We present a method of instance-based learning that learns similarities between spans.
Our method enables us to build models that have high interpretability without sacrificing performance.
arXiv Detail & Related papers (2020-04-29T23:32:42Z)
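A hedged sketch of the instance-based idea above: represent each span (here by mean-pooling token vectors, an assumption; the paper's span encoder may differ) and label a query span by its most similar training span, which keeps every prediction traceable to a concrete training instance.

```python
# Instance-based span classification via nearest-neighbour similarity.
import numpy as np

def span_repr(token_vecs: np.ndarray, start: int, end: int) -> np.ndarray:
    """Mean-pool token vectors in [start, end) into one span vector."""
    return token_vecs[start:end].mean(axis=0)

def nearest_label(query: np.ndarray, train_spans: np.ndarray, train_labels):
    """Assign the label of the most similar (cosine) training span."""
    sims = (train_spans @ query) / (
        np.linalg.norm(train_spans, axis=1) * np.linalg.norm(query) + 1e-12)
    return train_labels[int(np.argmax(sims))]
```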
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content and is not responsible for any consequences arising from its use.