On Contrastive Representations of Stochastic Processes
- URL: http://arxiv.org/abs/2106.10052v1
- Date: Fri, 18 Jun 2021 11:00:24 GMT
- Title: On Contrastive Representations of Stochastic Processes
- Authors: Emile Mathieu, Adam Foster, Yee Whye Teh
- Abstract summary: Learning representations of stochastic processes is an emerging problem in machine learning.
We show that our methods are effective for learning representations of periodic functions, 3D objects and dynamical processes.
- Score: 53.21653429290478
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning representations of stochastic processes is an emerging problem in
machine learning with applications from meta-learning to physical object models
to time series. Typical methods rely on exact reconstruction of observations,
but this approach breaks down as observations become high-dimensional or noise
distributions become complex. To address this, we propose a unifying framework
for learning contrastive representations of stochastic processes (CRESP) that
does away with exact reconstruction. We dissect potential use cases for
stochastic process representations, and propose methods that accommodate each.
Empirically, we show that our methods are effective for learning
representations of periodic functions, 3D objects and dynamical processes. Our
methods tolerate noisy high-dimensional observations better than traditional
approaches, and the learned representations transfer to a range of downstream
tasks.
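
To make the contrastive idea concrete, here is a minimal sketch of an InfoNCE-style objective over sets of (x, y) observations, assuming a PyTorch-style setup; the module and function names (ProcessEncoder, TargetHead, contrastive_process_loss) are illustrative and not the authors' code. Each process in a batch is encoded permutation-invariantly, its own target observation serves as the positive, and the other processes' targets serve as negatives, so no observation is ever reconstructed.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProcessEncoder(nn.Module):
    """Permutation-invariant encoder: a set of (x, y) observations -> representation r."""
    def __init__(self, x_dim=1, y_dim=1, r_dim=64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(x_dim + y_dim, 128), nn.ReLU(),
                                 nn.Linear(128, r_dim))

    def forward(self, x, y):                     # x: [B, N, x_dim], y: [B, N, y_dim]
        return self.phi(torch.cat([x, y], dim=-1)).mean(dim=1)   # mean over the set

class TargetHead(nn.Module):
    """Embeds a candidate target observation (x*, y*) for scoring against r."""
    def __init__(self, x_dim=1, y_dim=1, r_dim=64):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(x_dim + y_dim, 128), nn.ReLU(),
                               nn.Linear(128, r_dim))

    def forward(self, x, y):                     # x: [B, x_dim], y: [B, y_dim]
        return self.g(torch.cat([x, y], dim=-1))

def contrastive_process_loss(enc, head, ctx_x, ctx_y, tgt_x, tgt_y, temp=0.1):
    """InfoNCE over the batch: each process is the positive for its own target
    and a negative for every other process, so no observation is reconstructed."""
    r = F.normalize(enc(ctx_x, ctx_y), dim=-1)   # [B, r_dim]
    z = F.normalize(head(tgt_x, tgt_y), dim=-1)  # [B, r_dim]
    logits = r @ z.t() / temp                    # [B, B] similarity matrix
    labels = torch.arange(r.size(0))             # diagonal entries are the positives
    return F.cross_entropy(logits, labels)
```

Because the other processes in the batch supply the negatives, the objective never needs an explicit likelihood over high-dimensional or noisy observations.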
Related papers
- Learning minimal representations of stochastic processes with variational autoencoders [52.99137594502433]
We introduce an unsupervised machine learning approach to determine the minimal set of parameters required to describe a process.
Our approach enables the autonomous discovery of unknown parameters describing these processes.
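
As a rough illustration of how a variational bottleneck can expose a minimal parameter count (a generic sketch, not the paper's architecture; TrajectoryVAE and the 0.01 KL threshold are made up for the example), one can train a small VAE on simulated trajectories and count the latent dimensions whose per-dimension KL stays away from zero:

```python
import torch
import torch.nn as nn

class TrajectoryVAE(nn.Module):
    """Small VAE over fixed-length trajectories with a diagonal Gaussian posterior."""
    def __init__(self, traj_len=100, latent_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(traj_len, 128), nn.ReLU(),
                                 nn.Linear(128, 2 * latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                 nn.Linear(128, traj_len))

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()     # reparameterization
        kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1)       # per-dimension KL
        return self.dec(z), kl

model = TrajectoryVAE()
x = torch.randn(32, 100)                         # stand-in batch of simulated trajectories
recon, kl = model(x)
loss = ((recon - x) ** 2).mean() + kl.sum(-1).mean()
active_dims = (kl.mean(0) > 0.01).sum()          # crude count of informative latent dimensions
```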
arXiv Detail & Related papers (2023-07-21T14:25:06Z)
- Learning invariant representations of time-homogeneous stochastic dynamical systems [27.127773672738535]
We study the problem of learning a representation of the state that faithfully captures its dynamics.
This is instrumental to learning the transfer operator or the generator of the system.
We show that the search for a good representation can be cast as an optimization problem over neural networks.
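
One common way to cast this search as an optimization problem over neural networks, shown only as a simplified stand-in for the paper's actual objective, is to learn features phi together with a finite-dimensional linear map T approximating the transfer operator, so that phi(x_{t+1}) ≈ T phi(x_t); the variance penalty below is an ad hoc guard against the trivial constant solution:

```python
import torch
import torch.nn as nn

phi = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 16))   # state -> features
T = nn.Linear(16, 16, bias=False)                # finite-dimensional operator estimate
opt = torch.optim.Adam(list(phi.parameters()) + list(T.parameters()), lr=1e-3)

x_t = torch.randn(256, 3)                        # stand-in transition pairs (x_t, x_{t+1})
x_next = x_t + 0.01 * torch.randn_like(x_t)

for _ in range(100):
    pred_err = ((T(phi(x_t)) - phi(x_next)) ** 2).mean()
    # ad hoc variance floor to discourage collapse to a constant representation
    collapse = (1.0 - phi(x_t).var(dim=0)).clamp(min=0).mean()
    loss = pred_err + 0.1 * collapse
    opt.zero_grad(); loss.backward(); opt.step()
```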
arXiv Detail & Related papers (2023-07-19T11:32:24Z)
- Score-based Data Assimilation [7.215767098253208]
We introduce score-based data assimilation for trajectory inference.
We learn a score-based generative model of state trajectories based on the key insight that the score of an arbitrarily long trajectory can be decomposed into a series of scores over short segments.
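
A hedged sketch of the stated insight (ignoring the paper's exact treatment of overlapping windows and of the observation model): a score network defined on short segments of length k is slid over the trajectory and its outputs are accumulated into a full-trajectory score.

```python
import torch
import torch.nn as nn

k, d = 5, 2                                      # segment length, state dimension
local_score = nn.Sequential(nn.Linear(k * d, 128), nn.ReLU(), nn.Linear(128, k * d))

def trajectory_score(traj):                      # traj: [T, d]
    """Sum the local score network's output over all overlapping length-k segments."""
    score = torch.zeros_like(traj)
    for start in range(traj.shape[0] - k + 1):
        seg = traj[start:start + k].reshape(1, -1)
        score[start:start + k] += local_score(seg).reshape(k, d)
    return score

traj = torch.randn(50, d)                        # a trajectory longer than any training segment
print(trajectory_score(traj).shape)              # torch.Size([50, 2])
```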
arXiv Detail & Related papers (2023-06-18T14:22:03Z)
- Inverse Dynamics Pretraining Learns Good Representations for Multitask Imitation [66.86987509942607]
We evaluate how such a pretraining paradigm should be carried out in imitation learning.
We consider a setting where the pretraining corpus consists of multitask demonstrations.
We argue that inverse dynamics modeling is well-suited to this setting.
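
A minimal sketch of inverse dynamics pretraining (shapes, names and the continuous-action MSE loss are illustrative assumptions, not taken from the paper): the encoder is trained so that the action a_t can be predicted from the encodings of consecutive observations, and is then reused for downstream imitation.

```python
import torch
import torch.nn as nn

obs_dim, act_dim, z_dim = 32, 4, 64
encoder = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, z_dim))
inv_head = nn.Sequential(nn.Linear(2 * z_dim, 128), nn.ReLU(), nn.Linear(128, act_dim))
opt = torch.optim.Adam(list(encoder.parameters()) + list(inv_head.parameters()), lr=3e-4)

# stand-in multitask demonstration batch: (o_t, a_t, o_{t+1})
o_t, o_next = torch.randn(256, obs_dim), torch.randn(256, obs_dim)
a_t = torch.randn(256, act_dim)                  # continuous actions

pred = inv_head(torch.cat([encoder(o_t), encoder(o_next)], dim=-1))
loss = ((pred - a_t) ** 2).mean()                # MSE for continuous control
opt.zero_grad(); loss.backward(); opt.step()
```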
arXiv Detail & Related papers (2023-05-26T14:40:46Z)
- Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of latent variable models for state-action value functions, which allows both a tractable variational learning algorithm and an effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
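
As a loose, simplified stand-in for the optimism bonus only (the paper works with kernel embeddings of a latent variable model; the finite feature map and the beta constant here are assumptions), a LinUCB-style elliptical bonus over (state, action) embeddings looks like this:

```python
import torch

def ucb_bonus(phi, Sigma_inv, beta=1.0):
    """Elliptical bonus beta * sqrt(phi^T Sigma^{-1} phi) for each row of phi."""
    return beta * torch.sqrt(torch.einsum('bi,ij,bj->b', phi, Sigma_inv, phi))

d = 16
phi_data = torch.randn(500, d)                   # embeddings of previously visited (s, a) pairs
Sigma = phi_data.t() @ phi_data + torch.eye(d)   # regularized design matrix
Sigma_inv = torch.linalg.inv(Sigma)

phi_query = torch.randn(10, d)                   # embeddings of candidate (s, a) pairs
q_estimates = torch.zeros(10)                    # stand-in value estimates
optimistic_q = q_estimates + ucb_bonus(phi_query, Sigma_inv)   # optimism used for planning
```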
arXiv Detail & Related papers (2022-12-17T00:26:31Z)
- Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation [88.14365009076907]
Iterative refinement is a useful paradigm for representation learning.
We develop an implicit differentiation approach that improves the stability and tractability of training.
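
The general recipe this points to, sketched here in a deep-equilibrium style that is not necessarily the paper's exact procedure, is to run the refinement to a fixed point z* = f(z*, x) without tracking gradients and then backpropagate through z* via the implicit function theorem, solving the resulting linear system with the same fixed-point iteration:

```python
import torch
import torch.nn as nn

def iterate(g, z, iters=40):
    """Plain fixed-point iteration z <- g(z)."""
    for _ in range(iters):
        z = g(z)
    return z

class ImplicitFixedPoint(nn.Module):
    def __init__(self, f):
        super().__init__()
        self.f = f                               # one refinement step: f(z, x) -> refined z

    def forward(self, x, z_init):
        with torch.no_grad():                    # forward solve, no graph stored
            z_star = iterate(lambda z: self.f(z, x), z_init)
        z_star = self.f(z_star, x)               # a single differentiable call re-attaches z*
        z0 = z_star.clone().detach().requires_grad_()
        f0 = self.f(z0, x)

        def backward_hook(grad):                 # solve v = J^T v + grad by iteration
            return iterate(
                lambda v: torch.autograd.grad(f0, z0, v, retain_graph=True)[0] + grad, grad)

        z_star.register_hook(backward_hook)
        return z_star

# toy refinement step: a contraction, so both forward and backward iterations converge
lin = nn.Linear(8, 8)
f = lambda z, x: 0.5 * z + 0.5 * torch.tanh(lin(x))
model = ImplicitFixedPoint(f)
z = model(torch.randn(4, 8), torch.zeros(4, 8))
z.sum().backward()                               # gradients reach lin through the implicit solve
```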
arXiv Detail & Related papers (2022-07-02T10:00:35Z)
- Invariant Causal Mechanisms through Distribution Matching [86.07327840293894]
In this work we provide a causal perspective and a new algorithm for learning invariant representations.
Empirically, we show that this algorithm works well on a diverse set of tasks, and in particular we observe state-of-the-art performance on domain generalization.
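
A generic sketch of the distribution-matching recipe (the RBF-kernel MMD penalty and its weight are illustrative choices, not necessarily the paper's criterion): train a shared predictor across environments while penalizing mismatch between the representation distributions of the environments.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mmd_rbf(a, b, sigma=1.0):
    """Squared MMD between two samples under an RBF kernel."""
    def k(x, y):
        return torch.exp(-torch.cdist(x, y).pow(2) / (2 * sigma ** 2))
    return k(a, a).mean() + k(b, b).mean() - 2 * k(a, b).mean()

enc = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 16))
clf = nn.Linear(16, 2)
opt = torch.optim.Adam(list(enc.parameters()) + list(clf.parameters()), lr=1e-3)

# stand-in data from two training environments with shifted inputs
x_e1, y_e1 = torch.randn(128, 10), torch.randint(0, 2, (128,))
x_e2, y_e2 = torch.randn(128, 10) + 1.0, torch.randint(0, 2, (128,))

z1, z2 = enc(x_e1), enc(x_e2)
task = F.cross_entropy(clf(z1), y_e1) + F.cross_entropy(clf(z2), y_e2)
loss = task + 10.0 * mmd_rbf(z1, z2)             # matching penalty encourages invariance
opt.zero_grad(); loss.backward(); opt.step()
```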
arXiv Detail & Related papers (2022-06-23T12:06:54Z)
- Efficient Iterative Amortized Inference for Learning Symmetric and Disentangled Multi-Object Representations [8.163697683448811]
We introduce EfficientMORL, an efficient framework for the unsupervised learning of object-centric representations.
We show that optimization challenges caused by requiring both symmetry and disentanglement can be addressed by high-cost iterative amortized inference.
We demonstrate strong object decomposition and disentanglement on the standard multi-object benchmark while achieving nearly an order of magnitude faster training and test time inference.
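
For orientation, here is a generic sketch of iterative amortized inference itself (not EfficientMORL's object-centric architecture; the toy decoder, GRU refinement cell and three refinement steps are illustrative): a refinement network repeatedly updates per-slot posterior parameters using the gradient of the loss with respect to those parameters.

```python
import torch
import torch.nn as nn

slots, z_dim, x_dim = 4, 8, 32
decoder = nn.Linear(slots * z_dim, x_dim)        # toy decoder from concatenated slots
refine = nn.GRUCell(2 * z_dim, 2 * z_dim)        # refinement net over (mu, logvar) per slot

x = torch.randn(16, x_dim)                       # stand-in observations
lam = torch.zeros(16, slots, 2 * z_dim, requires_grad=True)   # initial posterior parameters
h = torch.zeros(16 * slots, 2 * z_dim)

for step in range(3):                            # a few refinement iterations
    mu, logvar = lam.chunk(2, dim=-1)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()       # reparameterized slot samples
    recon = decoder(z.reshape(16, -1))
    elbo_loss = ((recon - x) ** 2).mean() + 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).mean()
    grad = torch.autograd.grad(elbo_loss, lam, create_graph=True)[0]
    h = refine(grad.reshape(16 * slots, -1), h)  # the refinement net consumes the gradient
    lam = lam + h.reshape(16, slots, -1)         # amortized update of the posterior parameters
```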
arXiv Detail & Related papers (2021-06-07T14:02:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.