From latent dynamics to meaningful representations
- URL: http://arxiv.org/abs/2209.00905v4
- Date: Tue, 9 Apr 2024 18:34:52 GMT
- Title: From latent dynamics to meaningful representations
- Authors: Dedi Wang, Yihang Wang, Luke Evans, Pratyush Tiwary
- Abstract summary: We propose a purely dynamics-constrained representation learning framework.
We show this is a more natural constraint for representation learning in dynamical systems.
We validate our framework for different systems including a real-world fluorescent DNA movie dataset.
- Score: 0.5728954513076778
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While representation learning has been central to the rise of machine learning and artificial intelligence, a key problem remains in making the learned representations meaningful. For this, the typical approach is to regularize the learned representation through prior probability distributions. However, such priors are usually unavailable or are ad hoc. To deal with this, recent efforts have shifted towards leveraging the insights from physical principles to guide the learning process. In this spirit, we propose a purely dynamics-constrained representation learning framework. Instead of relying on predefined probabilities, we restrict the latent representation to follow overdamped Langevin dynamics with a learnable transition density - a prior driven by statistical mechanics. We show this is a more natural constraint for representation learning in stochastic dynamical systems, with the crucial ability to uniquely identify the ground truth representation. We validate our framework for different systems including a real-world fluorescent DNA movie dataset. We show that our algorithm can uniquely identify orthogonal, isometric and meaningful latent representations.
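As a rough illustration of the dynamics constraint described in the abstract, the sketch below pairs an autoencoder with a learnable Euler-Maruyama transition density for the latents, so that consecutive latent codes are penalized for deviating from overdamped Langevin dynamics. The network sizes, the drift and diffusion parameterization, and the loss weighting are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed names and sizes, not the authors' released code):
# an autoencoder whose consecutive latent codes (z_t, z_{t+tau}) are scored
# under an Euler-Maruyama discretization of overdamped Langevin dynamics with
# a learnable drift and diffusion, i.e. a learnable transition density.
import math
import torch
import torch.nn as nn

class DynamicsConstrainedAE(nn.Module):
    def __init__(self, x_dim, z_dim, tau=1.0):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU(), nn.Linear(128, z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, x_dim))
        # Learnable drift (e.g. minus the gradient of an effective free energy)
        # and per-dimension log-diffusion define the transition-density prior.
        self.drift = nn.Sequential(nn.Linear(z_dim, 64), nn.Tanh(), nn.Linear(64, z_dim))
        self.log_diffusion = nn.Parameter(torch.zeros(z_dim))
        self.tau = tau

    def transition_nll(self, z_t, z_next):
        # Euler-Maruyama: z_{t+tau} ~ N(z_t + tau * drift(z_t), 2 * D * tau).
        mean = z_t + self.tau * self.drift(z_t)
        var = 2.0 * torch.exp(self.log_diffusion) * self.tau
        return 0.5 * ((z_next - mean) ** 2 / var + torch.log(var) + math.log(2 * math.pi)).sum(-1)

    def loss(self, x_t, x_next, lam=1.0):
        # Reconstruction keeps the latents informative; the transition NLL
        # constrains their dynamics to the Langevin prior.
        z_t, z_next = self.enc(x_t), self.enc(x_next)
        recon = ((self.dec(z_t) - x_t) ** 2).sum(-1) + ((self.dec(z_next) - x_next) ** 2).sum(-1)
        return (recon + lam * self.transition_nll(z_t, z_next)).mean()
```

Training would simply draw time-lagged pairs (x_t, x_{t+tau}) from the observed trajectories and minimize this loss.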
Related papers
- Identifiable Representation and Model Learning for Latent Dynamic Systems [0.0]
We study the problem of identifiable representation and model learning for latent dynamic systems.
We prove that, for linear or affine nonlinear latent dynamic systems, it is possible to identify the representations up to scaling and determine the models up to some simple transformations.
arXiv Detail & Related papers (2024-10-23T13:55:42Z)
- Stochastic Vision Transformers with Wasserstein Distance-Aware Attention [8.407731308079025]
Self-supervised learning is one of the most promising approaches to acquiring knowledge from limited labeled data.
We introduce a new vision transformer that integrates uncertainty and distance awareness into self-supervised learning pipelines.
Our proposed method achieves superior accuracy and calibration, surpassing the self-supervised baseline in a wide range of experiments on a variety of datasets.
arXiv Detail & Related papers (2023-11-30T15:53:37Z)
- Learning invariant representations of time-homogeneous stochastic dynamical systems [27.127773672738535]
We study the problem of learning a representation of the state that faithfully captures its dynamics.
This is instrumental to learning the transfer operator or the generator of the system.
We show that the search for a good representation can be cast as an optimization problem over neural networks.
arXiv Detail & Related papers (2023-07-19T11:32:24Z)
- Inverse Dynamics Pretraining Learns Good Representations for Multitask Imitation [66.86987509942607]
We evaluate how such a paradigm should be done in imitation learning.
We consider a setting where the pretraining corpus consists of multitask demonstrations.
We argue that inverse dynamics modeling is well-suited to this setting.
arXiv Detail & Related papers (2023-05-26T14:40:46Z)
- Understanding Self-Predictive Learning for Reinforcement Learning [61.62067048348786]
We study the learning dynamics of self-predictive learning for reinforcement learning.
We propose a novel self-predictive algorithm that learns two representations simultaneously.
arXiv Detail & Related papers (2022-12-06T20:43:37Z)
- DyTed: Disentangled Representation Learning for Discrete-time Dynamic Graph [59.583555454424]
We propose a novel disenTangled representation learning framework for discrete-time Dynamic graphs, namely DyTed.
We specially design a temporal-clips contrastive learning task together with a structure contrastive learning task to effectively identify the time-invariant and time-varying representations, respectively. A schematic version of these two objectives is sketched after this entry.
arXiv Detail & Related papers (2022-10-19T14:34:12Z)
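As a rough illustration of the two contrastive objectives mentioned in the DyTed entry above, here is a schematic PyTorch sketch: one InfoNCE term that pulls together time-invariant representations from two temporal clips of the same sequence, and a second InfoNCE term that pulls together time-varying representations across structural views of the same snapshot. The encoder outputs, view sampling, temperature, and weighting are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of a DyTed-style two-term contrastive loss; how the clips
# and structural views are produced is assumed, not taken from the paper.
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, temperature=0.1):
    # Standard InfoNCE: each anchor's positive is the matching row of `positive`;
    # all other rows in the batch act as negatives.
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.t() / temperature
    labels = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, labels)

def dyted_style_loss(z_inv_clip1, z_inv_clip2, z_var, z_var_aug, alpha=1.0):
    # Time-invariant codes from two temporal clips of the same node sequence
    # should agree (temporal-clips term); time-varying codes should agree
    # across structural augmentations of the same snapshot (structure term).
    temporal_term = info_nce(z_inv_clip1, z_inv_clip2)
    structure_term = info_nce(z_var, z_var_aug)
    return temporal_term + alpha * structure_term
```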
- Robust and Controllable Object-Centric Learning through Energy-based Models [95.68748828339059]
Ours is a conceptually simple and general approach to learning object-centric representations through an energy-based model.
We show that ours can be easily integrated into existing architectures and can effectively extract high-quality object-centric representations.
arXiv Detail & Related papers (2022-10-11T15:11:15Z)
- Stochastic Coherence Over Attention Trajectory For Continuous Learning In Video Streams [64.82800502603138]
This paper proposes a novel neural-network-based approach to progressively and autonomously develop pixel-wise representations in a video stream.
The proposed method is based on a human-like attention mechanism that allows the agent to learn by observing what is moving in the attended locations.
Our experiments leverage 3D virtual environments and they show that the proposed agents can learn to distinguish objects just by observing the video stream.
arXiv Detail & Related papers (2022-04-26T09:52:31Z)
- Masked prediction tasks: a parameter identifiability view [49.533046139235466]
We focus on the widely used self-supervised learning method of predicting masked tokens.
We show that there is a rich landscape of possibilities, out of which some prediction tasks yield identifiability, while others do not.
arXiv Detail & Related papers (2022-02-18T17:09:32Z)
- Symplectic Momentum Neural Networks -- Using Discrete Variational Mechanics as a prior in Deep Learning [7.090165638014331]
This paper introduces Symplectic Momentum Neural Networks (SyMo) as models from a discrete formulation of mechanics for non-separable mechanical systems.
We show that such a combination not only allows these models to learn from limited data but also provides them with the capability of preserving the symplectic form, leading to better long-term behaviour. A schematic variational-integrator step in this spirit is sketched after this entry.
arXiv Detail & Related papers (2022-01-20T16:33:19Z)
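To make the "discrete variational mechanics as a prior" idea above concrete, here is a hedged sketch of one variational-integrator step: a midpoint-rule discrete Lagrangian and the discrete Legendre transforms that propagate position and momentum. The midpoint quadrature, the gradient-based root solve, and the pendulum example are assumptions for illustration; in the paper's setting the Lagrangian would be a learned network.

```python
# Hypothetical sketch of a single symplectic-momentum step built from a
# discrete (midpoint-rule) Lagrangian, in the spirit of variational mechanics;
# not the SyMo implementation.
import torch

def discrete_lagrangian(L, q0, q1, h):
    # Midpoint quadrature of a continuous Lagrangian L(q, qdot) over one step.
    return L((q0 + q1) / 2, (q1 - q0) / h) * h

def symo_style_step(L, q0, p0, h, iters=100, lr=1e-2):
    # Discrete Legendre transforms: p0 = -D1 Ld(q0, q1), p1 = D2 Ld(q0, q1).
    # Solve the first (implicit) relation for q1, then push the momentum forward.
    q0 = q0.detach().requires_grad_(True)
    q1 = q0.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([q1], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        Ld = discrete_lagrangian(L, q0, q1, h)
        d1 = torch.autograd.grad(Ld, q0, create_graph=True)[0]
        residual = ((p0 + d1) ** 2).sum()  # want p0 + D1 Ld = 0
        residual.backward()
        opt.step()
    Ld = discrete_lagrangian(L, q0, q1, h)
    p1 = torch.autograd.grad(Ld, q1)[0]
    return q1.detach(), p1.detach()

# Usage: a simple pendulum with L(q, qdot) = 0.5*qdot^2 + cos(q); in a learned
# model the analytic Lagrangian would be replaced by a neural network.
L = lambda q, qd: 0.5 * (qd ** 2).sum() + torch.cos(q).sum()
q, p = torch.tensor([1.0]), torch.tensor([0.0])
for _ in range(5):
    q, p = symo_style_step(L, q, p, h=0.1)
```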
This list is automatically generated from the titles and abstracts of the papers on this site.