Compressed Predictive Information Coding
- URL: http://arxiv.org/abs/2203.02051v1
- Date: Thu, 3 Mar 2022 22:47:58 GMT
- Title: Compressed Predictive Information Coding
- Authors: Rui Meng, Tianyi Luo, Kristofer Bouchard
- Abstract summary: We develop a novel information-theoretic framework, Compressed Predictive Information Coding (CPIC), to extract useful representations from dynamic data.
We derive variational bounds of the CPIC loss which induce the latent space to capture information that is maximally predictive.
We demonstrate that CPIC is able to recover the latent space of noisy dynamical systems with low signal-to-noise ratios.
- Score: 6.220929746808418
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised learning plays an important role in many fields, such as
artificial intelligence, machine learning, and neuroscience. Compared to static
data, methods for extracting low-dimensional structure from dynamic data are
lagging. We developed a novel information-theoretic framework, Compressed
Predictive Information Coding (CPIC), to extract useful representations from
dynamic data. CPIC selectively projects the past (input) into a linear subspace
that is predictive of the compressed data projected from the future
(output). The key insight of our framework is to learn representations by
minimizing the compression complexity and maximizing the predictive information
in latent space. We derive variational bounds of the CPIC loss that induce the
latent space to capture information that is maximally predictive. Our
variational bounds are made tractable by leveraging known bounds on mutual
information. We find that introducing stochasticity in the encoder robustly
contributes to better representations. Furthermore, variational approaches
estimate mutual information more accurately than estimates made under a
Gaussian assumption. We demonstrate that CPIC is able to recover the latent
space of noisy dynamical systems with low signal-to-noise ratios, and extracts
features predictive of exogenous variables in neuroscience data.
Related papers
- Localized Gaussians as Self-Attention Weights for Point Clouds Correspondence [92.07601770031236]
We investigate semantically meaningful patterns in the attention heads of an encoder-only Transformer architecture.
We find that fixing the attention weights not only accelerates the training process but also enhances the stability of the optimization.
arXiv Detail & Related papers (2024-09-20T07:41:47Z)
- Predictive variational autoencoder for learning robust representations of time-series data [0.0]
We propose a VAE architecture that predicts the next point in time and show that it mitigates the learning of spurious features.
We show that these two constraints on VAEs, predicting the next time point and being smooth over time, together produce robust latent representations and faithfully recover latent factors on synthetic datasets.
arXiv Detail & Related papers (2023-12-12T02:06:50Z)
- Assessing Neural Network Representations During Training Using Noise-Resilient Diffusion Spectral Entropy [55.014926694758195]
Entropy and mutual information in neural networks provide rich information on the learning process.
We leverage data geometry to access the underlying manifold and reliably compute these information-theoretic measures.
We show that they form noise-resistant measures of intrinsic dimensionality and relationship strength in high-dimensional simulated data.
arXiv Detail & Related papers (2023-12-04T01:32:42Z)
- IB-UQ: Information bottleneck based uncertainty quantification for neural function regression and neural operator learning [11.5992081385106]
We propose a novel framework for uncertainty quantification via information bottleneck (IB-UQ) for scientific machine learning tasks.
We incorporate the bottleneck via a confidence-aware encoder, which encodes inputs into latent representations according to the confidence of the input data.
We also propose a data-augmentation-based information bottleneck objective that can enhance the quality of the extrapolation uncertainty.
arXiv Detail & Related papers (2023-02-07T05:56:42Z)
- Palm up: Playing in the Latent Manifold for Unsupervised Pretraining [31.92145741769497]
We propose an algorithm that exhibits exploratory behavior while utilizing large, diverse datasets.
Our key idea is to leverage deep generative models that are pretrained on static datasets and introduce a dynamic model in the latent space.
We then employ an unsupervised reinforcement learning algorithm to explore in this environment and perform unsupervised representation learning on the collected data.
arXiv Detail & Related papers (2022-10-19T22:26:12Z)
- Few-Shot Non-Parametric Learning with Deep Latent Variable Model [50.746273235463754]
We propose Non-Parametric learning by Compression with Latent Variables (NPC-LV)
NPC-LV is a learning framework for any dataset with abundant unlabeled data but very few labeled examples.
We show that NPC-LV outperforms supervised methods on all three datasets for image classification in the low-data regime.
arXiv Detail & Related papers (2022-06-23T09:35:03Z)
- An Extension to Basis-Hypervectors for Learning from Circular Data in Hyperdimensional Computing [62.997667081978825]
Hyperdimensional Computing (HDC) is a computation framework based on properties of high-dimensional random spaces.
We present a study on basis-hypervector sets, which leads to practical contributions to HDC in general.
We introduce a method to learn from circular data, an important type of information never before addressed in machine learning with HDC.
arXiv Detail & Related papers (2022-05-16T18:04:55Z)
- Quantifying Relevance in Learning and Inference [0.0]
We review recent progress on understanding learning, based on the notion of "relevance".
These are ideal limits of samples and of machines that contain the maximal amount of information about the unknown generative process.
Maximally informative samples are characterised by a power-law frequency distribution (statistical criticality) and optimal learning machines by an anomalously large susceptibility.
arXiv Detail & Related papers (2022-02-01T11:16:04Z)
- Representation Learning for Sequence Data with Deep Autoencoding Predictive Components [96.42805872177067]
We propose a self-supervised representation learning method for sequence data, based on the intuition that useful representations of sequence data should exhibit a simple structure in the latent space.
We encourage this latent structure by maximizing an estimate of the predictive information of latent feature sequences, i.e., the mutual information between past and future windows at each time step (formalized in the equation after this list).
We demonstrate that our method recovers the latent space of noisy dynamical systems, extracts predictive features for forecasting tasks, and improves automatic speech recognition when used to pretrain the encoder on large amounts of unlabeled data.
arXiv Detail & Related papers (2020-10-07T03:34:01Z)
- Multilinear Compressive Learning with Prior Knowledge [106.12874293597754]
The Multilinear Compressive Learning (MCL) framework combines Multilinear Compressive Sensing and Machine Learning into an end-to-end system.
The key idea behind MCL is the assumption that there exists a tensor subspace which can capture the essential features of the signal for the downstream learning task.
In this paper, we propose a novel solution to address both of the aforementioned requirements, i.e., how to find tensor subspaces in which the signals of interest are highly separable.
arXiv Detail & Related papers (2020-02-17T19:06:05Z)
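For reference, the predictive information maximized in the Deep Autoencoding Predictive Components entry above, and bounded from below in the CPIC sketch earlier, is a standard quantity: the mutual information between past and future windows. A minimal rendering follows, with the window length T and latent sequence y_t as illustrative notation, not taken verbatim from either paper.

```latex
% Predictive information of a latent sequence (y_t) with window length T;
% the notation is illustrative, not either paper's exact formulation.
I_{\mathrm{pred}}(T)
  \;=\; I\!\big(y_{t-T:t};\, y_{t:t+T}\big)
  \;=\; \mathbb{E}\left[\log
        \frac{p\big(y_{t-T:t},\, y_{t:t+T}\big)}
             {p\big(y_{t-T:t}\big)\, p\big(y_{t:t+T}\big)}\right].
```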
This list is automatically generated from the titles and abstracts of the papers on this site.