Zero-Episode Few-Shot Contrastive Predictive Coding: Solving
intelligence tests without prior training
- URL: http://arxiv.org/abs/2205.01924v1
- Date: Wed, 4 May 2022 07:46:03 GMT
- Title: Zero-Episode Few-Shot Contrastive Predictive Coding: Solving
intelligence tests without prior training
- Authors: T. Barak, Y. Loewenstein
- Abstract summary: We argue that finding a predictive latent variable and using it to evaluate the consistency of a future image enables data-efficient predictions.
We show that a one-dimensional Markov Contrastive Predictive Coding model solves sequence completion intelligence tests efficiently.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Video prediction models often combine three components: an encoder from pixel
space to a small latent space, a latent space prediction model, and a
generative model back to pixel space. However, the large and unpredictable
pixel space makes training such models difficult, requiring many training
examples. We argue that finding a predictive latent variable and using it to
evaluate the consistency of a future image enables data-efficient predictions
because it obviates the need to train a generative model. To
demonstrate this, we created sequence completion intelligence tests in which the
task is to identify a predictably changing feature in a sequence of images and
use this prediction to select the subsequent image. We show that a
one-dimensional Markov Contrastive Predictive Coding (M-CPC_1D) model solves
these tests efficiently, with only five examples. Finally, we demonstrate the
usefulness of M-CPC_1D in solving two tasks without prior training: anomaly
detection and stochastic movement video prediction.
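To make the core idea concrete (scoring candidate future frames by their consistency with a latent variable predicted from the sequence, instead of generating pixels), here is a minimal NumPy sketch. The toy encoder, the linear latent extrapolation, the negative-squared-distance consistency score, and all array values are illustrative assumptions, not the authors' M-CPC_1D implementation (CPC typically uses a learned bilinear score):

```python
import numpy as np

W = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])  # toy encoder: keep the first two "pixels"

def encode(x):
    """Project a flattened image into a small latent space."""
    return W @ x

def consistency_scores(pred_z, candidates_z):
    """Softmax over negative squared distances between the predicted
    latent and each candidate's latent (a simple consistency score)."""
    logits = -np.sum((candidates_z - pred_z) ** 2, axis=1)
    logits -= logits.max()  # numerical stability
    p = np.exp(logits)
    return p / p.sum()

# Sequence whose predictable feature: the first pixel increments by 1.
frames = np.array([[1., 5., 0.],
                   [2., 5., 0.],
                   [3., 5., 0.]])
z = np.array([encode(f) for f in frames])
pred_z = z[-1] + (z[-1] - z[-2])  # linearly extrapolated next latent: [4, 5]

candidates = np.array([[4., 5., 0.],   # correct continuation
                       [9., 5., 0.],   # wrong first pixel
                       [0., 0., 0.]])  # unrelated frame
probs = consistency_scores(pred_z, np.array([encode(c) for c in candidates]))
print(int(np.argmax(probs)))  # -> 0 (the correct continuation)
```

No generative decoder back to pixel space is ever invoked: selection only requires ranking the given candidates by consistency in latent space, which is why so few examples suffice.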
Related papers
- Predicting Long-horizon Futures by Conditioning on Geometry and Time [49.86180975196375]
We explore the task of generating future sensor observations conditioned on the past.
We leverage the large-scale pretraining of image diffusion models which can handle multi-modality.
We create a benchmark for video prediction on a diverse set of videos spanning indoor and outdoor scenes.
arXiv Detail & Related papers (2024-04-17T16:56:31Z)
- Exploiting Diffusion Prior for Generalizable Dense Prediction [85.4563592053464]
Images synthesized by recent advanced Text-to-Image (T2I) diffusion models are sometimes too imaginative for existing off-the-shelf dense predictors to estimate.
We introduce DMP, a pipeline utilizing pre-trained T2I models as a prior for dense prediction tasks.
Despite limited-domain training data, the approach yields faithful estimations for arbitrary images, surpassing existing state-of-the-art algorithms.
arXiv Detail & Related papers (2023-11-30T18:59:44Z)
- Prototype Learning for Explainable Brain Age Prediction [1.104960878651584]
We present ExPeRT, an explainable prototype-based model specifically designed for regression tasks.
Our proposed model makes a sample prediction from the distances to a set of learned prototypes in latent space, using a weighted mean of prototype labels.
Our approach achieved state-of-the-art prediction performance while providing insight into the model's reasoning process.
arXiv Detail & Related papers (2023-06-16T14:13:21Z)
- P2P: Tuning Pre-trained Image Models for Point Cloud Analysis with Point-to-Pixel Prompting [94.11915008006483]
We propose a novel Point-to-Pixel prompting for point cloud analysis.
Our method attains 89.3% accuracy on the hardest setting of ScanObjectNN.
Our framework also exhibits very competitive performance on ModelNet classification and ShapeNet Part Segmentation.
arXiv Detail & Related papers (2022-08-04T17:59:03Z)
- Conformal prediction for the design problem [72.14982816083297]
In many real-world deployments of machine learning, we use a prediction algorithm to choose what data to test next.
In such settings, there is a distinct type of distribution shift between the training and test data.
We introduce a method to quantify predictive uncertainty in such settings.
arXiv Detail & Related papers (2022-02-08T02:59:12Z)
- A Hierarchical Variational Neural Uncertainty Model for Stochastic Video Prediction [45.6432265855424]
We introduce Neural Uncertainty Quantifier (NUQ) - a principled quantification of the model's predictive uncertainty.
Our proposed framework trains more effectively than state-of-the-art models.
arXiv Detail & Related papers (2021-10-06T00:25:22Z)
- Probabilistic Modeling for Human Mesh Recovery [73.11532990173441]
This paper focuses on the problem of 3D human reconstruction from 2D evidence.
We recast the problem as learning a mapping from the input to a distribution of plausible 3D poses.
arXiv Detail & Related papers (2021-08-26T17:55:11Z)
- Aligned Contrastive Predictive Coding [10.521845940927163]
We investigate the possibility of forcing a self-supervised model trained using a contrastive predictive loss to extract slowly varying latent representations.
Rather than producing individual predictions for each of the future representations, the model emits a sequence of predictions shorter than that of the upcoming representations to which they will be aligned.
arXiv Detail & Related papers (2021-04-24T13:07:22Z)
- Demystifying Code Summarization Models [5.608277537412537]
We evaluate four prominent code summarization models: extreme summarizer, code2vec, code2seq, and sequence GNN.
Results show that all models base their predictions on syntactic and lexical properties with little to no semantic implication.
We present a novel approach to explaining the predictions of code summarization models through the lens of training data.
arXiv Detail & Related papers (2021-02-09T03:17:46Z)
- Ambiguity in Sequential Data: Predicting Uncertain Futures with Recurrent Models [110.82452096672182]
We propose an extension of the Multiple Hypothesis Prediction (MHP) model to handle ambiguous predictions with sequential data.
We also introduce a novel metric for ambiguous problems, which is better suited to account for uncertainties.
arXiv Detail & Related papers (2020-03-10T09:15:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.