Video Prediction via Example Guidance
- URL: http://arxiv.org/abs/2007.01738v1
- Date: Fri, 3 Jul 2020 14:57:24 GMT
- Title: Video Prediction via Example Guidance
- Authors: Jingwei Xu, Huazhe Xu, Bingbing Ni, Xiaokang Yang, Trevor Darrell
- Abstract summary: In video prediction tasks, one major challenge is to capture the multi-modal nature of future contents and dynamics.
In this work, we propose a simple yet effective framework that can efficiently predict plausible future states.
- Score: 156.08546987158616
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In video prediction tasks, one major challenge is to capture the multi-modal
nature of future contents and dynamics. In this work, we propose a simple yet
effective framework that can efficiently predict plausible future states. The
key insight is that the potential distribution of a sequence could be
approximated with that of analogous ones drawn from a repertoire of training examples, namely,
expert examples. By further incorporating a novel optimization scheme into the
training procedure, plausible predictions can be sampled efficiently from the
distribution constructed from the retrieved examples. Meanwhile, our method
could be seamlessly integrated with existing stochastic predictive models;
significant enhancement is observed with comprehensive experiments in both
quantitative and qualitative aspects. We also demonstrate the generalization
ability to predict the motion of unseen classes, i.e., without access to
corresponding data during the training phase.
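The retrieval-and-sampling idea described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes sequences are flat lists of scalars, uses a simple squared-Euclidean metric for retrieval, and fits a per-step Gaussian over the retrieved futures, whereas the paper constructs its distribution through a learned optimization scheme. The function names `retrieve_examples` and `sample_future` are hypothetical.

```python
import math
import random

def retrieve_examples(query, pool, k=3):
    """Return the k training sequences whose observed prefix is closest
    to the query prefix (squared Euclidean distance)."""
    def dist(seq):
        return sum((a - b) ** 2 for a, b in zip(seq["prefix"], query))
    return sorted(pool, key=dist)[:k]

def sample_future(query, pool, k=3, rng=random):
    """Fit a per-step Gaussian over the retrieved examples' futures and
    sample one plausible future trajectory from it."""
    experts = retrieve_examples(query, pool, k)
    horizon = len(experts[0]["future"])
    future = []
    for t in range(horizon):
        vals = [e["future"][t] for e in experts]
        mu = sum(vals) / len(vals)
        var = sum((v - mu) ** 2 for v in vals) / len(vals)
        future.append(rng.gauss(mu, math.sqrt(var)))
    return future
```

Here the "expert examples" are simply the nearest neighbors in the training pool; sampling from the fitted Gaussian yields one plausible future per call, which is the multi-modal behavior the abstract targets.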
Related papers
- Motion Forecasting via Model-Based Risk Minimization [8.766024024417316]
We propose a novel sampling method applicable to trajectory prediction based on the predictions of multiple models.
We first show that conventional sampling based on predicted probabilities can degrade performance due to misalignment between models.
By using state-of-the-art models as base learners, our approach constructs diverse and effective ensembles for optimal trajectory sampling.
arXiv Detail & Related papers (2024-09-16T09:03:28Z)
- Effective and Interpretable Outcome Prediction by Training Sparse Mixtures of Linear Experts [4.178382980763478]
We propose to train a sparse Mixture-of-Experts where both the "gate" and "expert" sub-nets are Logistic Regressors.
This ensemble-like model is trained end-to-end while automatically selecting a subset of input features in each sub-net.
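A toy sketch of such a sparse mixture follows: the gate and each expert are plain linear or logistic scorers, and sparsity is realized here as hard top-1 routing. These are assumptions for illustration; the paper's gating and automatic feature-selection scheme may differ, and `moe_predict` with its weight layout is hypothetical.

```python
import math

def sigmoid(z):
    """Logistic function mapping a linear score to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

def moe_predict(x, gate_weights, expert_weights):
    """Sparse mixture of logistic experts: the gate (a linear scorer per
    expert) routes the input to its single highest-scoring expert, and
    that expert's logistic regression produces the outcome probability."""
    scores = [sum(w * xi for w, xi in zip(gw, x)) for gw in gate_weights]
    best = max(range(len(scores)), key=scores.__getitem__)
    ew = expert_weights[best]
    return best, sigmoid(sum(w * xi for w, xi in zip(ew, x)))
```

Because only one expert fires per input, each prediction is interpretable as "this region of the input space belongs to expert `best`, whose linear coefficients explain the outcome".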
arXiv Detail & Related papers (2024-07-18T13:59:10Z)
- A Supervised Contrastive Learning Pretrain-Finetune Approach for Time Series [15.218841180577135]
We introduce a novel pretraining procedure that leverages supervised contrastive learning to distinguish features within each pretraining dataset.
We then propose a fine-tuning procedure designed to enhance the accurate prediction of the target data by aligning it more closely with the learned dynamics of the pretraining datasets.
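Supervised contrastive learning treats samples sharing a label as positives to pull together and all others as negatives to push apart. A bare-bones sketch of the loss is given below, using raw dot-product similarity and no augmentations or projection head; these are simplifications relative to the paper, and `sup_con_loss` is a hypothetical name.

```python
import math

def sup_con_loss(embeddings, labels, temp=0.1):
    """Supervised contrastive loss: for each anchor, maximize the softmax
    probability of its same-label positives against all other samples."""
    n = len(embeddings)
    def sim(a, b):
        return sum(x * y for x, y in zip(a, b)) / temp
    loss = 0.0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue  # an anchor with no positive contributes nothing
        denom = sum(math.exp(sim(embeddings[i], embeddings[j]))
                    for j in range(n) if j != i)
        for p in positives:
            loss += -math.log(math.exp(sim(embeddings[i], embeddings[p]))
                              / denom) / len(positives)
    return loss / n
```

The loss is low when same-label embeddings cluster together and high when classes are intermixed, which is what drives the pretraining step to "distinguish features within each pretraining dataset".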
arXiv Detail & Related papers (2023-11-21T02:06:52Z)
- Performative Time-Series Forecasting [71.18553214204978]
We formalize performative time-series forecasting (PeTS) from a machine-learning perspective.
We propose a novel approach, Feature Performative-Shifting (FPS), which leverages the concept of delayed response to anticipate distribution shifts.
We conduct comprehensive experiments using multiple time-series models on COVID-19 and traffic forecasting tasks.
arXiv Detail & Related papers (2023-10-09T18:34:29Z)
- Inverse Dynamics Pretraining Learns Good Representations for Multitask Imitation [66.86987509942607]
We evaluate how such a paradigm should be done in imitation learning.
We consider a setting where the pretraining corpus consists of multitask demonstrations.
We argue that inverse dynamics modeling is well-suited to this setting.
arXiv Detail & Related papers (2023-05-26T14:40:46Z)
- Towards Out-of-Distribution Sequential Event Prediction: A Causal Treatment [72.50906475214457]
The goal of sequential event prediction is to estimate the next event based on a sequence of historical events.
In practice, next-event prediction models are trained with sequential data collected at a single point in time.
We propose a framework with hierarchical branching structures for learning context-specific representations.
arXiv Detail & Related papers (2022-10-24T07:54:13Z)
- Finding Islands of Predictability in Action Forecasting [7.215559809521136]
We show that future action sequences are more accurately modeled with a variable, rather than a single, level of abstraction.
We propose a combined Bayesian neural network and hierarchical convolutional segmentation model that both accurately predicts future actions and optimally selects abstraction levels.
arXiv Detail & Related papers (2022-10-13T21:01:16Z)
- Pathologies of Pre-trained Language Models in Few-shot Fine-tuning [50.3686606679048]
We show that pre-trained language models fine-tuned with few examples exhibit strong prediction bias across labels.
Although few-shot fine-tuning can mitigate the prediction bias, our analysis shows that models gain performance improvements by capturing non-task-related features.
These observations warn that pursuing model performance with fewer examples may incur pathological prediction behavior.
arXiv Detail & Related papers (2022-04-17T15:55:18Z)
- Energy-Based Generative Cooperative Saliency Prediction [44.85865238229076]
We study the saliency prediction problem from the perspective of generative models.
We propose a generative cooperative saliency prediction framework based on the generative cooperative networks.
Experimental results show that our generative model can achieve state-of-the-art performance.
arXiv Detail & Related papers (2021-06-25T02:11:50Z)
- Probabilistic Electric Load Forecasting through Bayesian Mixture Density Networks [70.50488907591463]
Probabilistic load forecasting (PLF) is a key component in the extended tool-chain required for efficient management of smart energy grids.
We propose a novel PLF approach, framed on Bayesian Mixture Density Networks.
To achieve reliable and computationally scalable estimators of the posterior distributions, both Mean Field variational inference and deep ensembles are integrated.
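A mixture density network's output head parameterizes the predictive distribution as a weighted sum of Gaussian components. The sketch below evaluates that density for given mixture parameters; the Bayesian machinery the paper adds (Mean Field variational inference, deep ensembles) is omitted, and `mdn_density` is a hypothetical name.

```python
import math

def mdn_density(y, pis, mus, sigmas):
    """Predictive density of a Gaussian mixture head: a weighted sum of
    Gaussian component densities, with mixture weights pis summing to 1."""
    total = 0.0
    for pi, mu, sigma in zip(pis, mus, sigmas):
        total += pi * math.exp(-0.5 * ((y - mu) / sigma) ** 2) \
                 / (sigma * math.sqrt(2.0 * math.pi))
    return total

def mdn_mean(pis, mus):
    """Point forecast: the mixture mean, i.e., the weight-averaged
    component means."""
    return sum(pi * mu for pi, mu in zip(pis, mus))
```

Multiple components let the head represent multi-modal load distributions (e.g., a weekday peak vs. a holiday trough) that a single Gaussian would smear into one mode.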
arXiv Detail & Related papers (2020-12-23T16:21:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.