Variational Auto-Regressive Gaussian Processes for Continual Learning
- URL: http://arxiv.org/abs/2006.05468v3
- Date: Sat, 12 Jun 2021 06:13:39 GMT
- Title: Variational Auto-Regressive Gaussian Processes for Continual Learning
- Authors: Sanyam Kapoor, Theofanis Karaletsos, Thang D. Bui
- Abstract summary: We develop a principled posterior updating mechanism to solve sequential tasks in continual learning.
By relying on sparse inducing point approximations for scalable posteriors, we propose a novel auto-regressive variational distribution.
Mean predictive entropy estimates show VAR-GPs prevent catastrophic forgetting.
- Score: 17.43751039943161
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Through sequential construction of posteriors on observing data online,
Bayes' theorem provides a natural framework for continual learning. We develop
Variational Auto-Regressive Gaussian Processes (VAR-GPs), a principled
posterior updating mechanism to solve sequential tasks in continual learning.
By relying on sparse inducing point approximations for scalable posteriors, we
propose a novel auto-regressive variational distribution which reveals two
fruitful connections to existing results in Bayesian inference, expectation
propagation and orthogonal inducing points. Mean predictive entropy estimates
show VAR-GPs prevent catastrophic forgetting, which is empirically supported by
strong performance on modern continual learning benchmarks against competitive
baselines. A thorough ablation study demonstrates the efficacy of our modeling
choices.
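The abstract's key ingredient is the sparse inducing-point approximation that makes the GP posterior scalable. As a rough illustration (not the authors' VAR-GP implementation), the snippet below sketches the classical Subset-of-Regressors predictive mean, where a handful of inducing inputs `z` stand in for the full training set; all names and the toy sine data are illustrative assumptions.

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    # Squared-exponential kernel matrix between 1-D point sets a and b.
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale**2)

def sor_predict(x_train, y_train, x_test, z, noise=0.1):
    """Subset-of-Regressors predictive mean using inducing inputs z.

    Cost is O(N M^2) for M inducing points instead of O(N^3),
    which is the scalability argument behind sparse GP posteriors.
    """
    Kuu = rbf(z, z) + 1e-6 * np.eye(len(z))   # jitter for stability
    Kuf = rbf(z, x_train)
    Ksu = rbf(x_test, z)
    A = Kuu + (Kuf @ Kuf.T) / noise**2
    m = np.linalg.solve(A, Kuf @ y_train / noise**2)
    return Ksu @ m

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 50)
y = np.sin(x) + 0.1 * rng.standard_normal(50)
z = np.linspace(0.0, 2.0 * np.pi, 8)   # 8 inducing points summarize 50 observations
pred = sor_predict(x, y, x, z)
```

In the continual-learning setting of the paper, the variational distribution over such inducing points is updated auto-regressively as each new task arrives, rather than refit from scratch.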
Related papers
- Hybrid Gaussian Process Regression with Temporal Feature Extraction for Partially Interpretable Remaining Useful Life Interval Prediction in Aeroengine Prognostics [0.615155791092452]
This paper introduces a modified Gaussian Process Regression (GPR) model for Remaining Useful Life (RUL) interval prediction.
The modified GPR predicts confidence intervals by learning from historical data and addresses uncertainty modeling in a more structured way.
It effectively captures intricate time-series patterns and dynamic behaviors inherent in modern manufacturing systems.
arXiv Detail & Related papers (2024-11-19T03:00:02Z) - Temporal-Difference Variational Continual Learning [89.32940051152782]
A crucial capability of Machine Learning models in real-world applications is the ability to continuously learn new tasks.
In Continual Learning settings, models often struggle to balance learning new tasks with retaining previous knowledge.
We propose new learning objectives that integrate the regularization effects of multiple previous posterior estimations.
arXiv Detail & Related papers (2024-10-10T10:58:41Z) - Provable Guarantees for Generative Behavior Cloning: Bridging Low-Level
Stability and High-Level Behavior [51.60683890503293]
We propose a theoretical framework for studying behavior cloning of complex expert demonstrations using generative modeling.
We show that pure supervised cloning can generate trajectories matching the per-time step distribution of arbitrary expert trajectories.
arXiv Detail & Related papers (2023-07-27T04:27:26Z) - Causal Graph Discovery from Self and Mutually Exciting Time Series [10.410454851418548]
We develop a non-asymptotic recovery guarantee and quantifiable uncertainty by solving a linear program.
We demonstrate the effectiveness of our approach in recovering highly interpretable causal DAGs over Sepsis Associated Derangements (SADs).
arXiv Detail & Related papers (2023-01-26T16:15:27Z) - Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of the latent variable models for state-action value functions, which allows both a tractable variational learning algorithm and an effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
arXiv Detail & Related papers (2022-12-17T00:26:31Z) - Variational Laplace Autoencoders [53.08170674326728]
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables.
We present a novel approach that addresses the limited posterior expressiveness of fully-factorized Gaussian assumption.
We also present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models.
arXiv Detail & Related papers (2022-11-30T18:59:27Z) - Training Discrete Deep Generative Models via Gapped Straight-Through
Estimator [72.71398034617607]
We propose a Gapped Straight-Through (GST) estimator to reduce the variance without incurring resampling overhead.
This estimator is inspired by the essential properties of Straight-Through Gumbel-Softmax.
Experiments demonstrate that the proposed GST estimator enjoys better performance compared to strong baselines on two discrete deep generative modeling tasks.
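The GST estimator above builds on the standard straight-through Gumbel-Softmax trick. As a hedged sketch of that baseline (not the GST estimator itself), the numpy code below draws a relaxed categorical sample and shows the straight-through substitution as a comment, since numpy has no autodiff; in PyTorch or JAX the stop-gradient form would carry the soft sample's gradient.

```python
import numpy as np

def gumbel_softmax_st(logits, tau=1.0, rng=None):
    """Straight-through Gumbel-Softmax sample from unnormalized logits.

    Returns (y_hard, y_soft): a one-hot sample for the forward pass and
    the relaxed softmax sample whose gradient the ST trick reuses.
    """
    rng = rng or np.random.default_rng()
    # Gumbel(0, 1) noise via inverse transform sampling.
    g = -np.log(-np.log(rng.uniform(low=1e-12, high=1.0, size=logits.shape)))
    scaled = (logits + g) / tau
    y_soft = np.exp(scaled - scaled.max())   # stable softmax
    y_soft /= y_soft.sum()
    y_hard = np.zeros_like(y_soft)
    y_hard[np.argmax(y_soft)] = 1.0
    # In an autodiff framework the straight-through estimator is:
    #   y = y_hard - stop_gradient(y_soft) + y_soft
    # so the forward pass is discrete while gradients flow through y_soft.
    return y_hard, y_soft

rng = np.random.default_rng(42)
hard, soft = gumbel_softmax_st(np.array([2.0, 0.5, -1.0]), tau=0.5, rng=rng)
```

GST, per the abstract, modifies this scheme to cut gradient variance without the resampling overhead of other variance-reduction tricks.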
arXiv Detail & Related papers (2022-06-15T01:46:05Z) - Measuring and Reducing Model Update Regression in Structured Prediction
for NLP [31.86240946966003]
Backward compatibility requires that the new model does not regress on cases that were correctly handled by its predecessor.
This work studies model update regression in structured prediction tasks.
We propose a simple and effective method, Backward-Congruent Re-ranking (BCR), by taking into account the characteristics of structured output.
arXiv Detail & Related papers (2022-02-07T07:04:54Z) - Variational Inference for Continuous-Time Switching Dynamical Systems [29.984955043675157]
We present a model based on a Markov jump process modulating a subordinated diffusion process.
We develop a new continuous-time variational inference algorithm.
We extensively evaluate our algorithm under the model assumption and for real-world examples.
arXiv Detail & Related papers (2021-09-29T15:19:51Z) - Stochastically forced ensemble dynamic mode decomposition for
forecasting and analysis of near-periodic systems [65.44033635330604]
We introduce a novel load forecasting method in which observed dynamics are modeled as a forced linear system.
We show that its use of intrinsic linear dynamics offers a number of desirable properties in terms of interpretability and parsimony.
Results are presented for a test case using load data from an electrical grid.
arXiv Detail & Related papers (2020-10-08T20:25:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.