Aligned Multi-Task Gaussian Process
- URL: http://arxiv.org/abs/2110.15761v1
- Date: Fri, 29 Oct 2021 13:18:13 GMT
- Title: Aligned Multi-Task Gaussian Process
- Authors: Olga Mikheeva, Ieva Kazlauskaite, Adam Hartshorne, Hedvig
Kjellström, Carl Henrik Ek, Neill D. F. Campbell
- Abstract summary: Multi-task learning requires accurate identification of the correlations between tasks.
In real-world time-series, tasks are rarely perfectly aligned in time; traditional multi-task models do not account for this, and the resulting errors in correlation estimation lead to poor predictive performance.
We introduce a method that automatically accounts for temporal misalignment in a unified generative model that improves predictive performance.
- Score: 12.903751268469696
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-task learning requires accurate identification of the correlations
between tasks. In real-world time-series, tasks are rarely perfectly temporally
aligned; traditional multi-task models do not account for this and subsequent
errors in correlation estimation will result in poor predictive performance and
uncertainty quantification. We introduce a method that automatically accounts
for temporal misalignment in a unified generative model that improves
predictive performance. Our method uses Gaussian processes (GPs) to model the
correlations both within and between the tasks. Building on the previous work
by Kazlauskaite et al. [2019], we include a separate monotonic warp of the input
data to model temporal misalignment. In contrast to previous work, we formulate
a lower bound that accounts for uncertainty in both the estimates of the
warping process and the underlying functions. Also, our new take on a monotonic
stochastic process, with efficient path-wise sampling for the warp functions,
allows us to perform full Bayesian inference in the model rather than MAP
estimation. Missing-data experiments, on synthetic and real time-series,
demonstrate the advantages of accounting for misalignment (vs. a standard
unaligned method) as well as of modelling the uncertainty in the warping
process (vs. a baseline MAP alignment approach).
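To make the monotonic-warp construction concrete, here is a minimal numerical sketch, not the authors' implementation: a GP path sample is squashed through a softplus and integrated, which yields a strictly increasing warp. The RBF kernel, lengthscale, and trapezoidal integration are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(x, y, lengthscale=0.2, variance=1.0):
    """Squared-exponential covariance between input grids x and y."""
    d = x[:, None] - y[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def sample_monotonic_warp(t, rng, lengthscale=0.2, jitter=1e-8):
    """Draw one monotonic warp on [0, 1]: squash a GP path sample through a
    softplus so it is positive everywhere, then integrate and rescale to get
    a strictly increasing function with g(0) = 0 and g(1) = 1."""
    K = rbf_kernel(t, t, lengthscale) + jitter * np.eye(len(t))
    f = np.linalg.cholesky(K) @ rng.standard_normal(len(t))  # GP path at t
    rate = np.log1p(np.exp(f))                      # softplus: positive "speed"
    steps = 0.5 * (rate[1:] + rate[:-1]) * np.diff(t)   # trapezoidal rule
    g = np.concatenate([[0.0], np.cumsum(steps)])
    return g / g[-1]

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
warp = sample_monotonic_warp(t, rng)
assert np.all(np.diff(warp) > 0)  # monotone by construction
```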
Related papers
- MAP: Low-compute Model Merging with Amortized Pareto Fronts via Quadratic Approximation [80.47072100963017]
We introduce a novel and low-compute algorithm, Model Merging with Amortized Pareto Front (MAP).
MAP efficiently identifies a set of scaling coefficients for merging multiple models, reflecting the trade-offs involved.
We also introduce Bayesian MAP for scenarios with a relatively low number of tasks and Nested MAP for situations with a high number of tasks, further reducing the computational cost of evaluation.
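For intuition, the sketch below shows the task-arithmetic-style merging step that a set of MAP-selected scaling coefficients would plug into; the amortized Pareto-front search and quadratic approximation that actually choose the coefficients are the paper's contribution and are not reproduced here.

```python
import torch

def merge_models(base_state, task_states, coeffs):
    """Merge fine-tuned checkpoints into a base model as a weighted sum of
    task vectors (fine-tuned weights minus base weights), with one scaling
    coefficient per task."""
    merged = {}
    for name, base_param in base_state.items():
        delta = sum(c * (ts[name] - base_param)
                    for c, ts in zip(coeffs, task_states))
        merged[name] = base_param + delta
    return merged

# e.g. merge_models(base.state_dict(), [m1.state_dict(), m2.state_dict()], [0.4, 0.6])
```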
arXiv Detail & Related papers (2024-06-11T17:55:25Z)
- Deep Set Neural Networks for forecasting asynchronous bioprocess timeseries [0.28675177318965045]
Cultivation experiments often produce sparse and irregular time series.
Most statistical and Machine Learning tools are not designed for handling sparse data out-of-the-box.
We show that Deep Set Neural Networks equipped with triplet encoding of the input data can successfully handle bio-process data without any need for imputation or alignment procedures.
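As a hedged sketch of what such a triplet encoding can look like (the paper defines the exact scheme), each observed measurement becomes a (time, channel, value) triplet, so missing entries are simply never emitted and no imputation is needed:

```python
import numpy as np

def to_triplets(times, values):
    """Flatten an irregular multivariate series into (time, channel, value)
    triplets; NaN marks a missing value and is skipped rather than imputed."""
    t_idx, ch_idx = np.nonzero(~np.isnan(values))
    return np.stack([times[t_idx],
                     ch_idx.astype(float),
                     values[t_idx, ch_idx]], axis=1)

times = np.array([0.0, 1.5, 4.0])
values = np.array([[1.0, np.nan],
                   [np.nan, 0.3],
                   [2.0, 0.7]])            # 3 time steps, 2 channels
print(to_triplets(times, values))          # one row per observed measurement
```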
arXiv Detail & Related papers (2023-12-04T17:46:57Z)
- Time-series Generation by Contrastive Imitation [87.51882102248395]
We study a generative framework that seeks to combine the strengths of both local (transition-level) and global (trajectory-level) approaches: motivated by a moment-matching objective to mitigate compounding error, we optimize a local (but forward-looking) transition policy.
At inference, the learned policy serves as the generator for iterative sampling, and the learned energy serves as a trajectory-level measure for evaluating sample quality.
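A minimal sketch of that inference recipe, with hypothetical `policy` and `energy` interfaces (a policy returning a torch distribution over next states, and an energy network scoring whole trajectories; lower energy = better):

```python
import torch

@torch.no_grad()
def sample_and_rank(policy, energy, x0, horizon, n_candidates):
    """Roll out candidate trajectories with the learned transition policy,
    then keep the one the learned trajectory-level energy scores best."""
    trajs = []
    for _ in range(n_candidates):
        xs = [x0]
        for _ in range(horizon):
            xs.append(policy(xs[-1]).sample())   # one stochastic transition
        trajs.append(torch.stack(xs))
    scores = torch.stack([energy(traj) for traj in trajs])
    return trajs[int(scores.argmin())]
```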
arXiv Detail & Related papers (2023-11-02T16:45:25Z)
- AdaMerging: Adaptive Model Merging for Multi-Task Learning [68.75885518081357]
This paper introduces an innovative technique called Adaptive Model Merging (AdaMerging).
It aims to autonomously learn the coefficients for model merging, either in a task-wise or layer-wise manner, without relying on the original training data.
Compared to the current state-of-the-art task arithmetic merging scheme, AdaMerging showcases a remarkable 11% improvement in performance.
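A self-contained toy of the idea, not the paper's setup: one linear layer, two task vectors, and the merging coefficients as the only learnable parameters, trained by entropy minimization on an unlabeled batch (the label-free surrogate objective AdaMerging uses in place of the original training data):

```python
import torch

def entropy_loss(logits):
    """Mean Shannon entropy of the predictions; with no labels available,
    minimizing it serves as the surrogate training signal."""
    p = torch.softmax(logits, dim=-1)
    return -(p * torch.log(p + 1e-8)).sum(dim=-1).mean()

base = torch.randn(5, 10)                            # frozen base weights
task_vecs = [0.1 * torch.randn(5, 10) for _ in range(2)]
lambdas = torch.nn.Parameter(torch.full((2,), 0.3))  # merging coefficients
opt = torch.optim.Adam([lambdas], lr=1e-2)

x = torch.randn(32, 10)           # stand-in for an unlabeled test batch
for _ in range(100):
    w = base + sum(l * tv for l, tv in zip(lambdas, task_vecs))
    loss = entropy_loss(x @ w.T)  # logits of the merged model
    opt.zero_grad(); loss.backward(); opt.step()
```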
arXiv Detail & Related papers (2023-10-04T04:26:33Z)
- Better Batch for Deep Probabilistic Time Series Forecasting [15.31488551912888]
We propose an innovative training method that incorporates error autocorrelation to enhance probabilistic forecasting accuracy.
Our method constructs a mini-batch as a collection of $D$ consecutive time series segments for model training.
It explicitly learns a time-varying covariance matrix over each mini-batch, encoding error correlation among adjacent time steps.
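A small sketch of that batch construction, under the assumption that one training example is a window of $D$ consecutive fixed-length segments (learning the time-varying covariance itself is not shown):

```python
import numpy as np

def consecutive_segment_example(series, seg_len, d, rng):
    """Sample one training example: d consecutive segments of length seg_len
    starting at a random offset, so error correlations between adjacent steps
    inside the window can later be captured by a shared covariance matrix."""
    start = rng.integers(0, len(series) - d * seg_len + 1)
    window = series[start : start + d * seg_len]
    return window.reshape(d, seg_len)

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0.0, 20.0, 1000))
example = consecutive_segment_example(series, seg_len=24, d=4, rng=rng)
print(example.shape)  # (4, 24)
```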
arXiv Detail & Related papers (2023-05-26T15:36:59Z)
- Training on Test Data with Bayesian Adaptation for Covariate Shift [96.3250517412545]
Deep neural networks often make inaccurate predictions with unreliable uncertainty estimates.
We derive a Bayesian model that provides for a well-defined relationship between unlabeled inputs under distributional shift and model parameters.
We show that our method improves both accuracy and uncertainty estimation.
arXiv Detail & Related papers (2021-09-27T01:09:08Z)
- Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce the importance-guided stochastic gradient descent (IGSGD) method to train models to infer from inputs containing missing values without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
arXiv Detail & Related papers (2021-07-05T12:44:39Z)
- Contrastive learning of strong-mixing continuous-time stochastic processes [53.82893653745542]
Contrastive learning is a family of self-supervised methods where a model is trained to solve a classification task constructed from unlabeled data.
We show that a properly constructed contrastive learning task can be used to estimate the transition kernel for small-to-mid-range intervals in the diffusion case.
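One plausible construction of such a contrastive task, a sketch rather than the paper's exact estimator: pairs separated by lag `delta` on the same path are positives, pairs with a sample from the marginal are negatives, and a classifier trained on them approximates the transition density ratio.

```python
import numpy as np

def make_contrastive_pairs(trajs, delta, n_neg, rng):
    """trajs: array of shape (n_paths, T) of univariate sample paths.
    Positives: (x_t, x_{t+delta}) from the same path.  Negatives: the same
    x_t paired with a state from a random path and time (the marginal)."""
    xs, ys, labels = [], [], []
    n_paths, T = trajs.shape
    for i in range(n_paths):
        t = rng.integers(0, T - delta)
        xs.append(trajs[i, t]); ys.append(trajs[i, t + delta]); labels.append(1)
        for _ in range(n_neg):
            j, s = rng.integers(n_paths), rng.integers(T)
            xs.append(trajs[i, t]); ys.append(trajs[j, s]); labels.append(0)
    return np.array(xs), np.array(ys), np.array(labels)
```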
arXiv Detail & Related papers (2021-03-03T23:06:47Z)
- Causal Modeling with Stochastic Confounders [11.881081802491183]
This work extends causal inference to settings with stochastic confounders.
We propose a new approach to variational estimation for causal inference based on a representer theorem with a random input space.
arXiv Detail & Related papers (2020-04-24T00:34:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.