Towards new cross-validation-based estimators for Gaussian process
regression: efficient adjoint computation of gradients
- URL: http://arxiv.org/abs/2002.11543v2
- Date: Thu, 6 Aug 2020 11:25:04 GMT
- Title: Towards new cross-validation-based estimators for Gaussian process
regression: efficient adjoint computation of gradients
- Authors: Sébastien Petit (L2S, GdR MASCOT-NUM), Julien Bect (L2S, GdR MASCOT-NUM), Sébastien da Veiga (GdR MASCOT-NUM), Paul Feliot (GdR MASCOT-NUM), Emmanuel Vazquez (L2S, GdR MASCOT-NUM)
- Abstract summary: We suggest using new cross-validation criteria derived from the literature on scoring rules.
We also provide an efficient method for computing the gradient of a cross-validation criterion.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of estimating the parameters of the covariance
function of a Gaussian process by cross-validation. We suggest using new
cross-validation criteria derived from the literature on scoring rules. We also
provide an efficient method for computing the gradient of a cross-validation
criterion. To the best of our knowledge, our method is more efficient than
those proposed in the literature so far: it lowers the complexity of jointly
evaluating leave-one-out criteria and their gradients.
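For concreteness, the leave-one-out quantities behind such criteria have well-known closed-form expressions for Gaussian process regression, so all n held-out predictions can be obtained from a single factorization of the kernel matrix rather than n refits. Below is a minimal NumPy sketch of the leave-one-out log predictive score under those classical identities; it does not reproduce the paper's adjoint gradient computation, and the function name and noise parameter are illustrative.

```python
import numpy as np

def loo_log_predictive_score(K, y, noise=1e-6):
    """Leave-one-out log predictive density for a zero-mean GP.

    Classical identities: with A = (K + noise * I)^{-1},
        sigma_i^2 = 1 / A_ii   and   mu_i = y_i - (A y)_i / A_ii,
    so all n held-out predictions cost one O(n^3) factorization.
    """
    n = len(y)
    A = np.linalg.inv(K + noise * np.eye(n))  # one inverse serves every fold
    alpha = A @ y
    var = 1.0 / np.diag(A)        # LOO predictive variances
    residual = alpha * var        # y_i - mu_i for each held-out point
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + residual**2 / var)
```

Maximizing this score over covariance parameters is the classical log-score (pseudo-likelihood) criterion; the criteria suggested in the paper draw on other proper scoring rules, and their gradients are where the adjoint computation pays off.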
Related papers
- Online Estimation with Rolling Validation: Adaptive Nonparametric Estimation with Streaming Data [13.069717985067937]
We propose a weighted rolling-validation procedure, an online variant of leave-one-out cross-validation.
Similar to batch cross-validation, it can boost base estimators to achieve a better, adaptive convergence rate.
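A minimal sketch of the rolling-validation idea (not the paper's exact weighted procedure): each candidate estimator predicts the next observation before training on it, and weighted cumulative losses select among candidates online. The model interface and the weight schedule are illustrative assumptions.

```python
import numpy as np

def rolling_validation(models, stream, weight=lambda t: 1.0):
    """Online model selection by rolling validation (test-then-train).

    models: candidate estimators exposing .predict(x) and .update(x, y)
    stream: iterable of (x, y) pairs arriving one at a time
    weight: schedule for down-weighting old errors (illustrative)
    """
    losses = np.zeros(len(models))
    for t, (x, y) in enumerate(stream):
        for j, m in enumerate(models):
            losses[j] += weight(t) * (m.predict(x) - y) ** 2  # score first...
            m.update(x, y)                                    # ...then train
    return int(np.argmin(losses)), losses
```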
arXiv Detail & Related papers (2023-10-18T17:52:57Z)
- Stochastic Optimization for Non-convex Problem with Inexact Hessian Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) methods and adaptive regularization using cubics (ARC) have proven to have very appealing theoretical properties.
We show that TR and ARC methods can simultaneously accommodate inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z)
- Stability-Adjusted Cross-Validation for Sparse Linear Regression [5.156484100374059]
Cross-validation techniques such as k-fold cross-validation substantially increase the computational cost of sparse regression.
We propose selecting hyperparameters that minimize a weighted sum of a cross-validation metric and a model's output stability.
Our confidence adjustment procedure reduces test set error by 2%, on average, on 13 real-world datasets.
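A hedged sketch of the weighted-sum idea: the selection criterion adds a stability penalty to the usual cross-validation error. The `fit` interface and the coefficient-variance penalty are illustrative assumptions; the paper's stability measure may differ.

```python
import numpy as np

def stability_adjusted_score(fit, folds, lam, gamma=0.5):
    """Weighted sum of CV error and an (illustrative) stability penalty.

    fit(train, val, lam) returns (coef, validation_error) for one fold;
    instability is taken as the fold-to-fold variance of the coefficients.
    """
    coefs, errs = [], []
    for train, val in folds:
        coef, err = fit(train, val, lam)
        coefs.append(coef)
        errs.append(err)
    instability = np.mean(np.var(np.stack(coefs), axis=0))
    return float(np.mean(errs)) + gamma * float(instability)
```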
arXiv Detail & Related papers (2023-06-26T17:02:45Z)
- Positive definite nonparametric regression using an evolutionary algorithm with application to covariance function estimation [0.0]
We propose a novel nonparametric regression framework for estimating covariance functions of stationary processes.
Our method can impose positive definiteness, as well as isotropy and monotonicity, on the estimators.
Our method provides more reliable estimates for processes exhibiting long-range dependence.
arXiv Detail & Related papers (2023-04-25T22:01:14Z)
- Scalable Gaussian-process regression and variable selection using Vecchia approximations [3.4163060063961255]
We propose Vecchia-based mini-batch subsampling, which provides unbiased gradient estimators.
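The key structural fact is that a Vecchia approximation turns the log-likelihood into a sum of univariate conditional terms, one per observation, so uniformly subsampled terms give an unbiased gradient after rescaling. A minimal sketch under that assumption; the per-term gradient `cond_loglik_grad` and the neighbor layout are hypothetical placeholders.

```python
import numpy as np

def vecchia_minibatch_grad(theta, y, neighbors, cond_loglik_grad, batch_size, rng):
    """Unbiased mini-batch gradient of a Vecchia log-likelihood.

    Vecchia: log L(theta) = sum_i log p(y_i | y_{c(i)}; theta), with c(i)
    a small conditioning set. Uniform subsampling of terms, rescaled by
    n / batch_size, is unbiased for the full gradient.
    """
    n = len(y)
    idx = rng.choice(n, size=batch_size, replace=False)
    g = sum(cond_loglik_grad(theta, i, y, neighbors[i]) for i in idx)
    return (n / batch_size) * g
```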
arXiv Detail & Related papers (2022-02-25T21:22:38Z)
- Differentiable Annealed Importance Sampling and the Perils of Gradient Noise [68.44523807580438]
Annealed importance sampling (AIS) and related algorithms are highly effective tools for marginal likelihood estimation.
Differentiability is a desirable property as it would admit the possibility of optimizing marginal likelihood as an objective.
We propose a differentiable algorithm by abandoning Metropolis-Hastings steps, which further unlocks mini-batch computation.
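A minimal sketch of that construction: annealing between two log-densities with unadjusted Langevin (ULA) transitions instead of Metropolis-Hastings removes the discrete accept/reject branch, so the accumulated log-weight becomes a smooth function of any parameters inside the densities. The geometric annealing path and function signatures are assumptions; in practice this would run under an autodiff framework.

```python
import numpy as np

def ais_ula(logp0, logp1, grad0, grad1, x, betas, step, rng):
    """AIS log-weights with unadjusted Langevin transitions (no MH step).

    Anneals log p_b(x) = (1 - b) * logp0(x) + b * logp1(x) along betas
    (0 -> 1); x holds a batch of particles, one row each. Dropping the
    accept/reject step trades exact invariance for differentiability.
    """
    logw = np.zeros(x.shape[0])
    for b_prev, b in zip(betas[:-1], betas[1:]):
        logw += (b - b_prev) * (logp1(x) - logp0(x))   # incremental weight
        g = (1.0 - b) * grad0(x) + b * grad1(x)        # annealed score
        x = x + step * g + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
    return logw, x  # logsumexp(logw) - log(len(logw)) estimates log(Z1/Z0)
```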
arXiv Detail & Related papers (2021-07-21T17:10:14Z)
- Scalable Personalised Item Ranking through Parametric Density Estimation [53.44830012414444]
Learning from implicit feedback is challenging because of the one-class nature of the problem.
Most conventional methods use a pairwise ranking approach and negative samplers to cope with the one-class problem.
We propose a learning-to-rank approach, which achieves convergence speed comparable to the pointwise counterpart.
arXiv Detail & Related papers (2021-05-11T03:38:16Z)
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood-based model selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z)
- Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box Optimization Framework [100.36569795440889]
This work studies zeroth-order (ZO) optimization, which does not require first-order information.
We show that with a graceful design in coordinate importance sampling, the proposed ZO optimization method is efficient both in iteration complexity and in function query cost.
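A minimal sketch of a zeroth-order gradient estimator with coordinate importance sampling (not the paper's full hybrid framework): sampled coordinates get a central finite difference of function values, and dividing by the sampling probability keeps the estimate unbiased up to finite-difference error. The sampling distribution `probs` is an assumed input.

```python
import numpy as np

def zo_grad_estimate(f, x, probs, n_samples, mu, rng):
    """Zeroth-order gradient estimate via sampled coordinates.

    Uses only function queries (two per sampled coordinate). Drawing
    coordinate i with probability probs[i] and weighting by
    1 / (probs[i] * n_samples) gives an unbiased estimate up to O(mu^2).
    """
    g = np.zeros(x.size)
    for i in rng.choice(x.size, size=n_samples, replace=True, p=probs):
        e = np.zeros(x.size)
        e[i] = mu
        fd = (f(x + e) - f(x - e)) / (2.0 * mu)   # central difference
        g[i] += fd / (probs[i] * n_samples)
    return g
```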
arXiv Detail & Related papers (2020-12-21T17:29:58Z)
- Calibrated Adaptive Probabilistic ODE Solvers [31.442275669185626]
We introduce, discuss, and assess several probabilistically motivated ways to calibrate the uncertainty estimate.
We demonstrate the efficiency of the methodology by benchmarking against the classic, widely used Dormand-Prince 4/5 Runge-Kutta method.
arXiv Detail & Related papers (2020-12-15T10:48:55Z)
- Fast OSCAR and OWL Regression via Safe Screening Rules [97.28167655721766]
Ordered weighted $L_1$ (OWL) regularized regression is a regression approach for high-dimensional sparse learning.
Proximal gradient methods are used as standard approaches to solve OWL regression.
We propose the first safe screening rule for OWL regression, which exploits the order of the primal solution despite its unknown order structure.
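For context, the proximal step that such solvers iterate has a known direct construction (the sorted-$L_1$/SLOPE prox of Bogdan et al.): sort the magnitudes, subtract the ordered weights, project onto the nonincreasing nonnegative cone by pool-adjacent-violators, then undo the sort. The sketch below shows that standard prox, not the paper's screening rule itself.

```python
import numpy as np

def prox_owl(v, w):
    """Prox of the OWL penalty J_w(x) = sum_i w_i * |x|_(i),
    for w nonincreasing and nonnegative, len(w) == len(v)."""
    order = np.argsort(-np.abs(v))
    z = np.abs(v)[order] - w
    # Pool-adjacent-violators: project z onto the nonincreasing cone
    vals, sizes = [], []
    for zi in z:
        vals.append(zi)
        sizes.append(1)
        while len(vals) > 1 and vals[-2] <= vals[-1]:   # merge violating blocks
            total = vals[-1] * sizes[-1] + vals[-2] * sizes[-2]
            sizes[-2] += sizes[-1]
            vals[-2] = total / sizes[-2]
            vals.pop()
            sizes.pop()
    x = np.concatenate([np.full(s, max(val, 0.0)) for val, s in zip(vals, sizes)])
    out = np.empty(len(x))
    out[order] = x
    return np.sign(v) * out
```

A proximal-gradient solver alternates a gradient step on the least-squares loss with this prox; safe screening aims to discard coordinates guaranteed to be zero at the optimum before running those iterations.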
arXiv Detail & Related papers (2020-06-29T23:35:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.