Parameter selection in Gaussian process interpolation: an empirical
study of selection criteria
- URL: http://arxiv.org/abs/2107.06006v5
- Date: Tue, 8 Aug 2023 08:54:56 GMT
- Title: Parameter selection in Gaussian process interpolation: an empirical
study of selection criteria
- Authors: Sébastien Petit (L2S, GdR MASCOT-NUM), Julien Bect (L2S, GdR
MASCOT-NUM), Paul Feliot, Emmanuel Vazquez (L2S, GdR MASCOT-NUM)
- Abstract summary: This article revisits the fundamental problem of parameter selection for Gaussian process interpolation.
We show that the choice of an appropriate family of models is often more important than the choice of a particular selection criterion.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This article revisits the fundamental problem of parameter
selection for Gaussian process interpolation. By choosing the mean and
covariance functions of a Gaussian process within parametric families, the
user obtains a family of Bayesian procedures for predicting the unknown
function, and must choose a member of the family that will hopefully provide
good predictive performance. We base our study on two ingredients: the
general concept of scoring rules, which provides an effective framework for
building leave-one-out selection and validation criteria, and a notion of
extended likelihood criteria, based on an idea proposed by Fasshauer and
co-authors in 2009, which makes it possible to recover standard selection
criteria such as the generalized cross-validation criterion. In this
setting, we show empirically, on several test problems from the literature,
that the choice of an appropriate family of models is often more important
than the choice of a particular selection criterion (e.g., the likelihood
versus a leave-one-out criterion). Moreover, our numerical results show that
the regularity parameter of a Matérn covariance can be selected effectively
by most selection criteria.
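As a concrete illustration of the kind of criteria the paper compares, the sketch below evaluates a negative log-likelihood and a closed-form leave-one-out criterion for a Matérn covariance at several regularity values. This is a minimal toy example, not the authors' code: the test function, length scale, and noise level are made up.

```python
import numpy as np

def matern(d, ell, nu):
    # Closed-form Matérn covariance for nu in {0.5, 1.5, 2.5}.
    r = d / ell
    if nu == 0.5:
        return np.exp(-r)
    if nu == 1.5:
        return (1 + np.sqrt(3) * r) * np.exp(-np.sqrt(3) * r)
    if nu == 2.5:
        return (1 + np.sqrt(5) * r + 5 * r**2 / 3) * np.exp(-np.sqrt(5) * r)
    raise ValueError(f"unsupported nu: {nu}")

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 30))
y = np.sin(6 * x) + 0.01 * rng.standard_normal(30)
d = np.abs(x[:, None] - x[None, :])

def criteria(ell, nu, noise=1e-6):
    K = matern(d, ell, nu) + noise * np.eye(len(x))
    Kinv = np.linalg.inv(K)
    alpha = Kinv @ y
    # Negative log marginal likelihood, up to an additive constant.
    _, logdet = np.linalg.slogdet(K)
    nll = 0.5 * (y @ alpha + logdet)
    # Closed-form leave-one-out residuals: e_i = (K^{-1} y)_i / (K^{-1})_{ii}.
    loo = alpha / np.diag(Kinv)
    return nll, np.mean(loo**2)

for nu in (0.5, 1.5, 2.5):
    nll, loo_mse = criteria(ell=0.2, nu=nu)
    print(f"nu={nu}: neg. log-lik = {nll:.2f}, LOO-MSE = {loo_mse:.2e}")
```

The closed-form LOO residuals avoid refitting the model n times; in practice one would minimize either criterion over the length scale and regularity jointly.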
Related papers
- An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to represent potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
arXiv Detail & Related papers (2024-09-04T14:36:20Z)
- Detecting and Identifying Selection Structure in Sequential Data [53.24493902162797]
We argue that the selective inclusion of data points based on latent objectives is common in practical situations, such as music sequences.
We show that selection structure is identifiable without any parametric assumptions or interventional experiments.
We also propose a provably correct algorithm to detect and identify selection structures as well as other types of dependencies.
arXiv Detail & Related papers (2024-06-29T20:56:34Z)
- Confidence on the Focal: Conformal Prediction with Selection-Conditional Coverage [6.010965256037659]
Conformal prediction builds marginally valid prediction intervals that cover the unknown outcome of a randomly drawn new test point with a prescribed probability.
However, practitioners often focus on units selected after inspecting the data; in such cases, marginally valid conformal prediction intervals may not provide valid coverage for the focal unit(s) due to selection bias.
This paper presents a general framework for constructing a prediction set with finite-sample exact coverage conditional on the unit being selected.
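The marginal split-conformal construction that this paper refines can be sketched as follows. This is a toy illustration of marginal coverage only, not the paper's selection-conditional method; the polynomial model and data-generating process are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.uniform(-2, 2, n)
y = x**2 + rng.standard_normal(n)

# Fit a simple model on one half (here, a degree-2 polynomial).
fit_idx, cal_idx = np.arange(n // 2), np.arange(n // 2, n)
coef = np.polyfit(x[fit_idx], y[fit_idx], 2)
pred = lambda t: np.polyval(coef, t)

# Conformity scores on the calibration half: absolute residuals.
scores = np.abs(y[cal_idx] - pred(x[cal_idx]))

# For miscoverage alpha, take the ceil((n_cal + 1) * (1 - alpha))-th
# smallest score as the interval half-width.
alpha = 0.1
n_cal = len(cal_idx)
q = np.sort(scores)[int(np.ceil((n_cal + 1) * (1 - alpha))) - 1]

# Marginal prediction interval for a new point x0: pred(x0) +/- q.
x0 = 1.0
lo, hi = pred(x0) - q, pred(x0) + q
print(f"90% marginal interval at x0={x0}: [{lo:.2f}, {hi:.2f}]")
```

The interval covers a randomly drawn new point with probability at least 1 - alpha on average; the point of the paper above is that this guarantee can fail once x0 is chosen in a data-dependent way.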
arXiv Detail & Related papers (2024-03-06T17:18:24Z)
- Feature Selection as Deep Sequential Generative Learning [50.00973409680637]
We develop a deep variational transformer model trained on a joint objective combining sequential reconstruction, variational, and performance-evaluator losses.
Our model can distill feature selection knowledge and learn a continuous embedding space to map feature selection decision sequences into embedding vectors associated with utility scores.
arXiv Detail & Related papers (2024-03-06T16:31:56Z)
- Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
arXiv Detail & Related papers (2023-11-08T00:10:21Z)
- Finding Optimal Diverse Feature Sets with Alternative Feature Selection [0.0]
We introduce alternative feature selection and formalize it as an optimization problem.
In particular, we define alternatives via constraints and enable users to control the number and dissimilarity of alternatives.
We show that a constant-factor approximation exists under certain conditions and propose corresponding search methods.
arXiv Detail & Related papers (2023-07-21T14:23:41Z)
- Selective inference using randomized group lasso estimators for general models [3.4034453928075865]
The method includes the use of exponential family distributions, as well as quasi-likelihood modeling for overdispersed count data.
A randomized group-regularized optimization problem is studied.
Confidence regions for the regression parameters in the selected model take the form of Wald-type regions and are shown to have bounded volume.
arXiv Detail & Related papers (2023-06-24T01:14:26Z)
- In Search of Insights, Not Magic Bullets: Towards Demystification of the Model Selection Dilemma in Heterogeneous Treatment Effect Estimation [92.51773744318119]
This paper empirically investigates the strengths and weaknesses of different model selection criteria.
We highlight that there is a complex interplay between selection strategies, candidate estimators and the data used for comparing them.
arXiv Detail & Related papers (2023-02-06T16:55:37Z)
- Bounding Counterfactuals under Selection Bias [60.55840896782637]
We propose a first algorithm to address both identifiable and unidentifiable queries.
We prove that, in spite of the missingness induced by the selection bias, the likelihood of the available data is unimodal.
arXiv Detail & Related papers (2022-07-26T10:33:10Z)
- Black-box Selective Inference via Bootstrapping [5.960626580825523]
Conditional selective inference requires an exact characterization of the selection event, which is often unavailable except for a few examples like the lasso.
This work addresses this challenge by introducing a generic approach to estimate the selection event, facilitating feasible inference conditioned on the selection event.
arXiv Detail & Related papers (2022-03-28T05:18:21Z)
- Selective machine learning of doubly robust functionals [6.880360838661036]
We propose a selective machine learning framework for making inferences about a finite-dimensional functional defined on a semiparametric model.
We introduce a new selection criterion aimed at bias reduction in estimating the functional of interest based on a novel definition of pseudo-risk.
arXiv Detail & Related papers (2019-11-05T19:00:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.