Gradient-based explanations for Gaussian Process regression and
classification models
- URL: http://arxiv.org/abs/2205.12797v1
- Date: Wed, 25 May 2022 14:11:00 GMT
- Title: Gradient-based explanations for Gaussian Process regression and
classification models
- Authors: Sarem Seitz
- Abstract summary: Gaussian Processes (GPs) have proven themselves as a reliable and effective method in probabilistic Machine Learning.
Thanks to recent and current advances, modeling complex data with GPs is becoming more and more feasible.
We see an increasing interest in so-called explainable approaches - methods that aim to make a Machine Learning model's decision process transparent to humans.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Gaussian Processes (GPs) have proven themselves as a reliable and effective
method in probabilistic Machine Learning. Thanks to recent and current
advances, modeling complex data with GPs is becoming more and more feasible.
Thus, these types of models are, nowadays, an interesting alternative to Neural
and Deep Learning methods, which are arguably the current state-of-the-art in
Machine Learning. For the latter, we see an increasing interest in so-called
explainable approaches - in essence methods that aim to make a Machine Learning
model's decision process transparent to humans. Such methods are particularly
needed when illogical or biased reasoning can lead to actual disadvantageous
consequences for humans. Ideally, explainable Machine Learning should help
detect such flaws in a model and aid a subsequent debugging process. One active
line of research in Machine Learning explainability is gradient-based methods,
which have been successfully applied to complex neural networks. Given that GPs
are closed under differentiation, gradient-based explainability for GPs appears
as a promising field of research. This paper is primarily focused on explaining
GP classifiers via gradients where, contrary to GP regression, derivative GPs
are not straightforward to obtain.
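To make the regression case concrete (where, as noted, derivative GPs are straightforward): the posterior mean is a weighted sum of kernel evaluations, so its input gradient is available in closed form. Below is a minimal sketch assuming an RBF kernel; the variable names and toy data are illustrative, not the paper's code.

```python
# Sketch: input gradient of a GP regression posterior mean (RBF kernel).
# The gradient at a test point acts as a local feature attribution.
import numpy as np

def rbf(X1, X2, lengthscale=1.0):
    """k(x, x') = exp(-||x - x'||^2 / (2 l^2))."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def posterior_mean_grad(x_star, X, y, lengthscale=1.0, noise=1e-2):
    """Gradient of m(x*) = k(x*, X) @ alpha, alpha = (K + noise*I)^{-1} y.

    For the RBF kernel, grad_{x*} k(x*, x_i) = k(x*, x_i) (x_i - x*) / l^2,
    so the gradient is a closed-form sum over training points.
    """
    K = rbf(X, X, lengthscale) + noise * np.eye(len(X))
    alpha = np.linalg.solve(K, y)                      # (n,)
    k_star = rbf(x_star[None, :], X, lengthscale)[0]   # (n,)
    return (alpha * k_star) @ (X - x_star) / lengthscale**2

# Toy usage: only feature 0 drives the target, and the gradient shows it.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)
print(posterior_mean_grad(X[0], X, y))
```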
Related papers
- Compactly-supported nonstationary kernels for computing exact Gaussian processes on big data [2.8377382540923004]
We derive an alternative kernel that can discover and encode both sparsity and nonstationarity.
We demonstrate the favorable performance of our novel kernel relative to existing exact and approximate GP methods.
We also conduct space-time prediction based on more than one million measurements of daily maximum temperature.
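For intuition only, here is a standard compactly-supported kernel (a Wendland C2 function) showing how compact support yields an exactly sparse Gram matrix; this is not the nonstationary kernel derived in the paper.

```python
# Generic compactly-supported kernel: entries are exactly zero beyond the
# support radius, so the Gram matrix is sparse by construction.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.spatial.distance import cdist

def wendland_c2(X1, X2, support=1.0):
    """k(x, x') = (1 - r)_+^4 (4r + 1) with r = ||x - x'|| / support;
    positive definite for input dimension <= 3."""
    r = cdist(X1, X2) / support
    k = np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)
    return csr_matrix(k)

X = np.random.default_rng(0).random((2000, 2))
K = wendland_c2(X, X, support=0.1)
print(f"Gram matrix density: {K.nnz / 2000**2:.1%}")  # mostly zeros
```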
arXiv Detail & Related papers (2024-11-07T20:07:21Z)
- Amortized Variational Inference for Deep Gaussian Processes [0.0]
Deep Gaussian processes (DGPs) are multilayer generalizations of Gaussian processes (GPs)
We introduce amortized variational inference for DGPs, which learns an inference function that maps each observation to variational parameters.
Our method performs similarly or better than previous approaches at less computational cost.
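The amortization idea in its generic form, as a hedged sketch: instead of storing variational parameters per data point, a shared network maps each observation to its parameters. The tiny network below is illustrative, not the paper's DGP-specific architecture.

```python
# Generic amortized inference: one shared network produces the local
# variational distribution for every observation.
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_HID, D_LATENT = 4, 16, 2
W1 = rng.normal(0.0, 0.1, (D_IN, D_HID))      # shared inference weights
W2 = rng.normal(0.0, 0.1, (D_HID, 2 * D_LATENT))

def inference_net(x):
    """Map one observation to its variational parameters (mu, log_var).
    Amortization: the same weights serve every data point, so the number
    of variational parameters no longer grows with the dataset size."""
    h = np.tanh(x @ W1)
    out = h @ W2
    return out[:D_LATENT], out[D_LATENT:]

def sample_q(x):
    """Reparameterized sample z ~ N(mu(x), diag(exp(log_var(x))))."""
    mu, log_var = inference_net(x)
    return mu + np.exp(0.5 * log_var) * rng.normal(size=D_LATENT)

print(sample_q(rng.normal(size=D_IN)))
```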
arXiv Detail & Related papers (2024-09-18T20:23:27Z)
- Wilsonian Renormalization of Neural Network Gaussian Processes [1.8749305679160366]
We demonstrate a practical approach to performing Wilsonian RG in the context of Gaussian Process (GP) Regression.
We systematically integrate out the unlearnable modes of the GP kernel, thereby obtaining an RG flow of the GP in which the data sets the IR scale.
This approach goes beyond structural analogies between RG and neural networks by providing a natural connection between RG flow and learnable vs. unlearnable modes.
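One schematic way to picture "integrating out unlearnable modes" is truncating the kernel's eigenspectrum at the noise scale set by the data; the cutoff rule below is a hypothetical simplification, not the paper's RG prescription.

```python
# Schematic only: drop kernel eigenmodes the data cannot resolve.
import numpy as np

def truncate_kernel(K, noise_var):
    """Keep only eigenmodes whose eigenvalue exceeds the noise scale."""
    eigvals, eigvecs = np.linalg.eigh(K)
    keep = eigvals > noise_var              # "learnable" modes
    V = eigvecs[:, keep]
    return (V * eigvals[keep]) @ V.T, int(keep.sum())

X = np.linspace(0, 1, 100)[:, None]
K = np.exp(-0.5 * (X - X.T) ** 2 / 0.1**2)
K_eff, n_modes = truncate_kernel(K, noise_var=1e-3)
print(f"{n_modes} of 100 modes survive the cutoff")
```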
arXiv Detail & Related papers (2024-05-09T18:00:00Z)
- Stochastic Gradient Descent for Gaussian Processes Done Right [86.83678041846971]
We show that when done right -- by which we mean using specific insights from the optimisation and kernel communities -- gradient descent is highly effective.
We introduce a stochastic dual descent algorithm, explain its design in an intuitive manner, and illustrate the design choices.
Our method places Gaussian process regression on par with state-of-the-art graph neural networks for molecular binding affinity prediction.
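The dual view underlying the method, as a minimal sketch: GP regression is the minimizer of a quadratic dual objective whose stochastic gradients need only kernel rows, not a Cholesky factorization. The authors' full algorithm adds momentum, iterate averaging, and tuned step sizes; the toy version below omits these.

```python
import numpy as np

def stochastic_dual_descent(K, y, noise=1e-1, lr=0.5, batch=64, steps=5000):
    """Block-stochastic gradient descent on the dual objective
    0.5 * a^T (K + noise*I) a - y^T a, whose minimizer gives the
    GP posterior mean m(x*) = k(x*, X) @ a."""
    n = len(y)
    alpha = np.zeros(n)
    rng = np.random.default_rng(0)
    for _ in range(steps):
        idx = rng.choice(n, size=batch, replace=False)
        grad = K[idx] @ alpha + noise * alpha[idx] - y[idx]
        alpha[idx] -= lr / batch * grad    # only kernel rows needed
    return alpha

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
K = np.exp(-0.5 * ((X[:, None] - X[None]) ** 2).sum(-1))
y = np.sin(X[:, 0])
alpha = stochastic_dual_descent(K, y)
print(np.abs(K @ alpha + 1e-1 * alpha - y).max())  # residual shrinks with steps
```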
arXiv Detail & Related papers (2023-10-31T16:15:13Z)
- Model-Based Reparameterization Policy Gradient Methods: Theory and Practical Algorithms [88.74308282658133]
Reparameterization (RP) Policy Gradient Methods (PGMs) have been widely adopted for continuous control tasks in robotics and computer graphics.
Recent studies have revealed that, when applied to long-term reinforcement learning problems, model-based RP PGMs may experience chaotic and non-smooth optimization landscapes.
We propose a spectral normalization method to mitigate the exploding variance issue caused by long model unrolls.
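Spectral normalization itself is a standard operation; a generic power-iteration sketch follows (how it is wired into the model-based unrolls is specific to the paper).

```python
import numpy as np

def spectral_normalize(W, n_iters=30):
    """Divide W by its largest singular value (power-iteration estimate),
    capping the layer's Lipschitz constant at ~1 -- which is what keeps
    gradients from exploding over long model unrolls."""
    u = np.random.default_rng(0).normal(size=W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v                   # top singular value
    return W / sigma

W = np.random.default_rng(1).normal(size=(64, 64))
print(np.linalg.norm(spectral_normalize(W), 2))   # approx 1.0
```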
arXiv Detail & Related papers (2023-10-30T18:43:21Z)
- Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
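For context, a generic inducing-point predictive equation (subset-of-regressors flavour), where the inducing locations Z are free parameters one could learn; the paper's contribution is learning them directly in a feature space, which this sketch does not reproduce.

```python
import numpy as np

def rbf(A, B):
    return np.exp(-0.5 * ((A[:, None] - B[None]) ** 2).sum(-1))

def sor_predict(X, y, Z, x_star, noise=1e-2):
    """Subset-of-regressors predictive mean with m inducing points Z:
    O(n m^2) work instead of the full GP's O(n^3)."""
    Kzz = rbf(Z, Z) + 1e-8 * np.eye(len(Z))
    Kzx = rbf(Z, X)
    A = noise * Kzz + Kzx @ Kzx.T
    w = np.linalg.solve(A, Kzx @ y)
    return rbf(x_star, Z) @ w

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
y = np.sin(X[:, 0])
Z = X[rng.choice(1000, 20, replace=False)]  # inducing points (free parameters)
print(sor_predict(X, y, Z, X[:5]))
```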
arXiv Detail & Related papers (2022-04-21T05:27:09Z)
- Non-Gaussian Gaussian Processes for Few-Shot Regression [71.33730039795921]
We propose an invertible ODE-based mapping that operates on each component of the random variable vectors and shares the parameters across all of them.
NGGPs outperform the competing state-of-the-art approaches on a diversified set of benchmarks and applications.
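The structural idea of an invertible, parameter-shared ODE mapping can be sketched with toy dynamics: integrating the same vector field backwards inverts the transform. This is not the paper's learned flow.

```python
import numpy as np

THETA = (1.5, 0.5)                 # parameters shared by every component

def dynamics(z, theta=THETA):
    a, b = theta
    return a * np.tanh(b * z)      # smooth field => the flow is invertible

def flow(z, n_steps=500, reverse=False):
    """Euler-integrate dz/dt = f(z) from t=0 to 1 (backwards to invert)."""
    dt = (-1.0 if reverse else 1.0) / n_steps
    for _ in range(n_steps):
        z = z + dt * dynamics(z)
    return z

z0 = np.random.default_rng(0).normal(size=5)
z1 = flow(z0)                                              # elementwise map
print(np.allclose(flow(z1, reverse=True), z0, atol=1e-2))  # approx inverse
```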
arXiv Detail & Related papers (2021-10-26T10:45:25Z)
- Incremental Ensemble Gaussian Processes [53.3291389385672]
We propose an incremental ensemble (IE-) GP framework, where an EGP meta-learner employs an ensemble of GP learners, each having a unique kernel belonging to a prescribed kernel dictionary.
With each GP expert leveraging the random feature-based approximation to perform online prediction and model updates with scalability, the EGP meta-learner capitalizes on data-adaptive weights to synthesize the per-expert predictions.
The novel IE-GP is generalized to accommodate time-varying functions by modeling structured dynamics at the EGP meta-learner and within each GP learner.
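The random-feature approximation each expert relies on reduces a GP to Bayesian linear regression with constant-cost online updates; below is a generic random Fourier features sketch (in the Rahimi-Recht style), not the paper's ensemble machinery.

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, N_FEAT, NOISE = 2, 200, 1e-2
Omega = rng.normal(size=(D_IN, N_FEAT))  # spectral samples for an RBF kernel
b = rng.uniform(0, 2 * np.pi, N_FEAT)

def phi(x):
    """Random Fourier feature map approximating the RBF kernel."""
    return np.sqrt(2.0 / N_FEAT) * np.cos(x @ Omega + b)

# Online Bayesian linear regression in feature space:
P = np.eye(N_FEAT)               # posterior precision (unit-variance prior)
r = np.zeros(N_FEAT)
for _ in range(500):             # stream of observations, one at a time
    x = rng.normal(size=D_IN)
    y = np.sin(x[0])
    f = phi(x)
    P += np.outer(f, f) / NOISE  # rank-1 precision update per observation
    r += f * y / NOISE
w = np.linalg.solve(P, r)        # posterior mean weights
print(w @ phi(np.array([0.5, 0.0])), np.sin(0.5))  # prediction vs truth
```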
arXiv Detail & Related papers (2021-10-13T15:11:25Z)
- Deep Gaussian Processes for Biogeophysical Parameter Retrieval and Model Inversion [14.097477944789484]
This paper introduces the use of deep Gaussian Processes (DGPs) for biogeophysical model inversion.
Unlike shallow GP models, DGPs account for complicated (modular, hierarchical) processes and provide an efficient solution that scales well to big datasets.
arXiv Detail & Related papers (2021-04-16T10:42:01Z)
- The data-driven physical-based equations discovery using evolutionary approach [77.34726150561087]
We describe an algorithm for discovering mathematical equations from observational data.
The algorithm combines genetic programming with sparse regression.
It could be used for governing analytical equation discovery as well as for partial differential equations (PDE) discovery.
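The sparse-regression half can be sketched in the SINDy style: regress an observed derivative on a library of candidate terms and threshold small coefficients away. The genetic programming that proposes candidate terms in the paper is not reproduced here.

```python
import numpy as np

t = np.linspace(0, 10, 1000)
u = np.exp(-0.5 * t) * np.cos(2 * t)     # observed damped oscillation
du = np.gradient(u, t)                   # target: du/dt
ddu = np.gradient(du, t)
library = np.column_stack([u, u**2, u**3, ddu])
names = ["u", "u^2", "u^3", "u''"]

# Sequentially thresholded least squares:
xi = np.linalg.lstsq(library, du, rcond=None)[0]
for _ in range(10):
    small = np.abs(xi) < 0.05
    xi[small] = 0.0
    keep = ~small
    if keep.any():
        xi[keep] = np.linalg.lstsq(library[:, keep], du, rcond=None)[0]

# u = e^{-t/2} cos(2t) satisfies u'' + u' + 4.25 u = 0,
# so the recovered sparse equation is du ~ -4.25 u - 1.0 u''.
print(dict(zip(names, np.round(xi, 3))))
```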
arXiv Detail & Related papers (2020-04-03T17:21:57Z)
- Transport Gaussian Processes for Regression [0.22843885788439797]
We propose a methodology to construct stochastic processes, including GPs, warped GPs, Student-t processes, and several others.
Our approach is inspired by layers-based models, where each proposed layer changes a specific property over the generated process.
We validate the proposed model through experiments with real-world data.
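The layer idea can be sketched by pushing a GP sample through successive monotone marginal transforms, each changing one property of the process; the layers below are illustrative choices, not the paper's specific constructions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)[:, None]
K = np.exp(-0.5 * (x - x.T) ** 2 / 0.1**2) + 1e-8 * np.eye(200)
f = np.linalg.cholesky(K) @ rng.normal(size=200)   # base GP sample

layers = [
    lambda g: np.sinh(g),   # monotone warp: heavier-tailed marginals
    lambda g: np.exp(g),    # monotone warp: positive-valued process
]
g = f
for layer in layers:
    g = layer(g)            # each invertible layer alters one property

print(f.min(), g.min())     # the warped sample is strictly positive
```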
arXiv Detail & Related papers (2020-01-30T17:44:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.