Gradient-based explanations for Gaussian Process regression and
classification models
- URL: http://arxiv.org/abs/2205.12797v1
- Date: Wed, 25 May 2022 14:11:00 GMT
- Title: Gradient-based explanations for Gaussian Process regression and
classification models
- Authors: Sarem Seitz
- Abstract summary: Gaussian Processes (GPs) have proven themselves as a reliable and effective method in probabilistic Machine Learning.
Thanks to recent and current advances, modeling complex data with GPs is becoming more and more feasible.
We see an increasing interest in so-called explainable approaches - methods that aim to make a Machine Learning model's decision process transparent to humans.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Gaussian Processes (GPs) have proven themselves as a reliable and effective
method in probabilistic Machine Learning. Thanks to recent and current
advances, modeling complex data with GPs is becoming more and more feasible.
Thus, these types of models are, nowadays, an interesting alternative to Neural
and Deep Learning methods, which are arguably the current state-of-the-art in
Machine Learning. For the latter, we see an increasing interest in so-called
explainable approaches - in essence methods that aim to make a Machine Learning
model's decision process transparent to humans. Such methods are particularly
needed when illogical or biased reasoning can lead to actual disadvantageous
consequences for humans. Ideally, explainable Machine Learning should help
detect such flaws in a model and aid a subsequent debugging process. One active
line of research in Machine Learning explainability is gradient-based methods,
which have been successfully applied to complex neural networks. Given that GPs
are closed under differentiation, gradient-based explainability for GPs appears
as a promising field of research. This paper is primarily focused on explaining
GP classifiers via gradients where, contrary to GP regression, derivative GPs
are not straightforward to obtain.
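Because a GP posterior mean is a weighted sum of kernel evaluations, its input gradient is available in closed form for regression. Below is a minimal sketch for an exact GP regressor with an RBF kernel; the kernel choice, lengthscale, noise level and helper names (rbf, mean_and_grad) are illustrative assumptions, not the paper's setup.
```python
# A minimal sketch of a gradient-based explanation for exact GP regression
# with an RBF kernel. Kernel, lengthscale and noise level are illustrative
# assumptions, not the paper's setup.
import numpy as np

def rbf(A, B, ls=1.0):
    """RBF kernel matrix between the rows of A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / ls ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 2))                 # toy training inputs
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)

ls, noise = 1.0, 0.1
K = rbf(X, X, ls) + noise ** 2 * np.eye(len(X))
alpha = np.linalg.solve(K, y)                        # (K + sigma^2 I)^{-1} y

def mean_and_grad(x_star):
    """Posterior mean m(x*) and its input gradient.

    For the RBF kernel, d k(x*, x_i)/d x* = -(x* - x_i)/ls^2 * k(x*, x_i),
    so the gradient explanation is a kernel-weighted sum over training points.
    """
    k_star = rbf(x_star[None, :], X, ls)[0]                  # shape (n,)
    dk = -(x_star[None, :] - X) / ls ** 2 * k_star[:, None]  # shape (n, d)
    return k_star @ alpha, dk.T @ alpha

m, g = mean_and_grad(np.array([0.5, -1.0]))
print("m(x*):", m, "input-gradient explanation:", g)
```
For classification, as the abstract notes, this convenience is lost: the non-Gaussian likelihood breaks the closed form, which is why derivative GPs for classifiers require the extra machinery the paper develops.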
Related papers
- Data-Driven Abstractions via Binary-Tree Gaussian Processes for Formal Verification [0.22499166814992438]
Abstraction-based solutions built on Gaussian process (GP) regression have become popular for their ability to learn a representation of the latent system from data with a quantified error.
We show that the binary-tree Gaussian process (BTGP) allows us to construct an interval Markov chain model of the unknown system.
We provide a delocalized error quantification via a unified formula even when the true dynamics do not live in the function space of the BTGP.
arXiv Detail & Related papers (2024-07-15T11:49:44Z)
- Model-Based Reparameterization Policy Gradient Methods: Theory and Practical Algorithms [88.74308282658133]
Reparameterization (RP) Policy Gradient Methods (PGMs) have been widely adopted for continuous control tasks in robotics and computer graphics.
Recent studies have revealed that, when applied to long-term reinforcement learning problems, model-based RP PGMs may experience chaotic and non-smooth optimization landscapes.
We propose a spectral normalization method to mitigate the exploding variance issue caused by long model unrolls.
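The summary does not spell out the exact normalization; as a rough illustration, here is a generic spectral-normalization sketch via power iteration, which rescales a weight matrix so its largest singular value is approximately one. The paper's variant for model-based RP policy gradients may differ.
```python
# A generic spectral-normalization sketch (power iteration); the paper's
# exact variant for model-based RP policy gradients may differ.
import numpy as np

def spectral_normalize(W, n_iters=30):
    """Rescale W so its largest singular value is approximately 1."""
    u = np.random.default_rng(0).standard_normal(W.shape[0])
    u /= np.linalg.norm(u)
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v              # estimate of the leading singular value
    return W / sigma

W = np.random.default_rng(1).standard_normal((64, 64))
print(np.linalg.svd(spectral_normalize(W), compute_uv=False)[0])  # ~1.0
```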
arXiv Detail & Related papers (2023-10-30T18:43:21Z)
- Linear Time GPs for Inferring Latent Trajectories from Neural Spike Trains [7.936841911281107]
We propose cvHM, a general inference framework for latent GP models leveraging Hida-Matérn kernels and conjugate variational inference (CVI).
We are able to perform variational inference of latent neural trajectories with linear time complexity for arbitrary likelihoods.
arXiv Detail & Related papers (2023-06-01T16:31:36Z)
- MACE: An Efficient Model-Agnostic Framework for Counterfactual Explanation [132.77005365032468]
We propose a novel framework, Model-Agnostic Counterfactual Explanation (MACE).
In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
Experiments on public datasets validate the framework's effectiveness, yielding better validity, sparsity and proximity.
arXiv Detail & Related papers (2022-05-31T04:57:06Z)
- Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
arXiv Detail & Related papers (2022-04-21T05:27:09Z)
- Incremental Ensemble Gaussian Processes [53.3291389385672]
We propose an incremental ensemble (IE-) GP framework, where an EGP meta-learner employs an ensemble of GP learners, each with a unique kernel from a prescribed kernel dictionary.
With each GP expert leveraging a random feature-based approximation to perform scalable online prediction and model updates, the EGP meta-learner capitalizes on data-adaptive weights to synthesize the per-expert predictions.
The novel IE-GP is generalized to accommodate time-varying functions by modeling structured dynamics at the EGP meta-learner and within each GP learner.
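As a generic illustration of the random feature-based approximation behind such scalable online updates (not the paper's exact estimator), Bayesian linear regression over random Fourier features approximates an RBF-kernel GP with rank-1 streaming updates:
```python
# A sketch of online GP prediction via random Fourier features (RFF):
# Bayesian linear regression on phi(x) approximates an RBF-kernel GP.
# Feature count, lengthscale and noise level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, D, ls, noise = 1, 200, 1.0, 0.1
W = rng.standard_normal((D, d)) / ls        # spectral frequencies
b = rng.uniform(0, 2 * np.pi, D)

def phi(x):
    """Random Fourier feature map approximating the RBF kernel."""
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

Lam = np.eye(D)                 # posterior precision, prior w ~ N(0, I)
bvec = np.zeros(D)

def update(x, y):
    """Rank-1 online update after observing (x, y)."""
    f = phi(x)
    Lam[...] += np.outer(f, f) / noise ** 2
    bvec[...] += f * y / noise ** 2

def predict(x):
    """Approximate GP posterior mean at x."""
    return phi(x) @ np.linalg.solve(Lam, bvec)

for t in range(200):            # stream of toy observations
    x = rng.uniform(-3, 3, d)
    update(x, np.sin(x[0]) + noise * rng.standard_normal())

print(predict(np.array([1.0])), "vs true", np.sin(1.0))
```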
arXiv Detail & Related papers (2021-10-13T15:11:25Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Deep Gaussian Processes for Biogeophysical Parameter Retrieval and Model Inversion [14.097477944789484]
This paper introduces the use of deep Gaussian Processes (DGPs) for bio-geo-physical model inversion.
Unlike shallow GP models, DGPs account for complicated (modular, hierarchical) processes and provide an efficient solution that scales well to big datasets.
arXiv Detail & Related papers (2021-04-16T10:42:01Z)
- GP-Tree: A Gaussian Process Classifier for Few-Shot Incremental Learning [23.83961717568121]
GP-Tree is a novel method for multi-class classification with Gaussian processes and deep kernel learning.
We develop a tree-based hierarchical model in which each internal node fits a GP to the data.
Our method scales well with both the number of classes and data size.
arXiv Detail & Related papers (2021-02-15T22:16:27Z)
- Applying Genetic Programming to Improve Interpretability in Machine Learning Models [0.3908287552267639]
We propose a Genetic Programming (GP) based approach, named Genetic Programming Explainer (GPX).
The method generates a noise set located in the neighborhood of the point of interest, whose prediction should be explained, and fits a local explanation model for the analyzed sample.
Our results indicate that GPX produces a more accurate understanding of complex models than the state of the art.
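The underlying local-surrogate recipe - sample a noise set around the point of interest, label it with the black-box model, fit an interpretable surrogate - can be sketched as follows, with a plain linear surrogate standing in for GPX's genetic-programming search and all names illustrative:
```python
# A sketch of the local-surrogate recipe GPX follows: perturb the point of
# interest, label the neighborhood with the black-box model, fit a simple
# surrogate. A linear model stands in for the genetic-programming search.
import numpy as np

def local_explain(black_box, x0, scale=0.1, n=500, seed=0):
    """Fit a local linear surrogate to black_box around x0."""
    rng = np.random.default_rng(seed)
    Z = x0 + scale * rng.standard_normal((n, x0.size))  # noise set near x0
    y = black_box(Z)
    A = np.hstack([Z, np.ones((n, 1))])                 # add an intercept
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[:-1]                                    # local feature weights

f = lambda X: np.sin(X[:, 0]) * np.exp(-X[:, 1] ** 2)  # toy black box
print(local_explain(f, np.array([0.5, -1.0])))
```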
arXiv Detail & Related papers (2020-05-18T16:09:49Z)
- The data-driven physical-based equations discovery using evolutionary approach [77.34726150561087]
We describe an algorithm for discovering mathematical equations from observational data.
The algorithm combines genetic programming with sparse regression.
It can be used to discover governing analytical equations as well as partial differential equations (PDEs).
arXiv Detail & Related papers (2020-04-03T17:21:57Z)
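The sparse-regression half of such equation-discovery pipelines can be illustrated with a SINDy-style fit, used here as a stand-in rather than this paper's method: build a library of candidate terms, regress a numerical derivative onto it, and iteratively threshold small coefficients. The library and threshold below are illustrative assumptions.
```python
# A SINDy-style sparse-regression sketch for equation discovery: regress a
# numerical derivative onto a library of candidate terms and iteratively
# threshold small coefficients. Library and threshold are assumptions.
import numpy as np

t = np.linspace(0, 10, 2001)
x = 2.0 * np.exp(-0.5 * t)            # toy data obeying dx/dt = -0.5 x
dx = np.gradient(x, t)                # numerical derivative

# candidate term library: [1, x, x^2, x^3]
Theta = np.stack([np.ones_like(x), x, x ** 2, x ** 3], axis=1)

xi, *_ = np.linalg.lstsq(Theta, dx, rcond=None)
for _ in range(10):                   # sequential thresholding
    small = np.abs(xi) < 0.05
    xi[small] = 0.0
    keep = ~small
    xi[keep], *_ = np.linalg.lstsq(Theta[:, keep], dx, rcond=None)

print("recovered dx/dt coefficients on [1, x, x^2, x^3]:", xi)
# expected: approximately [0, -0.5, 0, 0]
```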