SurvBeX: An explanation method of the machine learning survival models
based on the Beran estimator
- URL: http://arxiv.org/abs/2308.03730v1
- Date: Mon, 7 Aug 2023 17:18:37 GMT
- Title: SurvBeX: An explanation method of the machine learning survival models
based on the Beran estimator
- Authors: Lev V. Utkin and Danila Y. Eremenko and Andrei V. Konstantinov
- Abstract summary: An explanation method called SurvBeX is proposed to interpret predictions of machine learning survival black-box models.
Many numerical experiments with synthetic and real survival data demonstrate the efficiency of SurvBeX.
- Score: 4.640835690336653
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An explanation method called SurvBeX is proposed to interpret predictions of
machine learning survival black-box models. The main idea behind the method
is to use the modified Beran estimator as the surrogate explanation model.
The coefficients incorporated into the Beran estimator can be regarded as
the feature impacts on the black-box model prediction. Following the well-known
LIME method, many points are generated in a local area around an example of
interest. For every generated example, the survival function of the black-box
model is computed, and the survival function of the surrogate model (the Beran
estimator) is constructed as a function of the explanation coefficients. In
order to find the explanation coefficients, it is proposed to minimize the mean
distance between the survival functions of the black-box model and the Beran
estimator produced by the generated examples. Many numerical experiments with
synthetic and real survival data demonstrate the efficiency of SurvBeX and compare
the method with the well-known SurvLIME method. The method is also compared
with SurvSHAP. The code implementing SurvBeX is available at:
https://github.com/DanilaEremenko/SurvBeX
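To make the procedure described in the abstract concrete, below is a minimal, hedged sketch of the idea: perturb the example of interest, query the black-box survival functions, and fit the feature weights of a Beran-estimator surrogate by minimizing the mean distance between the two sets of survival functions. The helper names (`black_box_survival`, `explain_survbex`), the Gaussian kernel, the squared-error distance, and the Nelder-Mead optimizer are illustrative assumptions, not the authors' implementation (see the repository above for the official code).
```python
import numpy as np
from scipy.optimize import minimize

def beran_survival(b, x, X_train, times_train, events_train, grid):
    """Beran estimator S(t | x) with feature weights b inside a Gaussian kernel."""
    # kernel weights from the b-weighted squared distance to each training point
    d2 = np.sum((b * (X_train - x)) ** 2, axis=1)
    w = np.exp(-d2)
    w = w / (w.sum() + 1e-12)
    # walk through training times in increasing order, accumulating the product
    order = np.argsort(times_train)
    t_sorted, e_sorted, w_sorted = times_train[order], events_train[order], w[order]
    cum_w = np.concatenate(([0.0], np.cumsum(w_sorted)[:-1]))
    factors = np.where(e_sorted == 1,
                       1.0 - w_sorted / np.maximum(1.0 - cum_w, 1e-12),
                       1.0)
    surv_at_events = np.cumprod(factors)
    # evaluate the resulting step function on the time grid
    idx = np.searchsorted(t_sorted, grid, side="right") - 1
    return np.where(idx >= 0, surv_at_events[np.clip(idx, 0, None)], 1.0)

def explain_survbex(black_box_survival, x0, X_train, times_train, events_train,
                    grid, n_samples=100, scale=0.1, seed=0):
    """Return feature weights b explaining the black-box prediction at x0."""
    rng = np.random.default_rng(seed)
    # generate points in a local area around the example of interest (as in LIME)
    X_local = x0 + scale * rng.standard_normal((n_samples, x0.shape[0]))
    S_bb = np.array([black_box_survival(x, grid) for x in X_local])

    def loss(b):
        S_beran = np.array([beran_survival(b, x, X_train, times_train,
                                           events_train, grid)
                            for x in X_local])
        # mean distance between black-box and surrogate survival functions
        return np.mean((S_bb - S_beran) ** 2)

    res = minimize(loss, x0=np.ones(x0.shape[0]), method="Nelder-Mead")
    return res.x  # larger |b_k| suggests a larger impact of feature k
```
The derivative-free optimizer is used here only because the Beran weights enter the loss nonlinearly; any other minimizer of the mean distance over the generated examples would serve the same illustrative purpose.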
Related papers
- Consensus-Adaptive RANSAC [104.87576373187426]
We propose a new RANSAC framework that learns to explore the parameter space by considering the residuals seen so far via a novel attention layer.
The attention mechanism operates on a batch of point-to-model residuals, and updates a per-point estimation state to take into account the consensus found through a lightweight one-step transformer.
arXiv Detail & Related papers (2023-07-26T08:25:46Z)
- SurvSHAP(t): Time-dependent explanations of machine learning survival models [6.950862982117125]
We introduce SurvSHAP(t), the first time-dependent explanation that allows for interpreting survival black-box models.
Experiments on synthetic and medical data confirm that SurvSHAP(t) can detect variables with a time-dependent effect.
We provide an accessible implementation of time-dependent explanations in Python.
arXiv Detail & Related papers (2022-08-23T17:01:14Z)
- Sensing Cox Processes via Posterior Sampling and Positive Bases [56.82162768921196]
We study adaptive sensing of point processes, a widely used model from spatial statistics.
We model the intensity function as a sample from a truncated Gaussian process, represented in a specially constructed positive basis.
Our adaptive sensing algorithms use Langevin dynamics and are based on posterior sampling (Cox-Thompson) and top-two posterior sampling (Top2) principles.
arXiv Detail & Related papers (2021-10-21T14:47:06Z)
- Sampling from Arbitrary Functions via PSD Models [55.41644538483948]
We take a two-step approach by first modeling the probability distribution and then sampling from that model.
We show that these models can approximate a large class of densities concisely using few evaluations, and present a simple algorithm to effectively sample from these models.
arXiv Detail & Related papers (2021-10-20T12:25:22Z)
- MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data.
arXiv Detail & Related papers (2021-06-03T12:59:16Z)
- Counterfactual explanation of machine learning survival models [5.482532589225552]
It is shown that the counterfactual explanation problem can be reduced to a standard convex optimization problem with linear constraints.
For other black-box models, it is proposed to apply the well-known Particle Swarm Optimization algorithm.
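As an illustration of the second route mentioned above, here is a small, hedged sketch of a Particle Swarm Optimization search for a counterfactual: find a point close to the original example whose black-box predicted mean survival time reaches a given target. The objective, the soft-constraint penalty, and the helper `predict_mean_time` are assumptions for illustration, not the paper's formulation.
```python
import numpy as np

def pso_counterfactual(predict_mean_time, x0, target,
                       n_particles=30, n_iters=200, penalty=100.0, seed=0):
    """Search for x close to x0 with predict_mean_time(x) >= target."""
    rng = np.random.default_rng(seed)
    dim = x0.shape[0]
    pos = x0 + 0.5 * rng.standard_normal((n_particles, dim))
    vel = np.zeros_like(pos)

    def cost(x):
        # closeness to the original example plus a soft constraint on the target
        shortfall = max(0.0, target - predict_mean_time(x))
        return np.linalg.norm(x - x0, ord=1) + penalty * shortfall

    pbest = pos.copy()
    pbest_cost = np.array([cost(x) for x in pos])
    gbest = pbest[pbest_cost.argmin()].copy()

    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # standard PSO velocity update with inertia and attraction terms
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        costs = np.array([cost(x) for x in pos])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
        gbest = pbest[pbest_cost.argmin()].copy()

    return gbest  # candidate counterfactual example
```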
arXiv Detail & Related papers (2020-06-26T19:46:47Z)
- Efficient Ensemble Model Generation for Uncertainty Estimation with Bayesian Approximation in Segmentation [74.06904875527556]
We propose a generic and efficient segmentation framework to construct ensemble segmentation models.
In the proposed method, ensemble models can be efficiently generated by using the layer selection method.
We also devise a new pixel-wise uncertainty loss, which improves the predictive performance.
arXiv Detail & Related papers (2020-05-21T16:08:38Z)
- A robust algorithm for explaining unreliable machine learning survival models using the Kolmogorov-Smirnov bounds [5.482532589225552]
SurvLIME-KS is proposed for explaining machine learning survival models.
It is designed to ensure robustness in cases with a small amount of training data or with outliers in the survival data.
Various numerical experiments with synthetic and real datasets demonstrate the efficiency of SurvLIME-KS.
arXiv Detail & Related papers (2020-05-05T14:47:35Z)
- SurvLIME-Inf: A simplified modification of SurvLIME for explanation of machine learning survival models [4.640835690336653]
The basic idea behind SurvLIME as well as SurvLIME-Inf is to apply the Cox proportional hazards model to approximate the black-box survival model at the local area around a test example.
In contrast to SurvLIME, the proposed modification uses the $L_\infty$-norm for defining distances between approximating and approximated cumulative hazard functions.
arXiv Detail & Related papers (2020-05-05T14:34:46Z)
- Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior? [97.77183117452235]
We carry out human subject tests to isolate the effect of algorithmic explanations on model interpretability.
Clear evidence of method effectiveness is found in very few cases.
Our results provide the first reliable and comprehensive estimates of how explanations influence simulatability.
arXiv Detail & Related papers (2020-05-04T20:35:17Z)
- SurvLIME: A method for explaining machine learning survival models [4.640835690336653]
The main idea behind the proposed method is to apply the Cox proportional hazards model to approximate the survival model at the local area around a test example.
Numerous numerical experiments demonstrate the efficiency of SurvLIME (a sketch of the local Cox approximation is given after this entry).
arXiv Detail & Related papers (2020-03-18T17:48:42Z)
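As a companion to the SurvBeX sketch given earlier, the fragment below illustrates, under stated assumptions, the SurvLIME-style idea shared by the SurvLIME and SurvLIME-Inf entries above: approximate the black-box cumulative hazard function around a test example with a Cox proportional hazards model by fitting its coefficients on generated neighbors. The helpers `black_box_chf` and `baseline_chf`, the distance between logarithms of cumulative hazard functions, and the optimizer are assumptions for illustration, not the authors' algorithm.
```python
import numpy as np
from scipy.optimize import minimize

def explain_cox_locally(black_box_chf, baseline_chf, x0, grid,
                        n_samples=100, scale=0.1, seed=0):
    """Fit Cox coefficients b so that H0(t) * exp(b @ x) approximates the
    black-box cumulative hazard function H(t | x) around x0."""
    rng = np.random.default_rng(seed)
    # neighbors of the test example, as in LIME-style explanations
    X_local = x0 + scale * rng.standard_normal((n_samples, x0.shape[0]))
    H_bb = np.array([black_box_chf(x, grid) for x in X_local])   # shape (n, T)
    log_H0 = np.log(baseline_chf(grid) + 1e-12)                  # shape (T,)

    def loss(b):
        # log of the Cox approximation: log H0(t) + b @ x
        log_H_cox = log_H0[None, :] + (X_local @ b)[:, None]
        # SurvLIME-style: average squared distance between log-CHFs;
        # SurvLIME-Inf would instead use the maximum absolute difference
        return np.mean((np.log(H_bb + 1e-12) - log_H_cox) ** 2)

    res = minimize(loss, x0=np.zeros(x0.shape[0]), method="Nelder-Mead")
    return res.x  # b_k plays the role of the feature impact in the local area
```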