SurvLIME: A method for explaining machine learning survival models
- URL: http://arxiv.org/abs/2003.08371v1
- Date: Wed, 18 Mar 2020 17:48:42 GMT
- Title: SurvLIME: A method for explaining machine learning survival models
- Authors: Maxim S. Kovalev, Lev V. Utkin, Ernest M. Kasimov
- Abstract summary: The main idea behind the proposed method is to apply the Cox proportional hazards model to approximate the survival model in a local area around a test example.
Numerous numerical experiments demonstrate the efficiency of SurvLIME.
- Score: 4.640835690336653
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A new method called SurvLIME for explaining machine learning survival models
is proposed. It can be viewed as an extension or modification of the well-known
method LIME. The main idea behind the proposed method is to apply the Cox
proportional hazards model to approximate the survival model in a local area
around a test example. The Cox model is used because it considers a linear
combination of the example covariates such that coefficients of the covariates
can be regarded as quantitative impacts on the prediction. Another idea is to
approximate cumulative hazard functions of the explained model and the Cox
model by using a set of perturbed points in a local area around the point of
interest. The method is reduced to solving an unconstrained convex optimization
problem. Numerous numerical experiments demonstrate the efficiency of SurvLIME.
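The local approximation described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `predict_chf`, `baseline_chf`, the Gaussian perturbation scheme, and the kernel weights are all assumptions. For a Cox model, ln H(t|x) - ln H0(t) = b.x for every t, so the coefficients can be recovered by weighted least squares on that difference.

```python
import numpy as np

def survlime_explain(predict_chf, x0, baseline_chf, times,
                     n_samples=1000, sigma=0.5, rng=0):
    """Fit local Cox coefficients b so that ln H0(t) + b.x approximates the
    black-box log cumulative hazard ln H(t|x) around the test example x0."""
    rng = np.random.default_rng(rng)
    d = len(x0)
    # Perturb points in a local area around the point of interest.
    X = x0 + sigma * rng.standard_normal((n_samples, d))
    # Kernel weights: perturbations closer to x0 matter more.
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * sigma ** 2))
    # Targets: ln H(t|x_k) - ln H0(t), averaged over the time grid
    # (for a Cox model this difference equals b.x_k at every t).
    eps = 1e-8
    T = np.log(predict_chf(X) + eps) - np.log(baseline_chf(times) + eps)
    y = T.mean(axis=1)
    # Weighted least squares: the coefficients are the covariates' impacts.
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X + 1e-6 * np.eye(d), X.T @ W @ y)
```

As a sanity check, if the black box is itself a Cox model, the recovered coefficients should match the generating ones.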
Related papers
- SurvBeX: An explanation method of the machine learning survival models
based on the Beran estimator [4.640835690336653]
An explanation method called SurvBeX is proposed to interpret predictions of the machine learning survival black-box models.
Many numerical experiments with synthetic and real survival data demonstrate the efficiency of SurvBeX.
arXiv Detail & Related papers (2023-08-07T17:18:37Z)
- Probabilistic Unrolling: Scalable, Inverse-Free Maximum Likelihood Estimation for Latent Gaussian Models [69.22568644711113]
We introduce probabilistic unrolling, a method that combines Monte Carlo sampling with iterative linear solvers to circumvent matrix inversions.
Our theoretical analyses reveal that unrolling and backpropagation through the iterations of the solver can accelerate gradient estimation for maximum likelihood estimation.
In experiments on simulated and real data, we demonstrate that probabilistic unrolling learns latent Gaussian models up to an order of magnitude faster than gradient EM, with minimal losses in model performance.
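The inverse-free ingredient can be illustrated with a standard Hutchinson trace estimator combined with conjugate gradients. This is a sketch of the general idea only, not the paper's probabilistic-unrolling algorithm; the function name and the choice of Rademacher probes are assumptions.

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def hutchinson_trace_inv(A_mv, dA, n, n_probes=20, rng=0):
    """Estimate tr(A^{-1} dA) without forming A^{-1}: draw Rademacher
    probes z, solve A y = dA z by conjugate gradients, average z.y."""
    rng = np.random.default_rng(rng)
    A_op = LinearOperator((n, n), matvec=A_mv, dtype=float)
    est = 0.0
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=n)
        y, _ = cg(A_op, dA @ z)   # iterative linear solve, no inversion
        est += z @ y
    return est / n_probes
```

Trace terms of this form appear in the gradient of the Gaussian log-likelihood through the log-determinant; unrolling then backpropagates through the solver iterations.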
arXiv Detail & Related papers (2023-06-05T21:08:34Z)
- Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
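The probabilistic representation underlying such solvers can be illustrated with the Feynman-Kac formula for the heat equation. The estimator below is an illustration of the representation itself, not the paper's neural solver; the function name and parameters are assumptions.

```python
import numpy as np

def heat_mc(g, x, t, sigma=1.0, n_paths=100_000, rng=0):
    """Feynman-Kac sketch: u(t, x) = E[g(x + sigma * W_t)] solves the heat
    equation u_t = (sigma^2 / 2) u_xx with u(0, x) = g(x); the expectation
    is estimated by sampling Brownian-motion endpoints W_t ~ N(0, t)."""
    rng = np.random.default_rng(rng)
    w = np.sqrt(t) * rng.standard_normal(n_paths)
    return np.mean(g(x + sigma * w))
```

For example, with g(x) = x^2 and sigma = 1 the exact solution is x^2 + t, so the ensemble of random particles should average to about 1.5 at (t, x) = (0.5, 1).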
arXiv Detail & Related papers (2023-02-10T08:05:19Z)
- Variable selection for nonlinear Cox regression model via deep learning [0.0]
We extend the recently developed deep learning-based variable selection model LassoNet to survival data.
We apply the proposed methodology to analyze a real data set on diffuse large B-cell lymphoma.
arXiv Detail & Related papers (2022-11-17T01:17:54Z)
- Sensing Cox Processes via Posterior Sampling and Positive Bases [56.82162768921196]
We study adaptive sensing of point processes, a widely used model from spatial statistics.
We model the intensity function as a sample from a truncated Gaussian process, represented in a specially constructed positive basis.
Our adaptive sensing algorithms use Langevin dynamics and are based on posterior sampling (Cox-Thompson) and top-two posterior sampling (Top2) principles.
arXiv Detail & Related papers (2021-10-21T14:47:06Z)
- Residual Overfit Method of Exploration [78.07532520582313]
We propose an approximate exploration methodology based on fitting only two point estimates, one tuned and one overfit.
The approach drives exploration towards actions where the overfit model exhibits the most overfitting compared to the tuned model.
We compare ROME against a set of established contextual bandit methods on three datasets and find it to be one of the best performing.
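The tuned-versus-overfit idea can be sketched with two polynomial fits of different capacity; the function and the specific degrees are illustrative assumptions, not the paper's models.

```python
import numpy as np

def rome_score(x_hist, r_hist, x_new, tuned_deg=1, overfit_deg=8):
    """Sketch of the residual-overfit idea: fit a tuned (low-capacity) and
    an overfit (high-capacity) reward model to the observed data; the gap
    between their predictions at x_new acts as an exploration bonus added
    to the tuned estimate."""
    tuned = np.polyval(np.polyfit(x_hist, r_hist, tuned_deg), x_new)
    over = np.polyval(np.polyfit(x_hist, r_hist, overfit_deg), x_new)
    return tuned + np.abs(over - tuned)
```

Where the overfit model agrees with the tuned one the bonus vanishes; where it exhibits the most overfitting, the score is inflated and exploration is driven toward that action.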
arXiv Detail & Related papers (2021-10-06T17:05:33Z)
- A Twin Neural Model for Uplift [59.38563723706796]
Uplift is a particular case of conditional treatment effect modeling.
We propose a new loss function defined by leveraging a connection with the Bayesian interpretation of the relative risk.
We show our proposed method is competitive with the state-of-the-art in simulation setting and on real data from large scale randomized experiments.
arXiv Detail & Related papers (2021-05-11T16:02:39Z)
- Counterfactual explanation of machine learning survival models [5.482532589225552]
It is shown that the counterfactual explanation problem can be reduced to a standard convex optimization problem with linear constraints.
For other black-box models, it is proposed to apply the well-known Particle Swarm Optimization algorithm.
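A generic particle swarm search for a counterfactual under a penalty formulation might look as follows. The loss, the PSO constants, and all names are assumptions for illustration, not the paper's algorithm: we look for a point z close to x0 whose black-box prediction reaches a target value.

```python
import numpy as np

def pso_counterfactual(f, x0, target, n_particles=30, n_iter=200,
                       lam=10.0, rng=0):
    """Toy particle swarm search: minimize ||z - x0||^2 + lam*(f(z) - target)^2,
    where f maps a batch of points (n, d) to predictions (n,)."""
    rng = np.random.default_rng(rng)
    d = len(x0)
    loss = lambda Z: np.sum((Z - x0) ** 2, axis=1) + lam * (f(Z) - target) ** 2
    Z = x0 + rng.standard_normal((n_particles, d))   # initial swarm
    V = np.zeros((n_particles, d))
    pbest, pbest_val = Z.copy(), loss(Z)
    g = pbest[np.argmin(pbest_val)].copy()           # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 1))
        # Standard velocity update: inertia + cognitive + social terms.
        V = 0.7 * V + 1.5 * r1 * (pbest - Z) + 1.5 * r2 * (g - Z)
        Z = Z + V
        val = loss(Z)
        better = val < pbest_val
        pbest[better], pbest_val[better] = Z[better], val[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g
```

Because the swarm only queries f, this works for any black-box model, which is the point of using PSO when the convex reduction is unavailable.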
arXiv Detail & Related papers (2020-06-26T19:46:47Z)
- Efficient Ensemble Model Generation for Uncertainty Estimation with Bayesian Approximation in Segmentation [74.06904875527556]
We propose a generic and efficient segmentation framework to construct ensemble segmentation models.
In the proposed method, ensemble models can be efficiently generated by using the layer selection method.
We also devise a new pixel-wise uncertainty loss, which improves the predictive performance.
arXiv Detail & Related papers (2020-05-21T16:08:38Z)
- A robust algorithm for explaining unreliable machine learning survival models using the Kolmogorov-Smirnov bounds [5.482532589225552]
SurvLIME-KS is proposed for explaining machine learning survival models.
It is developed to ensure robustness to cases of a small amount of training data or outliers of survival data.
Various numerical experiments with synthetic and real datasets demonstrate the efficiency of SurvLIME-KS.
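The Kolmogorov-Smirnov bounds that give the method its robustness can be computed directly from the Dvoretzky-Kiefer-Wolfowitz inequality. This sketch shows only the confidence band itself, not the SurvLIME-KS optimization; the function name is an assumption.

```python
import numpy as np

def dkw_band(surv_emp, n, alpha=0.05):
    """DKW/Kolmogorov-Smirnov band sketch: with probability >= 1 - alpha,
    the true survival function lies within +/- eps of the empirical one,
    where eps = sqrt(ln(2 / alpha) / (2 n)) and n is the sample size."""
    eps = np.sqrt(np.log(2 / alpha) / (2 * n))
    return np.clip(surv_emp - eps, 0, 1), np.clip(surv_emp + eps, 0, 1)
```

With a small sample or outliers the band is wide, which is exactly the regime the robust method is designed for.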
arXiv Detail & Related papers (2020-05-05T14:47:35Z)
- SurvLIME-Inf: A simplified modification of SurvLIME for explanation of machine learning survival models [4.640835690336653]
The basic idea behind SurvLIME as well as SurvLIME-Inf is to apply the Cox proportional hazards model to approximate the black-box survival model at the local area around a test example.
In contrast to SurvLIME, the proposed modification uses the $L_\infty$-norm to define distances between the approximating and approximated cumulative hazard functions.
arXiv Detail & Related papers (2020-05-05T14:34:46Z)
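The change of norm is easy to state on a discrete time grid. A small sketch (the function name is an assumption) comparing the two distances between cumulative hazard functions:

```python
import numpy as np

def chf_distances(h_blackbox, h_cox):
    """Euclidean (L2) and Chebyshev (L_infinity) distances between two
    cumulative hazard functions sampled on the same time grid."""
    diff = np.asarray(h_blackbox, dtype=float) - np.asarray(h_cox, dtype=float)
    return np.sqrt(np.sum(diff ** 2)), np.max(np.abs(diff))
```

SurvLIME minimizes the first quantity over the Cox coefficients, while SurvLIME-Inf minimizes the second, which simplifies the resulting optimization problem.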
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.