Counterfactual explanation of machine learning survival models
- URL: http://arxiv.org/abs/2006.16793v1
- Date: Fri, 26 Jun 2020 19:46:47 GMT
- Title: Counterfactual explanation of machine learning survival models
- Authors: Maxim S. Kovalev and Lev V. Utkin
- Abstract summary: It is shown that the counterfactual explanation problem can be reduced to a standard convex optimization problem with linear constraints.
For other black-box models, it is proposed to apply the well-known Particle Swarm Optimization algorithm.
- Score: 5.482532589225552
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A method for counterfactual explanation of machine learning survival models
is proposed. One of the difficulties of solving the counterfactual explanation
problem is that the classes of examples are implicitly defined through outcomes
of a machine learning survival model in the form of survival functions. A
condition that establishes the difference between survival functions of the
original example and the counterfactual is introduced. This condition is based
on using a distance between mean times to event. It is shown that the
counterfactual explanation problem can be reduced to a standard convex
optimization problem with linear constraints when the explained black-box model
is the Cox model. For other black-box models, it is proposed to apply the
well-known Particle Swarm Optimization algorithm. Numerous numerical
experiments with real and synthetic data demonstrate the proposed method.
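The abstract names two routes: an exact convex program with linear constraints when the black box is a Cox model, and Particle Swarm Optimization (PSO) otherwise. As a rough, non-authoritative illustration of the model-agnostic route only, the sketch below searches for a counterfactual whose mean time to event (the area under the predicted survival function) differs from that of the original example by at least a threshold r, handled here as a soft penalty. The surv_fn interface, time grid, bounds, penalty weight, and PSO hyperparameters are all assumptions made for the example, not the paper's formulation.

```python
import numpy as np

def mean_time_to_event(surv_fn, x, times):
    """Approximate E[T | x] as the area under the survival curve S(t | x)."""
    return np.trapz(surv_fn(x), times)  # surv_fn(x): S(t | x) on the time grid

def counterfactual_pso(surv_fn, x0, times, r, bounds, n_particles=50,
                       n_iters=200, w=0.7, c1=1.5, c2=1.5, penalty=1e3, seed=0):
    """Toy PSO: find z close to x0 whose mean time to event differs from
    x0's by at least r (the paper's condition, imposed here as a penalty)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    m0 = mean_time_to_event(surv_fn, x0, times)

    def objective(z):
        gap = abs(mean_time_to_event(surv_fn, z, times) - m0)
        return np.sum((z - x0) ** 2) + penalty * max(0.0, r - gap)

    pos = rng.uniform(lo, hi, size=(n_particles, x0.size))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()

    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest
```

The paper handles the Cox case exactly via convex optimization; in this sketch the penalty weight merely trades off satisfying the mean-time gap against staying close to the original example.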
Related papers
- Learning with Explanation Constraints [91.23736536228485]
We provide a learning-theoretic framework to analyze how explanations can improve the learning of our models.
We demonstrate the benefits of our approach over a large array of synthetic and real-world experiments.
arXiv Detail & Related papers (2023-03-25T15:06:47Z)
- SurvSHAP(t): Time-dependent explanations of machine learning survival models [6.950862982117125]
We introduce SurvSHAP(t), the first time-dependent explanation that allows for interpreting survival black-box models.
Experiments on synthetic and medical data confirm that SurvSHAP(t) can detect variables with a time-dependent effect.
We provide an accessible implementation of time-dependent explanations in Python.
arXiv Detail & Related papers (2022-08-23T17:01:14Z)
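SurvSHAP(t) computes SHAP values as functions of time; the entry above does not spell out the algorithm, so the sketch below only conveys the general shape of a time-dependent explanation, substituting plain permutation importance on the predicted survival function for actual SHAP values. surv_fn and the time grid are assumed interfaces, and this is an intuition aid rather than the authors' method.

```python
import numpy as np

def time_dependent_importance(surv_fn, X, times, n_repeats=10, seed=0):
    """Permutation importance of each feature at each time point: how much
    does shuffling feature j change S(t | x), on average? (Illustration of
    time-resolved attribution only; not SHAP.)"""
    rng = np.random.default_rng(seed)
    base = np.array([surv_fn(x) for x in X])       # shape (n, len(times))
    n, d = X.shape
    imp = np.zeros((d, len(times)))
    for j in range(d):
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])   # break feature j's link
            pert = np.array([surv_fn(x) for x in Xp])
            imp[j] += np.abs(pert - base).mean(axis=0)
        imp[j] /= n_repeats
    return imp   # imp[j, k]: effect of feature j on S(t) at times[k]
```

Plotting imp[j] against times reveals features whose influence grows or fades over the follow-up period, which is the kind of time-dependent effect the paper targets.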
- MACE: An Efficient Model-Agnostic Framework for Counterfactual Explanation [132.77005365032468]
We propose a novel framework, Model-Agnostic Counterfactual Explanation (MACE).
In MACE, an RL-based method finds good counterfactual examples and a gradient-less descent method improves their proximity.
Experiments on public datasets validate its effectiveness, with better validity, sparsity, and proximity.
arXiv Detail & Related papers (2022-05-31T04:57:06Z)
- Learning to Scaffold: Optimizing Model Explanations for Teaching [74.25464914078826]
We train models on three natural language processing and computer vision tasks.
We find that students trained with explanations extracted with our framework are able to simulate the teacher significantly more effectively than ones produced with previous methods.
arXiv Detail & Related papers (2022-04-22T16:43:39Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
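The entry above learns perturbations in a disentangled latent space under a diversity-enforcing loss. As a hedged sketch of the diversity term alone, the snippet below penalizes pairwise cosine similarity among a set of candidate latent perturbations; the paper's actual loss, latent space, and training procedure may differ.

```python
import numpy as np

def diversity_loss(perturbations, eps=1e-8):
    """Penalize pairwise cosine similarity among latent perturbations so a
    batch of counterfactuals is pushed to differ (illustrative stand-in)."""
    Z = np.asarray(perturbations)                        # (k, latent_dim)
    Zn = Z / (np.linalg.norm(Z, axis=1, keepdims=True) + eps)
    sim = Zn @ Zn.T                                      # cosine similarities
    off_diag = sim[~np.eye(len(Z), dtype=bool)]
    return off_diag.mean()                               # lower = more diverse
```

In a full objective this term would be added to the usual counterfactual losses (prediction change plus proximity), with a weight balancing diversity against them.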
- Dependency Decomposition and a Reject Option for Explainable Models [4.94950858749529]
Recent deep learning models perform extremely well in various inference tasks.
Recent advances offer methods to visualize features and to describe attribution of the input.
We present the first analysis of dependencies regarding the probability distribution over the desired image classification outputs.
arXiv Detail & Related papers (2020-12-11T17:39:33Z)
- MeLIME: Meaningful Local Explanation for Machine Learning Models [2.819725769698229]
We show that our approach, MeLIME, produces more meaningful explanations than other techniques across different ML models.
MeLIME generalizes the LIME method, allowing more flexible perturbation sampling and the use of different local interpretable models.
arXiv Detail & Related papers (2020-09-12T16:06:58Z)
- Deducing neighborhoods of classes from a fitted model [68.8204255655161]
In this article, a new kind of interpretable machine learning method is presented.
It can help to understand the partitioning of the feature space into predicted classes in a classification model using quantile shifts.
Real data points (or specific points of interest) are used, and the changes in the prediction after slightly raising or lowering specific features are observed.
arXiv Detail & Related papers (2020-09-11T16:35:53Z)
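A minimal sketch of the probing idea described in the entry above, under assumed interfaces: take a real data point, shift one feature up or down by a small step on its empirical quantile scale, and record the predicted class after the shift. The predict signature and step size are placeholders; the paper's construction of class neighborhoods is more elaborate.

```python
import numpy as np

def quantile_shift_probe(predict, X, x, shift=0.05):
    """For each feature j, move x_j by +/- `shift` on the empirical quantile
    scale of X[:, j] and record the predicted class after the move."""
    base_class = predict(x[None, :])[0]
    report = {}
    for j in range(X.shape[1]):
        q = (X[:, j] <= x[j]).mean()                  # quantile of x_j in X
        for direction, dq in (("down", -shift), ("up", +shift)):
            x_new = x.copy()
            x_new[j] = np.quantile(X[:, j], np.clip(q + dq, 0.0, 1.0))
            report[(j, direction)] = predict(x_new[None, :])[0]
    return base_class, report   # class flips mark nearby decision boundaries
```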
- The data-driven physical-based equations discovery using evolutionary approach [77.34726150561087]
We describe an algorithm for discovering mathematical equations from given observation data.
The algorithm combines genetic programming with sparse regression.
It can be used to discover governing analytical equations as well as partial differential equations (PDEs).
arXiv Detail & Related papers (2020-04-03T17:21:57Z)
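In the entry above, genetic programming proposes candidate terms while sparse regression selects among them. The sketch below shows only the sparse-regression half in a generic, SINDy-style form (sequentially thresholded least squares); the term library, derivative data, and threshold are placeholders, and the evolutionary component is not reproduced here.

```python
import numpy as np

def sparse_equation_fit(library, dudt, threshold=0.05, n_iters=10):
    """Sequentially thresholded least squares: fit dudt ~ library @ coeffs
    and repeatedly zero out coefficients below `threshold`, so only a few
    candidate terms survive to form the discovered equation."""
    coeffs, *_ = np.linalg.lstsq(library, dudt, rcond=None)
    for _ in range(n_iters):
        small = np.abs(coeffs) < threshold
        coeffs[small] = 0.0
        keep = ~small
        if keep.any():
            coeffs[keep], *_ = np.linalg.lstsq(library[:, keep], dudt, rcond=None)
    return coeffs   # nonzero entries name the equation's surviving terms
```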
- SurvLIME: A method for explaining machine learning survival models [4.640835690336653]
The main idea behind the proposed method is to apply the Cox proportional hazards model to approximate the survival model in a local area around a test example.
Numerous numerical experiments demonstrate the efficiency of SurvLIME.
arXiv Detail & Related papers (2020-03-18T17:48:42Z)
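A hedged sketch of the SurvLIME idea as summarized above: sample points around the test example, query the black box's (log) cumulative hazard, and fit Cox coefficients b so that log H0(t) + b . x matches the black box in a distance-weighted least-squares sense. The log_H interface, the baseline log cumulative hazard H0_log, the sampling scale, and the use of a generic optimizer are assumptions; the paper states the exact optimization problem.

```python
import numpy as np
from scipy.optimize import minimize

def survlime_local_cox(log_H, x0, H0_log, n_samples=500, scale=0.1, seed=0):
    """Fit local Cox coefficients b by weighted least squares on log
    cumulative hazards: log H(t | x) ~ log H0(t) + b . x near x0."""
    rng = np.random.default_rng(seed)
    X = x0 + scale * rng.standard_normal((n_samples, x0.size))
    weights = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * scale ** 2))
    target = np.array([log_H(x) for x in X])         # (n_samples, n_times)

    def loss(b):
        approx = H0_log[None, :] + (X @ b)[:, None]  # Cox log cum. hazard
        return np.sum(weights[:, None] * (approx - target) ** 2)

    res = minimize(loss, np.zeros(x0.size), method="L-BFGS-B")
    return res.x    # local Cox coefficients, read as feature importances
```

Because the Cox approximation is linear in b on the log cumulative hazard scale, the fitted coefficients explain the black box's behaviour around x0 much as LIME's linear surrogate does for classifiers.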
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.