SurvLIMEpy: A Python package implementing SurvLIME
- URL: http://arxiv.org/abs/2302.10571v1
- Date: Tue, 21 Feb 2023 09:54:32 GMT
- Title: SurvLIMEpy: A Python package implementing SurvLIME
- Authors: Cristian Pachón-García, Carlos Hernández-Pérez, Pedro Delicado,
Verónica Vilaplana
- Abstract summary: We present SurvLIMEpy, an open-source Python package that implements the SurvLIME algorithm.
The package supports a wide variety of survival models, from the Cox Proportional Hazards Model to deep learning models such as DeepHit or DeepSurv.
- Score: 1.0689187493307983
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper we present SurvLIMEpy, an open-source Python package that
implements the SurvLIME algorithm. This method computes local feature
importance for machine learning algorithms designed to model Survival Analysis
data. Our implementation takes advantage of parallelisation, as all
computations are performed in a matrix-wise fashion, which speeds up
execution. Additionally, SurvLIMEpy provides visualization tools that help the
user better understand the results of the algorithm. The
package supports a wide variety of survival models, from the Cox Proportional
Hazards Model to deep learning models such as DeepHit or DeepSurv. Two types of
experiments are presented in this paper. First, by means of simulated data, we
study the ability of the algorithm to capture the importance of the features.
Second, we use three open-source survival datasets together with a set of
survival algorithms in order to demonstrate how SurvLIMEpy behaves when applied
to different models.
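The abstract describes the method only at a high level. Below is a minimal sketch of the core SurvLIME idea (perturb the instance, query the black-box cumulative hazard, and fit local Cox coefficients by weighted least squares). It is not SurvLIMEpy's actual API; names such as predict_chf, baseline_chf and survlime_sketch are illustrative assumptions.

```python
# Minimal sketch of the SurvLIME idea, not SurvLIMEpy's API.
# Hypothetical inputs: predict_chf(X, times) returns the black-box cumulative
# hazard H(t|x) for each row of X, and baseline_chf(times) returns a baseline
# cumulative hazard H0(t).
import numpy as np

def survlime_sketch(x, predict_chf, baseline_chf, time_grid,
                    num_samples=1000, kernel_width=1.0, seed=0):
    rng = np.random.default_rng(seed)
    p = x.shape[0]

    # 1) Sample a local neighbourhood around the instance to be explained.
    neighbours = x + rng.normal(scale=0.1, size=(num_samples, p))

    # 2) LIME-style kernel weights based on distance to the instance.
    dist = np.linalg.norm(neighbours - x, axis=1)
    w = np.exp(-(dist ** 2) / kernel_width ** 2)

    # 3) Log cumulative hazards of the black box and of the baseline.
    log_H = np.log(predict_chf(neighbours, time_grid) + 1e-12)   # (n, T)
    log_H0 = np.log(baseline_chf(time_grid) + 1e-12)             # (T,)

    # 4) Cox approximation: log H(t|z) ~ log H0(t) + b^T z.
    #    With uniform weights over the time grid, this reduces to one
    #    weighted least-squares problem for b.
    target = (log_H - log_H0).mean(axis=1)
    sw = np.sqrt(w)
    b, *_ = np.linalg.lstsq(neighbours * sw[:, None], target * sw, rcond=None)
    return b  # local feature-importance vector for the explained instance
```

SurvLIMEpy itself solves this optimisation in a matrix-wise fashion (hence the speed-up mentioned above) and adds plotting utilities on top of the resulting coefficient vector.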
Related papers
- Simulation-based inference with the Python Package sbijax [0.7499722271664147]
sbijax is a Python package that implements a wide variety of state-of-the-art methods in neural simulation-based inference.
The package provides functionality for approximate Bayesian computation, for computing model diagnostics, and for automatically estimating summary statistics.
arXiv Detail & Related papers (2024-09-28T18:47:13Z) - Multimodal Learned Sparse Retrieval with Probabilistic Expansion Control [66.78146440275093]
Learned sparse retrieval (LSR) is a family of neural methods that encode queries and documents into sparse lexical vectors.
We explore the application of LSR to the multi-modal domain, with a focus on text-image retrieval.
Current approaches like LexLIP and STAIR require complex multi-step training on massive datasets.
Our proposed approach efficiently transforms dense vectors from a frozen dense model into sparse lexical vectors.
arXiv Detail & Related papers (2024-02-27T14:21:56Z) - A Comprehensive Python Library for Deep Learning-Based Event Detection
in Multivariate Time Series Data and Information Retrieval in NLP [0.0]
We present a new supervised deep learning method for detecting events in time series data.
It is based on regression instead of binary classification.
It does not require datasets in which every point is labeled.
It only requires reference events defined as time points or intervals of time.
arXiv Detail & Related papers (2023-10-25T09:13:19Z) - ProbVLM: Probabilistic Adapter for Frozen Vision-Language Models [69.50316788263433]
We propose ProbVLM, a probabilistic adapter that estimates probability distributions for the embeddings of pre-trained vision-language models.
We quantify the calibration of embedding uncertainties in retrieval tasks and show that ProbVLM outperforms other methods.
We present a novel technique for visualizing the embedding distributions using a large-scale pre-trained latent diffusion model.
arXiv Detail & Related papers (2023-07-01T18:16:06Z) - Provably Efficient Representation Learning with Tractable Planning in
Low-Rank POMDP [81.00800920928621]
We study representation learning in partially observable Markov Decision Processes (POMDPs).
We first present an algorithm for decodable POMDPs that combines maximum likelihood estimation (MLE) and optimism in the face of uncertainty (OFU).
We then show how to adapt this algorithm to also work in the broader class of $\gamma$-observable POMDPs.
arXiv Detail & Related papers (2023-06-21T16:04:03Z) - CodeGen2: Lessons for Training LLMs on Programming and Natural Languages [116.74407069443895]
We unify encoder- and decoder-based models into a single prefix-LM.
For learning methods, we explore the claim of a "free lunch" hypothesis.
For data distributions, the effect of a mixture distribution and multi-epoch training of programming and natural languages on model performance is explored.
arXiv Detail & Related papers (2023-05-03T17:55:25Z) - Multi-Task Learning for Sparsity Pattern Heterogeneity: Statistical and Computational Perspectives [10.514866749547558]
We consider a problem in Multi-Task Learning (MTL) where multiple linear models are jointly trained on a collection of datasets.
A key novelty of our framework is that it allows the sparsity pattern of regression coefficients and the values of non-zero coefficients to differ across tasks.
Our methods encourage models to share information across tasks by separately encouraging 1) coefficient supports and/or 2) nonzero coefficient values to be similar.
This allows models to borrow strength during variable selection even when non-zero coefficient values differ across tasks.
arXiv Detail & Related papers (2022-12-16T19:52:25Z) - SurvSHAP(t): Time-dependent explanations of machine learning survival
models [6.950862982117125]
We introduce SurvSHAP(t), the first time-dependent explanation method that allows for interpreting black-box survival models.
Experiments on synthetic and medical data confirm that SurvSHAP(t) can detect variables with a time-dependent effect.
We provide an accessible implementation of time-dependent explanations in Python.
arXiv Detail & Related papers (2022-08-23T17:01:14Z) - DADApy: Distance-based Analysis of DAta-manifolds in Python [51.37841707191944]
DADApy is a Python software package for analysing and characterising high-dimensional data.
It provides methods for estimating the intrinsic dimension and the probability density, for performing density-based clustering and for comparing different distance metrics.
arXiv Detail & Related papers (2022-05-04T08:41:59Z) - Landscape of R packages for eXplainable Artificial Intelligence [4.91155110560629]
The article is primarily devoted to the tools available in R, but since it is easy to integrate Python code, we also show examples for the most popular Python libraries.
arXiv Detail & Related papers (2020-09-24T16:54:57Z) - Captum: A unified and generic model interpretability library for PyTorch [49.72749684393332]
We introduce a novel, unified, open-source model interpretability library for PyTorch.
The library contains generic implementations of a number of gradient and perturbation-based attribution algorithms.
It can be used for both classification and non-classification models.
arXiv Detail & Related papers (2020-09-16T18:57:57Z)
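As a point of comparison with SurvLIMEpy's survival-specific explanations, the snippet below is a minimal sketch of the kind of attribution call Captum supports, here Integrated Gradients on a toy PyTorch classifier; the two-layer model and the random input are made up for illustration.

```python
# Toy illustration of a Captum attribution call (Integrated Gradients).
# The classifier and the random input are invented for this example only.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

x = torch.randn(1, 4, requires_grad=True)          # one sample, 4 features
ig = IntegratedGradients(model)
attributions, delta = ig.attribute(x, target=1, return_convergence_delta=True)
print(attributions)  # per-feature contribution to the class-1 output
```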
This list is automatically generated from the titles and abstracts of the papers in this site.