An Investigation of Interpretability Techniques for Deep Learning in
Predictive Process Analytics
- URL: http://arxiv.org/abs/2002.09192v1
- Date: Fri, 21 Feb 2020 09:14:34 GMT
- Title: An Investigation of Interpretability Techniques for Deep Learning in
Predictive Process Analytics
- Authors: Catarina Moreira and Renuka Sindhgatta and Chun Ouyang and Peter Bruza
and Andreas Wichert
- Abstract summary: This paper explores interpretability techniques for two of the most successful learning algorithms in the medical decision-making literature: deep neural networks and random forests.
We learn models that try to predict a patient's type of cancer, given their set of medical activity records.
We see certain distinct features used for predictions that provide useful insights about the type of cancer, along with features that do not generalize well.
- Score: 2.162419921663162
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper explores interpretability techniques for two of the most
successful learning algorithms in the medical decision-making literature: deep
neural networks and random forests. We applied these algorithms to a real-world
medical dataset containing information about patients with cancer, where we
learned models that try to predict a patient's type of cancer from their
medical activity records.
We explored different algorithms based on neural network architectures using
long short-term memory (LSTM) deep neural networks, and random forests. Since
there is a growing need to provide decision-makers with an understanding of the
logic behind the predictions of black boxes, we also explored different
techniques that provide interpretations for these classifiers. In one of the
techniques, we intercepted some hidden layers of these neural networks and used
autoencoders to learn how the input is represented in those hidden layers. In
another, we learned an interpretable model locally around the random forest's
prediction.
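As a rough illustration of the second idea, the sketch below fits a LIME-style local linear surrogate around a single random forest prediction: it perturbs one instance, weights the perturbed samples by their proximity to it, and regresses the forest's predicted probabilities on them. The dataset (scikit-learn's breast cancer data), the Gaussian perturbation scheme, the kernel width, and the function name are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = load_breast_cancer(return_X_y=True)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

def local_linear_explanation(instance, n_samples=2000, kernel_width=None, seed=0):
    """Fit a proximity-weighted linear surrogate around one prediction of the forest."""
    rng = np.random.default_rng(seed)
    scale = X.std(axis=0)
    # Perturb the instance with per-feature Gaussian noise.
    samples = instance + rng.normal(size=(n_samples, X.shape[1])) * scale
    probs = forest.predict_proba(samples)[:, 1]          # black-box outputs
    # Weight perturbed points by proximity to the instance (RBF kernel).
    dist = np.linalg.norm((samples - instance) / scale, axis=1)
    width = kernel_width or 0.75 * np.sqrt(X.shape[1])
    weights = np.exp(-(dist ** 2) / width ** 2)
    surrogate = Ridge(alpha=1.0).fit(samples, probs, sample_weight=weights)
    return surrogate.coef_                               # local per-feature importance

coefs = local_linear_explanation(X[0])
top = np.argsort(np.abs(coefs))[::-1][:5]
print("Locally most influential feature indices:", top)
```

The coefficients of the weighted linear model give the locally most influential features for that single data point, which is the kind of instance-level insight the abstract describes.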
Results show that learning an interpretable model locally around the model's
prediction leads to a better understanding of why the algorithm makes a given
decision. Using a local, linear model helps identify the features used in the
prediction of a specific instance or data point. We see certain distinct
features used for predictions that provide useful insights about the type of
cancer, along with features that do not generalize well. In addition, the
structured deep learning approach using autoencoders provided meaningful
prediction insights, which resulted in the identification of nonlinear clusters
corresponding to the patients' different types of cancer.
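As a rough illustration of the first idea, the sketch below intercepts a hidden layer of a small Keras LSTM classifier and trains an autoencoder on its activations, so the learned low-dimensional codes can be inspected for clusters. The sequence dimensions, layer sizes, toy data, and two-dimensional bottleneck are illustrative assumptions rather than the paper's exact architecture.

```python
import numpy as np
from tensorflow.keras import layers, Model

timesteps, n_features, n_classes = 20, 8, 3            # assumed event-log dimensions

# A small LSTM classifier standing in for the paper's deep sequence model.
seq_in = layers.Input(shape=(timesteps, n_features))
hidden = layers.LSTM(64, name="hidden_lstm")(seq_in)   # the layer we will intercept
out = layers.Dense(n_classes, activation="softmax")(hidden)
classifier = Model(seq_in, out)
classifier.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Toy data stands in for the patients' medical activity sequences.
X = np.random.rand(500, timesteps, n_features).astype("float32")
y = np.random.randint(0, n_classes, size=500)
classifier.fit(X, y, epochs=2, batch_size=32, verbose=0)

# Intercept the hidden layer: a sub-model that outputs its activations.
hidden_model = Model(seq_in, classifier.get_layer("hidden_lstm").output)
H = hidden_model.predict(X, verbose=0)                  # shape (500, 64)

# Autoencoder with a 2-D bottleneck trained to reproduce those activations.
ae_in = layers.Input(shape=(64,))
encoded = layers.Dense(32, activation="relu")(ae_in)
code = layers.Dense(2, name="bottleneck")(encoded)
decoded = layers.Dense(32, activation="relu")(code)
ae_out = layers.Dense(64)(decoded)
autoencoder = Model(ae_in, ae_out)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(H, H, epochs=5, batch_size=32, verbose=0)

# The 2-D codes can then be inspected for clusters corresponding to cancer types.
encoder = Model(ae_in, autoencoder.get_layer("bottleneck").output)
codes = encoder.predict(X_codes := H, verbose=0)
print(codes.shape)                                       # (500, 2)
```

Plotting the two-dimensional codes (or clustering them) is one simple way to look for the nonlinear clusters the abstract reports for the different cancer types.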
Related papers
- Improving Cancer Imaging Diagnosis with Bayesian Networks and Deep Learning: A Bayesian Deep Learning Approach [0.0]
This article aims to investigate the theory behind Deep Learning and Bayesian Network prediction models.
The applications and accuracy of the resulting Bayesian Deep Learning approach for classifying images in the health industry will be analyzed.
arXiv Detail & Related papers (2024-03-28T01:27:10Z)
- Deep networks for system identification: a Survey [56.34005280792013]
System identification learns mathematical descriptions of dynamic systems from input-output data.
The main aim of the identified model is to predict new data from previous observations.
We discuss architectures commonly adopted in the literature, like feedforward, convolutional, and recurrent networks.
arXiv Detail & Related papers (2023-01-30T12:38:31Z)
- Deep learning methods for drug response prediction in cancer: predominant and emerging trends [50.281853616905416]
Exploiting computational predictive models to study and treat cancer holds great promise in improving drug development and personalized design of treatment plans.
A wave of recent papers demonstrates promising results in predicting cancer response to drug treatments while utilizing deep learning methods.
This review helps readers better understand the current state of the field and identify major challenges and promising solution paths.
arXiv Detail & Related papers (2022-11-18T03:26:31Z)
- RandomSCM: interpretable ensembles of sparse classifiers tailored for omics data [59.4141628321618]
We propose an ensemble learning algorithm based on conjunctions or disjunctions of decision rules.
The interpretability of the models makes them useful for biomarker discovery and patterns discovery in high dimensional data.
arXiv Detail & Related papers (2022-08-11T13:55:04Z)
- Visual Interpretable and Explainable Deep Learning Models for Brain Tumor MRI and COVID-19 Chest X-ray Images [0.0]
We evaluate attribution methods for illuminating how deep neural networks analyze medical images.
We attribute predictions from brain tumor MRI and COVID-19 chest X-ray datasets made by recent deep convolutional neural network models.
arXiv Detail & Related papers (2022-08-01T16:05:14Z)
- Contrastive Brain Network Learning via Hierarchical Signed Graph Pooling Model [64.29487107585665]
Graph representation learning techniques on brain functional networks can facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
Here, we propose an interpretable hierarchical signed graph representation learning model to extract graph-level representations from brain functional networks.
In order to further improve the model performance, we also propose a new strategy to augment functional brain network data for contrastive learning.
arXiv Detail & Related papers (2022-07-14T20:03:52Z)
- Convolutional Motif Kernel Networks [1.104960878651584]
We show that our model is able to robustly learn on small datasets and reaches state-of-the-art performance on relevant healthcare prediction tasks.
Our proposed method can be utilized on DNA and protein sequences.
arXiv Detail & Related papers (2021-11-03T15:06:09Z)
- Interpretable Mammographic Image Classification using Case-Based Reasoning and Deep Learning [20.665935997959025]
We present a novel interpretable neural network algorithm that uses case-based reasoning for mammography.
Our network presents both a prediction of malignancy and an explanation of that prediction using known medical features.
arXiv Detail & Related papers (2021-07-12T17:42:09Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- Explaining Predictions of Deep Neural Classifier via Activation Analysis [0.11470070927586014]
We present a novel approach to explain and support an interpretation of the decision-making process for a human expert operating a deep learning system based on a Convolutional Neural Network (CNN).
Our results indicate that our method is capable of detecting distinct prediction strategies that enable us to identify the most similar predictions from an existing atlas.
arXiv Detail & Related papers (2020-12-03T20:36:19Z)
- Select-ProtoNet: Learning to Select for Few-Shot Disease Subtype Prediction [55.94378672172967]
We focus on the few-shot disease subtype prediction problem, identifying subgroups of similar patients.
We introduce meta learning techniques to develop a new model, which can extract the common experience or knowledge from interrelated clinical tasks.
Our new model is built upon a carefully designed meta-learner, called Prototypical Network, that is a simple yet effective meta learning machine for few-shot image classification.
arXiv Detail & Related papers (2020-09-02T02:50:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.