Interpretable Factorization for Neural Network ECG Models
- URL: http://arxiv.org/abs/2006.15189v1
- Date: Fri, 26 Jun 2020 19:32:05 GMT
- Title: Interpretable Factorization for Neural Network ECG Models
- Authors: Christopher Snyder and Sriram Vishwanath
- Abstract summary: We show how to factor a Deep Neural Network into a hierarchical equation consisting of black box variables.
We demonstrate this choice yields interpretable component models identified with visual composite sketches of ECG samples.
- Score: 10.223907995092835
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ability of deep learning (DL) to improve the practice of medicine and its
clinical outcomes faces a looming obstacle: model interpretation. Without
description of how outputs are generated, a collaborating physician can neither
resolve when the model's conclusions are in conflict with his or her own, nor
learn to anticipate model behavior. Current research aims to interpret networks
that diagnose ECG recordings, which has great potential impact as recordings
become more personalized and widely deployed. A generalizable impact beyond
ECGs lies in the ability to provide a rich test-bed for the development of
interpretive techniques in medicine. Interpretive techniques for Deep Neural
Networks (DNNs), however, tend to be heuristic and observational in nature,
lacking the mathematical rigor one might expect in the analysis of
mathematical equations. The motivation of this paper is to offer a third option, a
scientific approach. We treat the model output itself as a phenomenon to be
explained through component parts and equations governing their behavior. We
argue that these component parts should also be "black boxes" -- additional
targets to interpret heuristically, with a clear functional connection to the
original. We show how to rigorously factor a DNN into a hierarchical equation
consisting of black box variables. This is not a subdivision into physical
parts, like an organism into its cells; it is one choice among many for
decomposing an equation into a collection of abstract functions. Yet, for DNNs trained to identify
normal ECG waveforms on PhysioNet 2017 Challenge data, we demonstrate that this
choice yields interpretable component models identified with visual composite
sketches of ECG samples in corresponding input regions. Moreover, the recursion
distills this interpretation: additional factorization of component black boxes
corresponds to ECG partitions that are more morphologically pure.
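As a concrete picture of the kind of factorization described above, one can split a network at a hidden layer into a composition f(x) = g(h(x)), treat the coordinates of h as black-box variables, and associate each variable with the input region that activates it most strongly. The sketch below illustrates this under loudly stated assumptions: it is not the authors' construction, and the toy network, its random weights, the synthetic stand-ins for ECG windows, and the top-decile region heuristic are all illustrative.
```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained ECG classifier f(x) = w2 . relu(W1 x + b1).
# All weights are random placeholders, not a trained model.
W1 = rng.normal(size=(4, 100))
b1 = rng.normal(size=4)
w2 = rng.normal(size=4)

def h(x):
    """First factor: maps a 100-sample window to 4 black-box variables."""
    return np.maximum(W1 @ x + b1, 0.0)

def g(z):
    """Second factor: combines the black-box variables into the output."""
    return w2 @ z

def f(x):
    """The original model, recovered exactly as the composition g(h(x))."""
    return g(h(x))

# Crude "composite sketch": for each black-box variable, average the
# inputs that activate it most strongly, giving one representative
# waveform per input region.
X = rng.normal(size=(500, 100))  # synthetic stand-in for ECG windows
Z = np.array([h(x) for x in X])
for j in range(Z.shape[1]):
    region = Z[:, j] > np.quantile(Z[:, j], 0.9)  # top decile for unit j
    sketch = X[region].mean(axis=0)
    print(f"component {j}: {region.sum():3d} samples in region, "
          f"sketch norm {np.linalg.norm(sketch):.2f}")
```
Recursing on the factors, e.g. splitting g or each coordinate of h in the same way, mirrors the hierarchical refinement the abstract describes, in which deeper factorizations correspond to more morphologically pure ECG partitions.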
Related papers
- Do Graph Neural Networks Work for High Entropy Alloys? [12.002942104379986]
High-entropy alloys (HEAs) lack chemical long-range order, limiting the applicability of current graph representations.
We introduce the LESets machine learning model, an accurate, interpretable GNN for HEA property prediction.
We demonstrate the accuracy of LESets in modeling the mechanical properties of quaternary HEAs.
arXiv Detail & Related papers (2024-08-29T08:20:02Z)
- Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model.
arXiv Detail & Related papers (2023-10-04T21:57:09Z)
- Feature visualization for convolutional neural network models trained on neuroimaging data [0.0]
We show, for the first time, results from applying feature visualization to convolutional neural networks (CNNs) trained on neuroimaging data.
We have trained CNNs for different tasks including sex classification and artificial lesion classification based on structural magnetic resonance imaging (MRI) data.
The resulting images reveal the learned concepts of the artificial lesions, including their shapes, but remain hard to interpret for abstract features in the sex classification task.
arXiv Detail & Related papers (2022-03-24T15:24:38Z)
- Interpretable Convolutional Neural Networks for Subject-Independent Motor Imagery Classification [22.488536453952964]
We propose an explainable deep learning model for brain-computer interface (BCI) studies.
Specifically, we aim to classify EEG signals obtained from a motor-imagery (MI) task.
We visualize the output of layer-wise relevance propagation (LRP) as a topographic heatmap to verify neuro-physiological factors.
arXiv Detail & Related papers (2021-12-14T07:35:52Z)
- Convolutional Motif Kernel Networks [1.104960878651584]
We show that our model is able to robustly learn on small datasets and reaches state-of-the-art performance on relevant healthcare prediction tasks.
Our proposed method can be utilized on DNA and protein sequences.
arXiv Detail & Related papers (2021-11-03T15:06:09Z)
- Explicit Pairwise Factorized Graph Neural Network for Semi-Supervised Node Classification [59.06717774425588]
We propose the Explicit Pairwise Factorized Graph Neural Network (EPFGNN), which models the whole graph as a partially observed Markov Random Field.
It contains explicit pairwise factors to model output-output relations and uses a GNN backbone to model input-output relations.
Experiments on various datasets show that our model effectively improves performance on semi-supervised node classification on graphs.
arXiv Detail & Related papers (2021-07-27T19:47:53Z)
- The Causal Neural Connection: Expressiveness, Learnability, and Inference [125.57815987218756]
An object called a structural causal model (SCM) represents a collection of mechanisms and sources of random variation of the system under investigation.
In this paper, we show that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020) still holds for neural models.
We introduce a special type of SCM called a neural causal model (NCM), and formalize a new type of inductive bias to encode structural constraints necessary for performing causal inferences.
arXiv Detail & Related papers (2021-07-02T01:55:18Z)
- Generalized Organ Segmentation by Imitating One-shot Reasoning using Anatomical Correlation [55.1248480381153]
We propose OrganNet, which learns a generalized organ concept from a set of annotated organ classes and then transfers this concept to unseen classes.
We show that OrganNet can effectively resist wide variations in organ morphology and produces state-of-the-art results on the one-shot segmentation task.
arXiv Detail & Related papers (2021-03-30T13:41:12Z)
- An Uncertainty-Driven GCN Refinement Strategy for Organ Segmentation [53.425900196763756]
We propose a segmentation refinement method based on uncertainty analysis and graph convolutional networks.
We employ the uncertainty levels of the convolutional network in a particular input volume to formulate a semi-supervised graph learning problem.
We show that our method outperforms the state-of-the-art CRF refinement method, improving the Dice score by 1% for the pancreas and 2% for the spleen.
arXiv Detail & Related papers (2020-12-06T18:55:07Z)
- Interpretable Deep Models for Cardiac Resynchronisation Therapy Response Prediction [8.152884957975354]
We propose a novel framework for image-based classification based on a variational autoencoder (VAE).
The VAE disentangles the latent space based on explanations drawn from existing clinical knowledge.
We demonstrate our framework on the problem of predicting response of patients with cardiomyopathy to cardiac resynchronization therapy (CRT) from cine cardiac magnetic resonance images.
arXiv Detail & Related papers (2020-06-24T15:35:47Z)
- Neural Additive Models: Interpretable Machine Learning with Neural Nets [77.66871378302774]
Deep neural networks (DNNs) are powerful black-box predictors that have achieved impressive performance on a wide variety of tasks.
We propose Neural Additive Models (NAMs) which combine some of the expressivity of DNNs with the inherent intelligibility of generalized additive models.
NAMs learn a linear combination of neural networks that each attend to a single input feature; a minimal sketch of this forward pass appears after this list.
arXiv Detail & Related papers (2020-04-29T01:28:32Z)
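The Neural Additive Models entry above describes an architecture concrete enough to sketch: the prediction is a bias plus a sum of small networks, each applied to a single input feature. The following minimal NumPy sketch shows that forward pass under stated assumptions: random weights stand in for trained ones, and names such as feature_net and nam are illustrative, not taken from the paper's code.
```python
import numpy as np

rng = np.random.default_rng(1)
n_features, hidden = 3, 16

# One tiny one-hidden-layer net per input feature; random weights
# stand in for what training would produce.
nets = [(rng.normal(size=(hidden, 1)),   # input weights
         rng.normal(size=hidden),        # hidden biases
         rng.normal(size=hidden))        # output weights
        for _ in range(n_features)]
bias = 0.0

def feature_net(params, xi):
    """f_i: the shape function for a single scalar feature x_i."""
    W, b, v = params
    return v @ np.maximum(W @ np.array([xi]) + b, 0.0)

def nam(x):
    """NAM prediction: bias + sum_i f_i(x_i)."""
    return bias + sum(feature_net(p, xi) for p, xi in zip(nets, x))

x = rng.normal(size=n_features)
print("prediction:", nam(x))
# The model is interpretable by construction: each feature's additive
# contribution can be read off in isolation.
for i, (p, xi) in enumerate(zip(nets, x)):
    print(f"feature {i} contributes {feature_net(p, xi):+.3f}")
```
Because the output is an additive sum, each feature's contribution can be inspected on its own, which is the source of the intelligibility the entry refers to.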
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.