INSightR-Net: Interpretable Neural Network for Regression using
Similarity-based Comparisons to Prototypical Examples
- URL: http://arxiv.org/abs/2208.00457v1
- Date: Sun, 31 Jul 2022 15:56:15 GMT
- Title: INSightR-Net: Interpretable Neural Network for Regression using
Similarity-based Comparisons to Prototypical Examples
- Authors: Linde S. Hesse and Ana I. L. Namburete
- Abstract summary: Convolutional neural networks (CNNs) have shown exceptional performance for a range of medical imaging tasks.
In this work, we propose an inherently interpretable CNN for regression using similarity-based comparisons.
A prototype layer incorporated into the architecture enables visualization of the areas in the image that are most similar to learned prototypes.
The final prediction is then intuitively modeled as a mean of prototype labels, weighted by the similarities.
- Score: 2.4366811507669124
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Convolutional neural networks (CNNs) have shown exceptional performance for a
range of medical imaging tasks. However, conventional CNNs are not able to
explain their reasoning process, therefore limiting their adoption in clinical
practice. In this work, we propose an inherently interpretable CNN for
regression using similarity-based comparisons (INSightR-Net) and demonstrate
our methods on the task of diabetic retinopathy grading. A prototype layer
incorporated into the architecture enables visualization of the areas in the
image that are most similar to learned prototypes. The final prediction is then
intuitively modeled as a mean of prototype labels, weighted by the
similarities. We achieved competitive prediction performance with our
INSightR-Net compared to a ResNet baseline, showing that it is not necessary to
compromise performance for interpretability. Furthermore, we quantified the
quality of our explanations using sparsity and diversity, two concepts
considered important for a good explanation, and demonstrated the effect of
several parameters on the latent space embeddings.
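The abstract's final prediction step, a mean of prototype labels weighted by similarity, can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name, array shapes, and toy values are assumptions.

```python
import numpy as np

def prototype_regression(similarities, prototype_labels):
    """Similarity-weighted mean of prototype labels (illustrative sketch).

    similarities: (n_prototypes,) non-negative similarity scores
    prototype_labels: (n_prototypes,) regression label of each prototype
    """
    # Normalize similarities into weights that sum to 1, then take
    # the weighted mean of the prototype labels.
    weights = similarities / similarities.sum()
    return float(np.dot(weights, prototype_labels))

# Toy example: three prototypes with hypothetical retinopathy grades 0, 2, 4
sims = np.array([0.1, 0.7, 0.2])
labels = np.array([0.0, 2.0, 4.0])
pred = prototype_regression(sims, labels)  # 0.1*0 + 0.7*2 + 0.2*4 = 2.2
```

Because each weight is a normalized similarity to a learned prototype, the prediction can be read directly as "this image looks most like prototypes with these grades," which is what makes the model inherently interpretable.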
Related papers
- Learning local discrete features in explainable-by-design convolutional neural networks [0.0]
We introduce an explainable-by-design convolutional neural network (CNN) based on the lateral inhibition mechanism.
The model consists of a predictor, which is a high-accuracy CNN with residual or dense skip connections.
By collecting observations and directly calculating probabilities, we can explain causal relationships between motifs of adjacent levels.
arXiv Detail & Related papers (2024-10-31T18:39:41Z)
- Cognitive Networks and Performance Drive fMRI-Based State Classification Using DNN Models [0.0]
We employ two structurally different and complementary DNN-based models to classify individual cognitive states.
We show that despite the architectural differences, both models consistently produce a robust relationship between prediction accuracy and individual cognitive performance.
arXiv Detail & Related papers (2024-08-14T15:25:51Z)
- Improving Network Interpretability via Explanation Consistency Evaluation [56.14036428778861]
We propose a framework that acquires more explainable activation heatmaps and simultaneously increases model performance.
Specifically, our framework introduces a new metric, i.e., explanation consistency, to reweight the training samples adaptively in model learning.
Our framework then promotes the model learning by paying closer attention to those training samples with a high difference in explanations.
arXiv Detail & Related papers (2024-08-08T17:20:08Z)
- U-KAN Makes Strong Backbone for Medical Image Segmentation and Generation [48.40120035775506]
Kolmogorov-Arnold Networks (KANs) reshape neural network learning via stacks of non-linear learnable activation functions.
We investigate, modify, and re-design the established U-Net pipeline by integrating dedicated KAN layers on the tokenized intermediate representation, termed U-KAN.
We further delve into the potential of U-KAN as an alternative U-Net noise predictor in diffusion models, demonstrating its applicability to generating task-oriented model architectures.
arXiv Detail & Related papers (2024-06-05T04:13:03Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Multilayer Multiset Neuronal Networks -- MMNNs [55.2480439325792]
The present work describes multilayer multiset neuronal networks incorporating two or more layers of coincidence similarity neurons.
The work also explores the utilization of counter-prototype points, which are assigned to the image regions to be avoided.
arXiv Detail & Related papers (2023-08-28T12:55:13Z)
- Revisiting Hidden Representations in Transfer Learning for Medical Imaging [2.4545492329339815]
We compare ImageNet and RadImageNet on seven medical classification tasks.
Our results indicate that, contrary to intuition, ImageNet and RadImageNet may converge to distinct intermediate representations.
Our findings show that the similarity between networks before and after fine-tuning does not correlate with performance gains.
arXiv Detail & Related papers (2023-02-16T13:04:59Z)
- SPIN: An Empirical Evaluation on Sharing Parameters of Isotropic Networks [25.465917853812538]
We present an empirical evaluation on methods for sharing parameters in isotropic networks.
We propose a weight sharing strategy to generate a family of models with better overall efficiency.
arXiv Detail & Related papers (2022-07-21T00:16:05Z)
- Generative Counterfactuals for Neural Networks via Attribute-Informed Perturbation [51.29486247405601]
We design a framework to generate counterfactuals for raw data instances with the proposed Attribute-Informed Perturbation (AIP).
By utilizing generative models conditioned on different attributes, counterfactuals with desired labels can be obtained effectively and efficiently.
Experimental results on real-world texts and images demonstrate the effectiveness, sample quality, and efficiency of our designed framework.
arXiv Detail & Related papers (2021-01-18T08:37:13Z)
- Toward Scalable and Unified Example-based Explanation and Outlier Detection [128.23117182137418]
We argue for a broader adoption of prototype-based student networks capable of providing an example-based explanation for their prediction.
We show that our prototype-based networks beyond similarity kernels deliver meaningful explanations and promising outlier detection results without compromising classification accuracy.
arXiv Detail & Related papers (2020-11-11T05:58:17Z)
- Investigation of REFINED CNN ensemble learning for anti-cancer drug sensitivity prediction [0.0]
Anti-cancer drug sensitivity prediction using deep learning models for individual cell line is a significant challenge in personalized medicine.
REFINED CNN (Convolutional Neural Network) based models have shown promising results in drug sensitivity prediction.
We consider predictions based on ensembles built from such mappings that can improve upon the best single REFINED CNN model prediction.
arXiv Detail & Related papers (2020-09-09T02:27:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.