Improving a neural network model by explanation-guided training for
glioma classification based on MRI data
- URL: http://arxiv.org/abs/2107.02008v2
- Date: Sun, 16 Apr 2023 20:17:22 GMT
- Title: Improving a neural network model by explanation-guided training for
glioma classification based on MRI data
- Authors: Frantisek Sefcik, Wanda Benesova
- Abstract summary: Interpretability methods have become a popular way to gain insight into the decision-making process of deep learning models.
We propose a method for explanation-guided training that uses a Layer-wise relevance propagation (LRP) technique.
We experimentally verified our method on a convolutional neural network (CNN) model for low-grade and high-grade glioma classification problems.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, artificial intelligence (AI) systems have come to the
forefront. These systems, mostly based on Deep learning (DL), achieve excellent
results in areas such as image processing, natural language processing, or
speech recognition. Despite the statistically high accuracy of deep learning
models, their output often remains a "black box" decision. Thus, interpretability
methods have become a popular way to gain insight into the decision-making
process of deep learning models. Explanation of a deep learning model is
desirable in the medical domain since the experts have to justify their
judgments to the patient. In this work, we propose a method for
explanation-guided training that uses a Layer-wise relevance propagation (LRP)
technique to force the model to focus only on the relevant part of the image.
We experimentally verified our method on a convolutional neural network (CNN)
model for low-grade and high-grade glioma classification problems. Our
experiments show promising results for incorporating interpretation techniques
into the model training process.
Related papers
- Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z)
- Image classification network enhancement methods based on knowledge injection [8.885876832491917]
This paper proposes a multi-level hierarchical deep learning algorithm.
It is composed of a multi-level hierarchical deep neural network architecture and a multi-level hierarchical deep learning framework.
The experimental results show that the proposed algorithm can effectively explain the hidden information of the neural network.
arXiv Detail & Related papers (2024-01-09T09:11:41Z)
- Continual Learning with Bayesian Model based on a Fixed Pre-trained Feature Extractor [55.9023096444383]
Current deep learning models are characterised by catastrophic forgetting of old knowledge when learning new classes.
Inspired by the process of learning new knowledge in human brains, we propose a Bayesian generative model for continual learning.
arXiv Detail & Related papers (2022-04-28T08:41:51Z)
- Ultrasound Signal Processing: From Models to Deep Learning [64.56774869055826]
Medical ultrasound imaging relies heavily on high-quality signal processing to provide reliable and interpretable image reconstructions.
Deep learning based methods, which are optimized in a data-driven fashion, have gained popularity.
A relatively new paradigm combines the power of the two: leveraging data-driven deep learning, as well as exploiting domain knowledge.
arXiv Detail & Related papers (2022-04-09T13:04:36Z)
- Feature visualization for convolutional neural network models trained on neuroimaging data [0.0]
We show for the first time results using feature visualization of convolutional neural networks (CNNs)
We have trained CNNs for different tasks including sex classification and artificial lesion classification based on structural magnetic resonance imaging (MRI) data.
The resulting images reveal the learned concepts of the artificial lesions, including their shapes, but remain hard to interpret for abstract features in the sex classification task.
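Feature visualization of this kind is usually produced by activation maximization: gradient ascent on the input to maximize a chosen unit's response. A minimal sketch, assuming a single linear unit with an L2 penalty standing in for a trained CNN neuron:

```python
import numpy as np

def maximize_activation(weight, steps=200, lr=0.1, l2=0.25):
    # Gradient ascent on the input x to maximize f(x) = w . x - l2 * ||x||^2.
    # The L2 term keeps the optimized "image" bounded, a common form of
    # regularization in feature visualization.
    x = np.zeros_like(weight)
    for _ in range(steps):
        grad = weight - 2.0 * l2 * x    # gradient of f at the current x
        x += lr * grad
    return x
```

For a real CNN the gradient would come from backpropagation through the network rather than a closed-form expression, but the ascent loop is the same.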
arXiv Detail & Related papers (2022-03-24T15:24:38Z)
- Multi-Semantic Image Recognition Model and Evaluating Index for explaining the deep learning models [31.387124252490377]
We first propose a multi-semantic image recognition model, which enables human beings to understand the decision-making process of the neural network.
We then present a new evaluation index that can quantitatively assess model interpretability.
This paper also exhibits the relevant baseline performance with current state-of-the-art deep learning models.
arXiv Detail & Related papers (2021-09-28T07:18:05Z)
- Multi-Branch Deep Radial Basis Function Networks for Facial Emotion Recognition [80.35852245488043]
We propose a CNN based architecture enhanced with multiple branches formed by radial basis function (RBF) units.
RBF units capture local patterns shared by similar instances using an intermediate representation.
We show that it is the incorporation of local information that makes the proposed model competitive.
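An RBF unit of the kind described responds strongly only when an input lies close to a learned center, which is what lets it capture local patterns; the center and width below are illustrative placeholders:

```python
import numpy as np

def rbf_unit(x, center, gamma=1.0):
    # Radial basis function activation: exp(-gamma * ||x - center||^2).
    # The response peaks at 1.0 when x equals the center and decays with
    # distance, so the unit fires for instances similar to its prototype.
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(center)) ** 2))
```

In the multi-branch setting, each branch would hold many such units whose centers are learned jointly with the CNN backbone.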
arXiv Detail & Related papers (2021-09-07T21:05:56Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- About Explicit Variance Minimization: Training Neural Networks for Medical Imaging With Limited Data Annotations [2.3204178451683264]
The Variance Aware Training (VAT) method introduces a variance error term into the model loss function.
We validate VAT on three medical imaging datasets from diverse domains and various learning objectives.
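The core idea of adding a variance error term to the loss can be sketched as follows; the setup with several augmented views per sample is an assumption for illustration, not necessarily the paper's exact formulation:

```python
import numpy as np

def variance_aware_loss(predictions, targets, lam=0.1):
    # predictions: shape (n_views, n_samples) -- model outputs for several
    # augmented views of the same inputs. The loss combines the task error
    # of the mean prediction with the variance across views, so the model
    # is rewarded for making consistent predictions.
    preds = np.asarray(predictions)
    task_error = np.mean((preds.mean(axis=0) - np.asarray(targets)) ** 2)
    variance_error = np.mean(preds.var(axis=0))
    return task_error + lam * variance_error
```

The weight `lam` trades off task accuracy against prediction stability, which is the property exploited when annotations are scarce.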
arXiv Detail & Related papers (2021-05-28T21:34:04Z)
- Towards Interpretable Deep Learning Models for Knowledge Tracing [62.75876617721375]
We propose to adopt the post-hoc method to tackle the interpretability issue for deep learning based knowledge tracing (DLKT) models.
Specifically, we focus on applying the layer-wise relevance propagation (LRP) method to interpret RNN-based DLKT model.
Experimental results show the feasibility of using the LRP method to interpret the DLKT model's predictions.
arXiv Detail & Related papers (2020-05-13T04:03:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.