Analyzing the Effect of $k$-Space Features in MRI Classification Models
- URL: http://arxiv.org/abs/2409.13589v1
- Date: Fri, 20 Sep 2024 15:43:26 GMT
- Title: Analyzing the Effect of $k$-Space Features in MRI Classification Models
- Authors: Pascal Passigan, Vayd Ramkumar
- Abstract summary: We have developed an explainable AI methodology tailored for medical imaging.
We employ a Convolutional Neural Network (CNN) that analyzes MRI scans across both image and frequency domains.
This approach not only enhances early training efficiency but also deepens our understanding of how additional features impact the model predictions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The integration of Artificial Intelligence (AI) in medical diagnostics is often hindered by model opacity, where high-accuracy systems function as "black boxes" without transparent reasoning. This limitation is critical in clinical settings, where trust and reliability are paramount. To address this, we have developed an explainable AI methodology tailored for medical imaging. By employing a Convolutional Neural Network (CNN) that analyzes MRI scans across both image and frequency domains, we introduce a novel approach that incorporates Uniform Manifold Approximation and Projection (UMAP) for the visualization of latent input embeddings. This approach not only enhances early training efficiency but also deepens our understanding of how additional features impact the model predictions, thereby increasing interpretability and supporting more accurate and intuitive diagnostic inferences.
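A minimal sketch of the two ideas in the abstract follows; it is not the authors' released code. Each MRI slice is augmented with k-space features obtained via a 2-D FFT, and the CNN exposes its latent embedding for UMAP visualization. The layer sizes, the log-magnitude k-space channel, and the 64-dimensional embedding are illustrative assumptions; UMAP here refers to the umap-learn package.

```python
# Minimal sketch (not the authors' code) of a dual-domain MRI classifier:
# each slice is stacked with the log-magnitude of its k-space (2-D FFT)
# representation, and the latent embedding is returned for UMAP plotting.
import numpy as np
import torch
import torch.nn as nn

def dual_domain_input(image: np.ndarray) -> torch.Tensor:
    """Stack an MRI slice with the log-magnitude of its k-space transform."""
    kspace = np.fft.fftshift(np.fft.fft2(image))   # frequency-domain view
    log_mag = np.log1p(np.abs(kspace))             # compress dynamic range
    # Normalize each channel to [0, 1] before stacking into (2, H, W).
    chans = [(c - c.min()) / (np.ptp(c) + 1e-8) for c in (image, log_mag)]
    return torch.from_numpy(np.stack(chans)).float()

class DualDomainCNN(nn.Module):
    """Small CNN over stacked image + k-space channels; sizes illustrative."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.embed = nn.Linear(32 * 4 * 4, 64)  # latent embedding for UMAP
        self.head = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor):
        z = self.embed(self.features(x).flatten(1))
        return self.head(z), z                  # class logits and embedding

# Example: classify one slice and keep its embedding,
#   logits, z = model(dual_domain_input(slice_2d).unsqueeze(0))
# Visualizing embeddings with UMAP (requires the umap-learn package):
#   import umap
#   coords = umap.UMAP(n_components=2).fit_transform(embeddings)  # (N, 2)
```

The shifted log-magnitude spectrum keeps low-frequency contrast energy at the center of the k-space channel and tames its dynamic range; the embedding returned alongside the logits is what would be projected with UMAP.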
Related papers
- Explainable Diagnosis Prediction through Neuro-Symbolic Integration [11.842565087408449]
We use neuro-symbolic methods, specifically Logical Neural Networks (LNNs), to develop explainable models for diagnosis prediction.
Our models, particularly $M_{\text{multi-pathway}}$ and $M_{\text{comprehensive}}$, demonstrate superior performance over traditional models.
These findings highlight the potential of neuro-symbolic approaches in bridging the gap between accuracy and explainability in healthcare AI applications.
arXiv Detail & Related papers (2024-10-01T22:47:24Z) - Explainable Transformer Prototypes for Medical Diagnoses [7.680878119988482]
The self-attention feature of transformers contributes to identifying crucial regions during the classification process.
Our research develops a unique attention block that underscores the correlation between 'regions' rather than 'pixels'.
A combined quantitative and qualitative methodological approach was used to demonstrate the effectiveness of the proposed method on a large-scale NIH chest X-ray dataset.
arXiv Detail & Related papers (2024-03-11T17:46:21Z) - ContrastDiagnosis: Enhancing Interpretability in Lung Nodule Diagnosis
Using Contrastive Learning [23.541034347602935]
Clinicians' distrust of black box models has hindered the clinical deployment of AI products.
We propose ContrastDiagnosis, a straightforward yet effective interpretable diagnosis framework.
High diagnostic accuracy was achieved, with an AUC of 0.977, while maintaining high transparency and explainability.
arXiv Detail & Related papers (2024-03-08T13:00:52Z) - MLIP: Enhancing Medical Visual Representation with Divergence Encoder
and Knowledge-guided Contrastive Learning [48.97640824497327]
We propose a novel framework leveraging domain-specific medical knowledge as guiding signals to integrate language information into the visual domain through image-text contrastive learning.
Our model includes global contrastive learning with our designed divergence encoder, local token-knowledge-patch alignment contrastive learning, and knowledge-guided category-level contrastive learning with expert knowledge.
Notably, MLIP surpasses state-of-the-art methods even with limited annotated data, highlighting the potential of multimodal pre-training in advancing medical representation learning.
arXiv Detail & Related papers (2024-02-03T05:48:50Z) - Towards Trustworthy Healthcare AI: Attention-Based Feature Learning for
COVID-19 Screening With Chest Radiography [70.37371604119826]
Building trustworthy AI models is important, especially in regulated areas such as healthcare.
Previous work uses convolutional neural networks as the backbone architecture, which have been shown to be prone to over-caution and overconfidence in making decisions.
We propose a feature learning approach using Vision Transformers, which use an attention-based mechanism.
arXiv Detail & Related papers (2022-07-19T14:55:42Z) - Intelligent Masking: Deep Q-Learning for Context Encoding in Medical
Image Analysis [48.02011627390706]
We develop a novel self-supervised approach that occludes targeted regions to improve the pre-training procedure.
We show that training the agent against the prediction model can significantly improve the semantic features extracted for downstream classification tasks.
arXiv Detail & Related papers (2022-03-25T19:05:06Z) - Improving Interpretability of Deep Neural Networks in Medical Diagnosis
by Investigating the Individual Units [24.761080054980713]
We demonstrate the effectiveness of recent attribution techniques in explaining diagnostic decisions by visualizing the significant factors in the input image.
Our analysis underscores the necessity of explainability in medical diagnostic decision-making.
arXiv Detail & Related papers (2021-07-19T11:49:31Z) - Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for
Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance (an illustrative sketch of the smoothing step appears after this list).
arXiv Detail & Related papers (2021-02-26T02:29:30Z) - Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z) - Explaining Clinical Decision Support Systems in Medical Imaging using
Cycle-Consistent Activation Maximization [112.2628296775395]
Clinical decision support using deep neural networks has become a topic of steadily growing interest.
However, clinicians are often hesitant to adopt the technology because its underlying decision-making process is considered opaque and difficult to comprehend.
We propose a novel decision explanation scheme based on cycle-consistent activation maximization, which generates high-quality visualizations of classifier decisions even on smaller data sets (an illustrative sketch of plain activation maximization appears after this list).
arXiv Detail & Related papers (2020-10-09T14:39:27Z) - Learning Interpretable Microscopic Features of Tumor by Multi-task
Adversarial CNNs To Improve Generalization [1.7371375427784381]
Existing CNN models act as black boxes, offering physicians no assurance that the model relies on important diagnostic features.
Here we show that our architecture, by learning an uncertainty-based weighted combination of multi-task and adversarial losses end-to-end, is encouraged to focus on pathology features.
Our results on breast lymph node tissue show significantly improved generalization in the detection of tumorous tissue, with a best average AUC of 0.89 (0.01) against the baseline AUC of 0.86 (0.005).
arXiv Detail & Related papers (2020-08-04T12:10:35Z)
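For the MODL/KNNS entry above, here is the illustrative sketch of K-nearest-neighbor smoothing referenced in that item. It is a plausible reading of the method's name rather than the paper's exact formulation; knn_smooth, k, and alpha are hypothetical names and defaults, and the brute-force distance matrix is only suitable for small N.

```python
# Hypothetical sketch of K-nearest-neighbor smoothing (KNNS): blend each
# sample's predicted class probabilities with those of its K nearest
# neighbors in embedding space. The paper's exact formulation may differ.
import numpy as np

def knn_smooth(embeddings: np.ndarray, probs: np.ndarray,
               k: int = 5, alpha: float = 0.5) -> np.ndarray:
    """embeddings: (N, D) features; probs: (N, C) class probabilities."""
    # Pairwise squared Euclidean distances between all samples.
    d2 = ((embeddings[:, None, :] - embeddings[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                 # exclude self-matches
    nn_idx = np.argsort(d2, axis=1)[:, :k]       # K nearest neighbors per row
    neighbor_mean = probs[nn_idx].mean(axis=1)   # (N, C) neighborhood average
    return alpha * probs + (1 - alpha) * neighbor_mean
```

Here alpha balances a sample's own prediction against its neighborhood consensus; searching in the model's embedding space lets visually similar studies vote on each other's labels.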
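Similarly, for the cycle-consistent activation maximization entry, the sketch below shows plain activation maximization only: gradient ascent on an input to maximize one class logit of a trained classifier. The CycleGAN component that keeps the visualization realistic is not reproduced here, and the step count and learning rate are arbitrary.

```python
# Illustrative activation maximization: gradient-ascend an input image so a
# trained classifier's logit for one class grows. The CycleGAN constraint
# from the paper, which keeps the result image-like, is omitted.
import torch

def activation_maximization(model: torch.nn.Module, target_class: int,
                            shape=(1, 1, 128, 128), steps: int = 200,
                            lr: float = 0.1) -> torch.Tensor:
    model.eval()                                  # freeze batch-norm/dropout
    x = torch.zeros(shape, requires_grad=True)    # start from a blank image
    opt = torch.optim.Adam([x], lr=lr)            # optimize the input only
    for _ in range(steps):
        opt.zero_grad()
        logits = model(x)                         # assumes model returns logits
        (-logits[0, target_class]).backward()     # ascend the target logit
        opt.step()
    return x.detach()
```

Without an image prior the result is usually an adversarial-looking pattern; that is precisely the gap the cycle-consistency constraint in the paper is meant to close.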