Feature robustness and sex differences in medical imaging: a case study
in MRI-based Alzheimer's disease detection
- URL: http://arxiv.org/abs/2204.01737v1
- Date: Mon, 4 Apr 2022 17:37:54 GMT
- Title: Feature robustness and sex differences in medical imaging: a case study
in MRI-based Alzheimer's disease detection
- Authors: Eike Petersen and Aasa Feragen and Luise da Costa Zemsch and Anders
Henriksen and Oskar Eiler Wiese Christensen and Melanie Ganz
- Abstract summary: We compare two classification schemes on the ADNI MRI dataset.
We do not find a strong dependence of model performance for male and female test subjects on the sex composition of the training dataset.
- Score: 1.7616042687330637
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Convolutional neural networks have enabled significant improvements in
medical image-based disease classification. It has, however, become
increasingly clear that these models are susceptible to performance degradation
due to spurious correlations and dataset shifts, which may lead to
underperformance on underrepresented patient groups, among other problems. In
this paper, we compare two classification schemes on the ADNI MRI dataset: a
very simple logistic regression model that uses manually selected volumetric
features as inputs, and a convolutional neural network trained on 3D MRI data.
We assess the robustness of the trained models in the face of varying dataset
splits, training set sex composition, and stage of disease. In contrast to
earlier work on diagnosing lung diseases based on chest x-ray data, we do not
find a strong dependence of model performance for male and female test subjects
on the sex composition of the training dataset. Moreover, in our analysis, the
low-dimensional model with manually selected features outperforms the 3D CNN,
thus emphasizing the need for automatic robust feature extraction methods and
the value of manual feature specification (based on prior knowledge) for
robustness.
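
To make the described experiment concrete, the following is a minimal, hypothetical sketch of the kind of analysis the abstract outlines: a logistic regression on manually selected volumetric features, retrained on training sets with varying male/female composition and evaluated separately on male and female test subjects. The feature names (hippocampus_vol, ventricle_vol), the synthetic placeholder data, and the composition ratios are illustrative assumptions only, not the authors' actual ADNI pipeline.

```python
# Minimal, hypothetical sketch of the sex-composition robustness experiment:
# a logistic regression on manually selected volumetric features, retrained on
# training sets with different male/female ratios and evaluated separately on
# male and female test subjects. All names and data here are placeholders.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for an ADNI-derived table of volumetric features,
# sex, and diagnosis labels (a real analysis would load these from file).
n = 1200
df = pd.DataFrame({
    "hippocampus_vol": rng.normal(3.5, 0.5, n),
    "ventricle_vol": rng.normal(30.0, 8.0, n),
    "sex": rng.choice(["M", "F"], n),
})
risk = -1.5 * (df["hippocampus_vol"] - 3.5) + 0.05 * (df["ventricle_vol"] - 30.0)
df["diagnosis_ad"] = (risk + rng.normal(0.0, 1.0, n) > 0).astype(int)

features = ["hippocampus_vol", "ventricle_vol"]
train_df, test_df = train_test_split(
    df, test_size=0.3, stratify=df["sex"], random_state=0
)

def subsample_by_sex(frame, male_fraction, size, seed=0):
    """Draw a training subset with a fixed male/female composition."""
    n_male = int(round(male_fraction * size))
    males = frame[frame["sex"] == "M"].sample(n_male, random_state=seed)
    females = frame[frame["sex"] == "F"].sample(size - n_male, random_state=seed)
    return pd.concat([males, females])

for male_fraction in (0.0, 0.25, 0.5, 0.75, 1.0):
    train = subsample_by_sex(train_df, male_fraction, size=300)
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(train[features], train["diagnosis_ad"])
    # Score male and female test subjects separately to see whether performance
    # for either group depends on the sex composition of the training data.
    for sex in ("M", "F"):
        subset = test_df[test_df["sex"] == sex]
        auc = roc_auc_score(
            subset["diagnosis_ad"], model.predict_proba(subset[features])[:, 1]
        )
        print(f"train male fraction {male_fraction:.2f} | test {sex}: AUC={auc:.3f}")
```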
Related papers
- Self-Supervised Pretext Tasks for Alzheimer's Disease Classification using 3D Convolutional Neural Networks on Large-Scale Synthetic Neuroimaging Dataset [11.173478552040441]
Alzheimer's Disease (AD) induces both localised and widespread neural degenerative changes throughout the brain.
In this work, we evaluated several unsupervised methods to train a feature extractor for downstream AD vs. CN classification.
arXiv Detail & Related papers (2024-06-20T11:26:32Z) - Few-shot learning for COVID-19 Chest X-Ray Classification with
Imbalanced Data: An Inter vs. Intra Domain Study [49.5374512525016]
Medical image datasets are essential for training models used in computer-aided diagnosis, treatment planning, and medical research.
Some challenges are associated with these datasets, including variability in data distribution, data scarcity, and transfer learning issues when using models pre-trained from generic images.
We propose a methodology based on Siamese neural networks in which a series of techniques are integrated to mitigate the effects of data scarcity and distribution imbalance.
arXiv Detail & Related papers (2024-01-18T16:59:27Z) - The effect of data augmentation and 3D-CNN depth on Alzheimer's Disease
detection [51.697248252191265]
This work summarizes and strictly observes best practices regarding data handling, experimental design, and model evaluation.
We focus on Alzheimer's Disease (AD) detection, which serves as a paradigmatic example of a challenging problem in healthcare.
Within this framework, we train 15 predictive models, considering three different data augmentation strategies and five distinct 3D CNN architectures.
arXiv Detail & Related papers (2023-09-13T10:40:41Z) - Efficiently Training Vision Transformers on Structural MRI Scans for
Alzheimer's Disease Detection [2.359557447960552]
Vision transformers (ViT) have emerged in recent years as an alternative to CNNs for several computer vision applications.
We tested variants of the ViT architecture on a range of neuroimaging downstream tasks of varying difficulty.
We achieved performance boosts of 5% and 9-10% upon fine-tuning vision transformer models pre-trained on synthetic and real MRI scans, respectively.
arXiv Detail & Related papers (2023-03-14T20:18:12Z) - Self-Supervised Mental Disorder Classifiers via Time Reversal [0.0]
We demonstrate that a model trained on the time direction of functional neuroimaging data can help in downstream tasks.
We train a deep neural network on independent components derived from fMRI data using independent component analysis (ICA).
We show that learning time direction helps a model learn some causal relation in fMRI data that helps in faster convergence.
arXiv Detail & Related papers (2022-11-29T17:24:43Z) - DeepAD: A Robust Deep Learning Model of Alzheimer's Disease Progression
for Real-World Clinical Applications [0.9999629695552196]
We propose a novel multi-task deep learning model to predict Alzheimer's disease progression.
Our model integrates high dimensional MRI features from a 3D convolutional neural network with other data modalities.
arXiv Detail & Related papers (2022-03-17T05:42:00Z) - G-MIND: An End-to-End Multimodal Imaging-Genetics Framework for
Biomarker Identification and Disease Classification [49.53651166356737]
We propose a novel deep neural network architecture to integrate imaging and genetics data, as guided by diagnosis, that provides interpretable biomarkers.
We have evaluated our model on a population study of schizophrenia that includes two functional MRI (fMRI) paradigms and Single Nucleotide Polymorphism (SNP) data.
arXiv Detail & Related papers (2021-01-27T19:28:04Z) - Fader Networks for domain adaptation on fMRI: ABIDE-II study [68.5481471934606]
We use 3D convolutional autoencoders to build a domain-irrelevant latent-space image representation and demonstrate that this method outperforms existing approaches on ABIDE data.
arXiv Detail & Related papers (2020-10-14T16:50:50Z) - Select-ProtoNet: Learning to Select for Few-Shot Disease Subtype
Prediction [55.94378672172967]
We focus on the few-shot disease subtype prediction problem, identifying subgroups of similar patients.
We introduce meta learning techniques to develop a new model, which can extract the common experience or knowledge from interrelated clinical tasks.
Our new model is built upon a carefully designed meta-learner, the Prototypical Network, which is a simple yet effective meta-learning method for few-shot image classification.
arXiv Detail & Related papers (2020-09-02T02:50:30Z) - Deep Mining External Imperfect Data for Chest X-ray Disease Screening [57.40329813850719]
We argue that incorporating an external CXR dataset leads to imperfect training data, which raises several challenges.
We formulate the multi-label disease classification problem as weighted independent binary tasks according to the categories.
Our framework simultaneously models and tackles the domain and label discrepancies, enabling superior knowledge mining ability.
arXiv Detail & Related papers (2020-06-06T06:48:40Z)