Multiclass Semantic Segmentation to Identify Anatomical Sub-Regions of
Brain and Measure Neuronal Health in Parkinson's Disease
- URL: http://arxiv.org/abs/2301.02925v1
- Date: Sat, 7 Jan 2023 19:35:28 GMT
- Title: Multiclass Semantic Segmentation to Identify Anatomical Sub-Regions of
Brain and Measure Neuronal Health in Parkinson's Disease
- Authors: Hosein Barzekar, Hai Ngu, Han Hui Lin, Mohsen Hejrati, Steven Ray
Valdespino, Sarah Chu, Baris Bingol, Somaye Hashemifar, Soumitra Ghosh
- Abstract summary: Currently, no machine learning model is available to analyze sub-anatomical regions of the brain in 2D histological images.
In this study, we trained our best-fit model on approximately one thousand annotated 2D brain images stained with Nissl/Haematoxylin and Tyrosine Hydroxylase enzyme (TH, an indicator of dopaminergic neuron viability).
The model effectively detects the two sub-regions, pars compacta (SNCD) and pars reticulata (SNr), in all the images.
- Score: 2.288652563296735
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automated segmentation of anatomical sub-regions with high precision has
become a necessity to enable the quantification and characterization of cells/
tissues in histology images. Currently, no machine learning model is available
to analyze sub-anatomical regions of the brain in 2D histological images.
Scientists rely on manually segmenting anatomical sub-regions of the brain,
which is extremely time-consuming and prone to labeler-dependent
bias. One of the major challenges in accomplishing such a task is the lack of
high-quality annotated images that can be used to train a generic artificial
intelligence model. In this study, we employed a UNet-based architecture and
compared model performance across various combinations of encoders, image
sizes, and sample-selection techniques. Additionally, to enlarge the sample
set, we applied data augmentation, which provided data diversity and robust
learning. We trained our best-fit model on approximately one
thousand annotated 2D brain images stained with Nissl/Haematoxylin and
Tyrosine Hydroxylase enzyme (TH, an indicator of dopaminergic neuron viability).
The dataset comprises multiple animal studies, enabling the model to be
trained on diverse data. The model effectively detects the two
sub-regions, pars compacta (SNCD) and pars reticulata (SNr), in all the images. In spite of
limited training data, our best model achieves a mean intersection over union
(IoU) of 79% and a mean Dice coefficient of 87%. In conclusion, the UNet-based
model with EfficientNet as the encoder outperforms all other encoders,
resulting in a first of its kind robust model for multiclass segmentation of
sub-brain regions in 2D images.
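The reported metrics can be reproduced from predicted and ground-truth label masks. Below is a minimal sketch of mean IoU and mean Dice over classes, assuming flat integer label lists and skipping classes absent from both masks; this is an illustrative reimplementation, not the authors' evaluation code:

```python
def confusion_counts(pred, target, num_classes):
    """Per-class intersection and per-mask pixel counts from flat label lists."""
    inter = [0] * num_classes
    pred_count = [0] * num_classes
    target_count = [0] * num_classes
    for p, t in zip(pred, target):
        pred_count[p] += 1
        target_count[t] += 1
        if p == t:
            inter[p] += 1
    return inter, pred_count, target_count

def mean_iou_and_dice(pred, target, num_classes):
    """Mean IoU and mean Dice, averaged over classes present in either mask."""
    inter, pc, tc = confusion_counts(pred, target, num_classes)
    ious, dices = [], []
    for c in range(num_classes):
        union = pc[c] + tc[c] - inter[c]
        if union == 0:  # class absent from both masks: excluded from the mean
            continue
        ious.append(inter[c] / union)                # IoU = |A∩B| / |A∪B|
        dices.append(2 * inter[c] / (pc[c] + tc[c]))  # Dice = 2|A∩B| / (|A|+|B|)
    return sum(ious) / len(ious), sum(dices) / len(dices)
```

Note that Dice is always at least as large as IoU for the same masks (Dice = 2·IoU / (1 + IoU)), consistent with the paper reporting 87% Dice alongside 79% IoU.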
Related papers
- Self-supervised Brain Lesion Generation for Effective Data Augmentation of Medical Images [0.9626666671366836]
We propose a framework to efficiently generate new samples for training a brain lesion segmentation model.
We first train a lesion generator, based on an adversarial autoencoder, in a self-supervised manner.
Next, we utilize a novel image composition algorithm, Soft Poisson Blending, to seamlessly combine synthetic lesions and brain images.
arXiv Detail & Related papers (2024-06-21T01:53:12Z) - Mine yOur owN Anatomy: Revisiting Medical Image Segmentation with Extremely Limited Labels [54.58539616385138]
We introduce a novel semi-supervised 2D medical image segmentation framework termed Mine yOur owN Anatomy (MONA).
First, prior work argues that every pixel equally matters to the model training; we observe empirically that this alone is unlikely to define meaningful anatomical features.
Second, we construct a set of objectives that encourage the model to be capable of decomposing medical images into a collection of anatomical features.
arXiv Detail & Related papers (2022-09-27T15:50:31Z) - FAST-AID Brain: Fast and Accurate Segmentation Tool using Artificial
Intelligence Developed for Brain [0.8376091455761259]
A novel deep learning method is proposed for fast and accurate segmentation of the human brain into 132 regions.
The proposed model uses an efficient U-Net-like network and benefits from the intersection points of different views and hierarchical relations.
The proposed method can be applied to brain MRI data including skull or any other artifacts without preprocessing the images or a drop in performance.
arXiv Detail & Related papers (2022-08-30T16:06:07Z) - Two-Stream Graph Convolutional Network for Intra-oral Scanner Image
Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z) - One Model is All You Need: Multi-Task Learning Enables Simultaneous
Histology Image Segmentation and Classification [3.8725005247905386]
We present a multi-task learning approach for segmentation and classification of tissue regions.
We enable simultaneous prediction with a single network.
As a result of feature sharing, we also show that the learned representation can be used to improve downstream tasks.
arXiv Detail & Related papers (2022-02-28T20:22:39Z) - An End-to-End Breast Tumour Classification Model Using Context-Based
Patch Modelling- A BiLSTM Approach for Image Classification [19.594639581421422]
We have tried to integrate this relationship along with feature-based correlation among the extracted patches from the particular tumorous region.
We trained and tested our model on two datasets, microscopy images and WSI tumour regions.
We found out that BiLSTMs with CNN features have performed much better in modelling patches into an end-to-end Image classification network.
arXiv Detail & Related papers (2021-06-05T10:43:58Z) - Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for
Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z) - G-MIND: An End-to-End Multimodal Imaging-Genetics Framework for
Biomarker Identification and Disease Classification [49.53651166356737]
We propose a novel deep neural network architecture to integrate imaging and genetics data, as guided by diagnosis, that provides interpretable biomarkers.
We have evaluated our model on a population study of schizophrenia that includes two functional MRI (fMRI) paradigms and Single Nucleotide Polymorphism (SNP) data.
arXiv Detail & Related papers (2021-01-27T19:28:04Z) - Fader Networks for domain adaptation on fMRI: ABIDE-II study [68.5481471934606]
We use 3D convolutional autoencoders to build the domain irrelevant latent space image representation and demonstrate this method to outperform existing approaches on ABIDE data.
arXiv Detail & Related papers (2020-10-14T16:50:50Z) - Modeling Shared Responses in Neuroimaging Studies through MultiView ICA [94.31804763196116]
Group studies involving large cohorts of subjects are important to draw general conclusions about brain functional organization.
We propose a novel MultiView Independent Component Analysis model for group studies, where data from each subject are modeled as a linear combination of shared independent sources plus noise.
We demonstrate the usefulness of our approach first on fMRI data, where our model demonstrates improved sensitivity in identifying common sources among subjects.
arXiv Detail & Related papers (2020-06-11T17:29:53Z) - Improving Calibration and Out-of-Distribution Detection in Medical Image
Segmentation with Convolutional Neural Networks [8.219843232619551]
Convolutional Neural Networks (CNNs) have been shown to be powerful medical image segmentation models.
We advocate for multi-task learning, i.e., training a single model on several different datasets.
We show not only that a single CNN learns to automatically recognize the context and accurately segment the organ of interest in each context, but also that such a joint model often produces more accurate and better-calibrated predictions.
arXiv Detail & Related papers (2020-04-12T23:42:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.