Implementation of a Modified U-Net for Medical Image Segmentation on
Edge Devices
- URL: http://arxiv.org/abs/2206.02358v1
- Date: Mon, 6 Jun 2022 05:25:19 GMT
- Title: Implementation of a Modified U-Net for Medical Image Segmentation on
Edge Devices
- Authors: Owais Ali, Hazrat Ali, Syed Ayaz Ali Shah, Aamir Shahzad
- Abstract summary: We present the implementation of a Modified U-Net on the Intel Movidius Neural Compute Stick 2 (NCS-2) for the segmentation of medical images.
Experiments are reported for the segmentation task on three medical imaging datasets: the BraTS brain MRI dataset, a heart MRI dataset, and the ZNSDB dataset.
- Score: 0.5735035463793008
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning techniques, particularly convolutional neural networks, have
shown great potential in computer vision and medical imaging applications.
However, deep learning models are computationally demanding, as they require
enormous computational power and specialized processing hardware for model
training. To make these models portable and suitable for prototyping, their
implementation on low-power devices is imperative. In this work, we present the
implementation of a Modified U-Net on the Intel Movidius Neural Compute Stick 2
(NCS-2) for the segmentation of medical images. We selected U-Net because it is
a prominent model for medical image segmentation that performs well even when
the dataset size is small. The modified U-Net model is evaluated in terms of
the dice score. Experiments are reported for the segmentation task on three
medical imaging datasets: the BraTS brain MRI dataset, a heart MRI dataset, and
the Ziehl-Neelsen sputum smear microscopy image (ZNSDB) dataset. We reduced the
number of parameters from 30 million in the original U-Net to 0.49 million in
the proposed architecture. Experimental results show that the modified U-Net
delivers comparable performance while requiring significantly fewer resources,
and supports inference on the NCS-2. The maximum dice scores recorded are 0.96
for the BraTS dataset, 0.94 for the heart MRI dataset, and 0.74 for the ZNSDB
dataset.
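The dice score used for evaluation above can be sketched as follows. This is a minimal NumPy implementation for binary segmentation masks; the function and variable names are illustrative, not taken from the paper:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient for binary segmentation masks.

    pred, target: arrays of the same shape with values in {0, 1}.
    eps guards against division by zero when both masks are empty.
    """
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Perfect overlap gives 1.0; partial overlap gives a value between 0 and 1.
a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [0, 0]])
print(round(dice_score(a, a), 3))  # 1.0
print(round(dice_score(a, b), 3))  # 0.667
```

A reported dice of 0.96 on BraTS thus means the predicted and reference tumor masks overlap almost completely.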
Related papers
- Residual Vision Transformer (ResViT) Based Self-Supervised Learning Model for Brain Tumor Classification [0.08192907805418585]
Self-supervised learning models provide data-efficient and remarkable solutions to limited dataset problems.
This paper introduces a generative SSL model for brain tumor classification in two stages.
The proposed model attains the highest accuracy, achieving 90.56% on the BraTS dataset with the T1 sequence, 98.53% on the Figshare dataset, and 98.47% on the Kaggle brain tumor dataset.
arXiv Detail & Related papers (2024-11-19T21:42:57Z)
- Brain Tumor Classification on MRI in Light of Molecular Markers [61.77272414423481]
Co-deletion of the 1p/19q gene is associated with clinical outcomes in low-grade gliomas.
This study aims to utilize a specialized MRI-based convolutional neural network for brain cancer detection.
arXiv Detail & Related papers (2024-09-29T07:04:26Z)
- Comparative Analysis of ImageNet Pre-Trained Deep Learning Models and DINOv2 in Medical Imaging Classification [7.205610366609243]
In this paper, we performed a glioma grading task using three clinical modalities of brain MRI data.
We compared the performance of various pre-trained deep learning models, including those based on ImageNet and DINOv2.
Our findings indicate that in our clinical dataset, DINOv2's performance was not as strong as ImageNet-based pre-trained models.
arXiv Detail & Related papers (2024-02-12T11:49:08Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- CFPNet-M: A Light-Weight Encoder-Decoder Based Network for Multimodal Biomedical Image Real-Time Segmentation [0.0]
We developed a novel light-weight architecture -- Channel-wise Feature Pyramid Network for Medicine.
It achieves comparable segmentation results on all five medical datasets with only 0.65 million parameters, which is about 2% of U-Net, and 8.8 MB memory.
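The "about 2% of U-Net" figure can be checked directly, assuming the roughly 30-million-parameter U-Net baseline quoted in the abstract above:

```python
# Parameter counts taken from the abstracts above; the ratio confirms
# that 0.65 M parameters is roughly 2% of a ~30 M-parameter U-Net.
cfpnet_m_params = 0.65e6
unet_params = 30e6
print(f"{cfpnet_m_params / unet_params:.1%}")  # prints 2.2%
```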
arXiv Detail & Related papers (2021-05-10T02:29:11Z)
- ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
We compare estimation accuracy and fidelity of the generated mixed models, statistical models with the roofline model, and a refined roofline model for evaluation.
arXiv Detail & Related papers (2021-05-07T11:39:05Z)
- 3D U-Net for segmentation of COVID-19 associated pulmonary infiltrates using transfer learning: State-of-the-art results on affordable hardware [0.0]
Pulmonary infiltrates can help assess the severity of COVID-19, but manual segmentation is labor- and time-intensive.
Using neural networks to segment pulmonary infiltrates would enable automation of this task.
We developed and tested a solution on how transfer learning can be used to train state-of-the-art segmentation models on limited hardware and in shorter time.
arXiv Detail & Related papers (2021-01-25T09:37:32Z)
- Automated Model Design and Benchmarking of 3D Deep Learning Models for COVID-19 Detection with Chest CT Scans [72.04652116817238]
We propose a differentiable neural architecture search (DNAS) framework to automatically search for the 3D DL models for 3D chest CT scans classification.
We also exploit the Class Activation Mapping (CAM) technique on our models to provide the interpretability of the results.
arXiv Detail & Related papers (2021-01-14T03:45:01Z)
- U-Net Based Architecture for an Improved Multiresolution Segmentation in Medical Images [0.0]
We have proposed a fully convolutional neural network for image segmentation in a multi-resolution framework.
In the proposed architecture (mrU-Net), the input image and its down-sampled versions were used as the network inputs.
We trained and tested the network on four different medical datasets.
arXiv Detail & Related papers (2020-07-16T10:19:01Z)
- Modelling the Distribution of 3D Brain MRI using a 2D Slice VAE [66.63629641650572]
We propose a method to model 3D MR brain volumes distribution by combining a 2D slice VAE with a Gaussian model that captures the relationships between slices.
We also introduce a novel evaluation method for generated volumes that quantifies how well their segmentations match those of true brain anatomy.
arXiv Detail & Related papers (2020-07-09T13:23:15Z)
- A Data and Compute Efficient Design for Limited-Resources Deep Learning [68.55415606184]
Equivariant neural networks have gained increasing interest in the deep learning community.
They have been successfully applied in the medical domain where symmetries in the data can be effectively exploited to build more accurate and robust models.
Mobile, on-device implementations of deep learning solutions have been developed for medical applications.
However, equivariant models are commonly implemented using large and computationally expensive architectures, not suitable to run on mobile devices.
In this work, we design and test an equivariant version of MobileNetV2 and further optimize it with model quantization to enable more efficient inference.
arXiv Detail & Related papers (2020-04-21T00:49:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.