Automatic classification of prostate MR series type using image content and metadata
- URL: http://arxiv.org/abs/2404.10892v2
- Date: Wed, 31 Jul 2024 15:18:40 GMT
- Title: Automatic classification of prostate MR series type using image content and metadata
- Authors: Deepa Krishnaswamy, Bálint Kovács, Stefan Denner, Steve Pieper, David Clunie, Christopher P. Bridge, Tina Kapur, Klaus H. Maier-Hein, Andrey Fedorov
- Abstract summary: We propose a deep-learning method for classification of prostate cancer scanning sequences based on a combination of image data and DICOM metadata.
We demonstrate superior results compared to metadata or image data alone.
- Score: 1.0959281779554237
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the wealth of medical image data, efficient curation is essential. Assigning the sequence type to magnetic resonance images is necessary for scientific studies and artificial intelligence-based analysis. However, incomplete or missing metadata prevents effective automation. We therefore propose a deep-learning method for classification of prostate cancer scanning sequences based on a combination of image data and DICOM metadata. We demonstrate superior results compared to metadata or image data alone, and make our code publicly available at https://github.com/deepakri201/DICOMScanClassification.
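The combination described in the abstract lends itself to a simple late-fusion design. The sketch below is a minimal illustration of that pattern, not the authors' published architecture (their code is at the GitHub link above): a CNN encodes the image, a small MLP encodes vectorized DICOM metadata (e.g., one-hot encoded tag values), and a linear head classifies the concatenation. The backbone, layer sizes, metadata dimensionality, and class list are all illustrative assumptions.

```python
# Minimal late-fusion sketch (not the authors' exact architecture):
# CNN image features are concatenated with an embedding of selected
# DICOM metadata fields before a small classification head.
import torch
import torch.nn as nn
import torchvision.models as models

SERIES_TYPES = ["T2", "ADC", "DWI", "DCE"]  # illustrative class list

class SeriesTypeClassifier(nn.Module):
    def __init__(self, n_metadata_features: int, n_classes: int = len(SERIES_TYPES)):
        super().__init__()
        backbone = models.resnet18(weights=None)  # any image encoder works here
        backbone.fc = nn.Identity()               # keep the 512-d pooled features
        self.image_encoder = backbone
        self.metadata_encoder = nn.Sequential(    # e.g. one-hot/tokenized DICOM tags
            nn.Linear(n_metadata_features, 64), nn.ReLU()
        )
        self.head = nn.Linear(512 + 64, n_classes)

    def forward(self, image, metadata):
        f_img = self.image_encoder(image)         # (B, 512)
        f_meta = self.metadata_encoder(metadata)  # (B, 64)
        return self.head(torch.cat([f_img, f_meta], dim=1))

model = SeriesTypeClassifier(n_metadata_features=32)
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 32))
print(logits.shape)  # torch.Size([2, 4])
```

In this fusion setup, the metadata vector would be built from tags such as SeriesDescription or EchoTime; tolerance to incomplete or missing tags is exactly what the image branch is meant to provide.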
Related papers
- Efficient Medical Image Retrieval Using DenseNet and FAISS for BIRADS Classification [0.0]
We propose an approach to medical image retrieval using DenseNet and FAISS.
DenseNet is well-suited for feature extraction in complex medical images.
FAISS enables efficient handling of high-dimensional data in large-scale datasets.
arXiv Detail & Related papers (2024-11-03T08:14:31Z)
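A minimal sketch of the retrieval pattern the entry above describes: DenseNet-121 pooled features feed a flat (exact) L2 FAISS index. The untrained weights, gallery size, and index type are illustrative assumptions.

```python
# Retrieval sketch: DenseNet features + a flat L2 FAISS index.
import faiss
import numpy as np
import torch
import torchvision.models as models

encoder = models.densenet121(weights=None)
encoder.classifier = torch.nn.Identity()  # expose the 1024-d pooled features
encoder.eval()

@torch.no_grad()
def embed(batch: torch.Tensor) -> np.ndarray:
    return encoder(batch).numpy().astype("float32")  # FAISS expects float32

# Index a (dummy) gallery of images, then query with a new image.
gallery = embed(torch.randn(100, 3, 224, 224))
index = faiss.IndexFlatL2(gallery.shape[1])  # exact L2 search
index.add(gallery)

query = embed(torch.randn(1, 3, 224, 224))
distances, neighbors = index.search(query, 5)  # 5 nearest gallery images
print(neighbors[0])
```

For large galleries, the exact flat index would typically be swapped for an approximate one (e.g., an IVF index) at some cost in recall.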
- CMRxRecon: An open cardiac MRI dataset for the competition of accelerated image reconstruction [62.61209705638161]
There has been growing interest in deep learning-based CMR imaging algorithms.
Deep learning methods require large training datasets.
This dataset includes multi-contrast, multi-view, multi-slice and multi-coil CMR imaging data from 300 subjects.
arXiv Detail & Related papers (2023-09-19T15:14:42Z)
- Optimizations of Autoencoders for Analysis and Classification of Microscopic In Situ Hybridization Images [68.8204255655161]
We propose a deep-learning framework to detect and classify areas of microscopic images with similar levels of gene expression.
The data we analyze requires an unsupervised learning model, for which we employ deep autoencoders, a type of artificial neural network.
arXiv Detail & Related papers (2023-04-19T13:45:28Z)
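As a concrete reference for the autoencoder entry above, here is a minimal convolutional autoencoder in PyTorch; the depth, channel widths, and patch size are illustrative assumptions, not the paper's configuration.

```python
# Minimal convolutional autoencoder sketch for unsupervised grouping
# of image patches; widths and input size are illustrative.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)        # compressed representation
        return self.decoder(z), z  # reconstruction + latent code

model = ConvAutoencoder()
x = torch.randn(8, 1, 64, 64)
recon, latent = model(x)
loss = nn.functional.mse_loss(recon, x)  # reconstruction objective
```

After training with the reconstruction loss, the latent codes can be clustered to group regions with similar appearance, and hence, in the in situ hybridization setting, similar levels of gene expression.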
- Generating and Weighting Semantically Consistent Sample Pairs for Ultrasound Contrastive Learning [10.631361618707214]
Well-annotated medical datasets enable deep neural networks (DNNs) to gain strong power in extracting lesion-related features.
Model pre-training based on ImageNet is a common practice to gain better generalization when the data amount is limited.
In this work, we pre-train on the ultrasound (US) domain instead of ImageNet to reduce the domain gap in medical US applications.
arXiv Detail & Related papers (2022-12-08T06:24:08Z)
- Voice-assisted Image Labelling for Endoscopic Ultrasound Classification using Neural Networks [48.732863591145964]
We propose a multi-modal convolutional neural network architecture that labels endoscopic ultrasound (EUS) images from raw verbal comments provided by a clinician during the procedure.
Our results show a prediction accuracy of 76% at the image level on a dataset with 5 different labels.
arXiv Detail & Related papers (2021-10-12T21:22:24Z)
- Self-Paced Contrastive Learning for Semi-supervised Medical Image Segmentation with Meta-labels [6.349708371894538]
We propose to adapt contrastive learning to work with meta-label annotations.
We use the meta-labels for pre-training the image encoder as well as to regularize a semi-supervised training.
Results on three different medical image segmentation datasets show that our approach substantially boosts the performance of a model trained on only a few annotated scans.
arXiv Detail & Related papers (2021-07-29T04:30:46Z)
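The meta-label idea in the entry above can be made concrete with a supervised-contrastive-style loss in which samples sharing a meta-label (e.g., scanner ID or slice position) are treated as positives. This is a minimal sketch of that pattern, not the paper's exact formulation; the loss form and temperature are assumptions.

```python
# Sketch: meta-labels define which embeddings count as positives in a
# supervised-contrastive-style loss.
import torch
import torch.nn.functional as F

def meta_label_contrastive_loss(z, meta_labels, temperature=0.1):
    z = F.normalize(z, dim=1)                      # unit-norm embeddings
    sim = z @ z.T / temperature                    # pairwise similarities
    same = meta_labels.unsqueeze(0) == meta_labels.unsqueeze(1)
    eye = torch.eye(len(z), dtype=torch.bool)
    pos = same & ~eye                              # positives: same meta-label
    sim = sim.masked_fill(eye, float("-inf"))      # exclude self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # average log-probability of positives per anchor (skip anchors w/o positives)
    n_pos = pos.sum(dim=1)
    loss = -(log_prob * pos).sum(dim=1)[n_pos > 0] / n_pos[n_pos > 0]
    return loss.mean()

z = torch.randn(16, 128)           # encoder outputs
meta = torch.randint(0, 4, (16,))  # e.g., 4 scanner IDs
print(meta_label_contrastive_loss(z, meta))
```

Pre-training the encoder with such a loss costs no manual annotation, since the meta-labels come for free with the acquisition.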
- G-MIND: An End-to-End Multimodal Imaging-Genetics Framework for Biomarker Identification and Disease Classification [49.53651166356737]
We propose a novel deep neural network architecture to integrate imaging and genetics data, as guided by diagnosis, that provides interpretable biomarkers.
We have evaluated our model on a population study of schizophrenia that includes two functional MRI (fMRI) paradigms and Single Nucleotide Polymorphism (SNP) data.
arXiv Detail & Related papers (2021-01-27T19:28:04Z)
- SAG-GAN: Semi-Supervised Attention-Guided GANs for Data Augmentation on Medical Images [47.35184075381965]
We present a data augmentation method for generating synthetic medical images using cycle-consistent Generative Adversarial Networks (GANs).
The proposed GANs-based model can generate a tumor image from a normal image, and in turn, it can also generate a normal image from a tumor image.
We train one classification model using real images with classic data augmentation methods, and another using synthetic images.
arXiv Detail & Related papers (2020-11-15T14:01:24Z)
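For the SAG-GAN entry above, the cycle-consistency constraint behind such augmentation can be sketched compactly: one generator maps normal to tumor images, a second maps tumor back to normal, and a round trip must reconstruct the input. The generator bodies below are trivial placeholders, and the adversarial terms from the two discriminators are omitted.

```python
# Sketch of the CycleGAN-style cycle-consistency constraint:
# G: normal -> tumor, F: tumor -> normal.
import torch
import torch.nn as nn

def tiny_generator():
    return nn.Sequential(  # stand-in for a real encoder-decoder generator
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1),
    )

G = tiny_generator()  # normal -> tumor
F = tiny_generator()  # tumor -> normal

normal = torch.randn(4, 1, 64, 64)
tumor = torch.randn(4, 1, 64, 64)

l1 = nn.L1Loss()
cycle_loss = l1(F(G(normal)), normal) + l1(G(F(tumor)), tumor)
# The full objective would add adversarial terms from two discriminators:
# loss = adv_G + adv_F + lambda_cyc * cycle_loss
print(cycle_loss.item())
```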
- Fader Networks for domain adaptation on fMRI: ABIDE-II study [68.5481471934606]
We use 3D convolutional autoencoders to build a domain-irrelevant latent representation of the images, and demonstrate that this method outperforms existing approaches on ABIDE data.
arXiv Detail & Related papers (2020-10-14T16:50:50Z)
- Weakly Supervised Context Encoder using DICOM metadata in Ultrasound Imaging [7.370841471918351]
We leverage DICOM metadata from ultrasound images to help learn representations of the ultrasound image.
We demonstrate that the proposed method outperforms the non-metadata based approaches across different downstream tasks.
arXiv Detail & Related papers (2020-03-20T02:17:03Z)
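The weak-supervision pattern in the last entry can be sketched as a pretext task: predict a cheaply available DICOM tag from the image, then reuse the pretrained encoder downstream. The tag choice (TransducerType), class count, and encoder below are illustrative assumptions, not the paper's setup.

```python
# Sketch: use a DICOM tag that ships with every study (here
# TransducerType, an illustrative choice) as a free weak label for
# pretraining an image encoder before the real downstream task.
import pydicom
import torch
import torch.nn as nn
import torchvision.models as models

def weak_label(path: str, vocabulary: dict) -> int:
    # Read metadata only; map each distinct tag value to an integer class.
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    value = str(ds.get("TransducerType", "UNKNOWN"))
    return vocabulary.setdefault(value, len(vocabulary))

encoder = models.resnet18(weights=None)
encoder.fc = nn.Linear(512, 8)  # predict one of (up to) 8 metadata classes

def pretrain_step(images, labels, optimizer):
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(encoder(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)
loss = pretrain_step(torch.randn(4, 3, 224, 224), torch.randint(0, 8, (4,)), opt)
```

After pretraining, the classification head would be replaced and the encoder fine-tuned on the actual downstream task.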
This list is automatically generated from the titles and abstracts of the papers on this site.