Radiologist-level Performance by Using Deep Learning for Segmentation of
Breast Cancers on MRI Scans
- URL: http://arxiv.org/abs/2009.09827v2
- Date: Tue, 12 Apr 2022 16:31:16 GMT
- Authors: Lukas Hirsch, Yu Huang, Shaojun Luo, Carolina Rossi Saccarelli,
Roberto Lo Gullo, Isaac Daimiel Naranjo, Almir G.V. Bitencourt, Natsuko
Onishi, Eun Sook Ko, Doris Leithner, Daly Avendano, Sarah Eskreis-Winkler,
Mary Hughes, Danny F. Martinez, Katja Pinker, Krishna Juluru, Amin E.
El-Rowmeim, Pierre Elnajjar, Elizabeth A. Morris, Hernan A. Makse, Lucas C
Parra, Elizabeth J. Sutton
- Abstract summary: The highest-performing network on the training set was a 3D U-Net with dynamic contrast-enhanced MRI as input and with intensity normalized for each examination.
The performance of the network was equivalent to that of the radiologists.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Purpose: To develop a deep network architecture that would achieve fully
automated radiologist-level segmentation of cancers at breast MRI. Materials
and Methods: In this retrospective study, 38229 examinations (composed of 64063
individual breast scans from 14475 patients) were performed in female patients
(age range, 12-94 years; mean age, 52 years +/- 10 [standard deviation]) who
presented between 2002 and 2014 at a single clinical site. A total of 2555
breast cancers were selected that had been segmented on two-dimensional (2D)
images by radiologists, as well as 60108 benign breasts that served as examples
of noncancerous tissue; all these were used for model training. For testing, an
additional 250 breast cancers were segmented independently on 2D images by four
radiologists. Authors selected among several three-dimensional (3D) deep
convolutional neural network architectures, input modalities, and harmonization
methods. The outcome measure was the Dice score for 2D segmentation, which was
compared between the network and radiologists by using the Wilcoxon signed rank
test and the two one-sided test procedure. Results: The highest-performing
network on the training set was a 3D U-Net with dynamic contrast-enhanced MRI
as input and with intensity normalized for each examination. In the test set,
the median Dice score of this network was 0.77 (interquartile range, 0.26). The
performance of the network was equivalent to that of the radiologists (two
one-sided test procedures with radiologist performance of 0.69-0.84 as
equivalence bounds, P <= .001 for both; n = 250). Conclusion: When trained on a
sufficiently large dataset, the developed 3D U-Net performed as well as
fellowship-trained radiologists in detailed 2D segmentation of breast cancers
at routine clinical MRI.
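The highest-performing network normalized intensities for each examination. A minimal sketch of what per-examination normalization can look like is below (an illustrative z-score version; the paper does not specify the exact scheme, so treat this as an assumption):

```python
import numpy as np

def normalize_examination(volume):
    """Z-score normalize an MRI volume using statistics computed over
    the whole examination (illustrative sketch, not the authors' exact
    preprocessing). Maps intensities to zero mean and unit variance."""
    volume = volume.astype(np.float64)
    mean = volume.mean()
    std = volume.std()
    if std == 0:
        # Constant volume: subtracting the mean yields all zeros.
        return volume - mean
    return (volume - mean) / std
```

Normalizing per examination (rather than per slice or with fixed global statistics) harmonizes scanner- and protocol-dependent intensity scales across a heterogeneous multi-year dataset.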
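The outcome measure throughout is the Dice score between a predicted and a reference 2D mask. A small self-contained sketch (standard definition; the empty-mask convention here is an assumption, as the paper does not state one):

```python
import numpy as np

def dice_score(pred, truth):
    """Dice overlap between two binary 2D masks:
    2 * |pred & truth| / (|pred| + |truth|)."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated as perfect agreement
    return 2.0 * intersection / denom
```

In the study, per-lesion Dice scores of the network were then compared against the radiologists' scores with the Wilcoxon signed rank test, and equivalence was declared when the network's performance fell within the radiologist-derived bounds (0.69-0.84) under the two one-sided test procedure.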
Related papers
- Classification of Prostate Cancer in 3D Magnetic Resonance Imaging Data based on Convolutional Neural Networks [0.0]
Prostate cancer is a commonly diagnosed cancer among men worldwide.
CNNs are evaluated on their ability to reliably classify whether an MRI sequence contains malignant lesions.
The best result was achieved by a ResNet3D, yielding an average precision score of 0.4583 and an AUC ROC score of 0.6214.
arXiv Detail & Related papers (2024-04-16T13:18:02Z)
- Multimodal CNN Networks for Brain Tumor Segmentation in MRI: A BraTS 2022 Challenge Solution [0.0]
This paper describes our contribution to the BraTS 2022 Continuous Evaluation challenge.
We propose a new ensemble of multiple deep learning frameworks, namely DeepSeg, nnU-Net, and DeepSCAN, for automatic glioma boundary detection in pre-operative MRI.
It is worth noting that our ensemble models took first place in the final evaluation on the BraTS testing dataset, with Dice scores of 0.9294, 0.8788, and 0.8803, and Hausdorff distances of 5.23, 13.54, and 12.05, for the whole tumor, tumor core, and enhancing tumor, respectively.
arXiv Detail & Related papers (2022-12-19T09:14:23Z)
- Joint nnU-Net and Radiomics Approaches for Segmentation and Prognosis of Head and Neck Cancers with PET/CT images [6.361835964390572]
A 3D nnU-Net architecture was adopted for automatic segmentation of the primary tumor and lymph nodes synchronously.
Three prognostic models were constructed, containing conventional and radiomics features alone.
Dice score and C-index were used as evaluation metrics for the segmentation and prognosis tasks.
arXiv Detail & Related papers (2022-11-18T10:31:26Z)
- Moving from 2D to 3D: volumetric medical image classification for rectal cancer staging [62.346649719614]
Preoperative discrimination between T2 and T3 stages is arguably both the most challenging and clinically significant task for rectal cancer treatment.
We present a volumetric convolutional neural network to accurately discriminate T2 from T3 stage rectal cancer with rectal MR volumes.
arXiv Detail & Related papers (2022-09-13T07:10:14Z)
- Comparison of automatic prostate zones segmentation models in MRI images using U-net-like architectures [0.9786690381850356]
Prostate cancer is the sixth leading cause of cancer death in males worldwide.
Currently, the segmentation of Regions of Interest (ROI) containing tumor tissue is carried out manually by expert doctors.
Several research works have tackled the challenge of automatically segmenting and extracting features of the ROI from magnetic resonance images.
In this work, six deep learning models were trained and analyzed with a dataset of MRI images obtained from the Centre Hospitalaire de Dijon and Universitat Politecnica de Catalunya.
arXiv Detail & Related papers (2022-07-19T18:00:41Z)
- EMT-NET: Efficient multitask network for computer-aided diagnosis of breast cancer [58.720142291102135]
We propose an efficient and lightweight learning architecture to classify and segment breast tumors simultaneously.
We incorporate a segmentation task into a tumor classification network, which makes the backbone network learn representations focused on tumor regions.
The accuracy, sensitivity, and specificity of tumor classification are 88.6%, 94.1%, and 85.3%, respectively.
arXiv Detail & Related papers (2022-01-13T05:24:40Z)
- Osteoporosis Prescreening using Panoramic Radiographs through a Deep Convolutional Neural Network with Attention Mechanism [65.70943212672023]
A deep convolutional neural network (CNN) with an attention module can detect osteoporosis on panoramic radiographs.
A dataset of 70 panoramic radiographs (PRs) from 70 different subjects aged 49 to 60 was used.
arXiv Detail & Related papers (2021-10-19T00:03:57Z)
- Brain tumor segmentation with self-ensembled, deeply-supervised 3D U-net neural networks: a BraTS 2020 challenge solution [56.17099252139182]
We automate and standardize the task of brain tumor segmentation with U-net like neural networks.
Two independent ensembles of models were trained, and each produced a brain tumor segmentation map.
Our solution achieved Dice scores of 0.79, 0.89, and 0.84, as well as 95% Hausdorff distances of 20.4, 6.7, and 19.5 mm on the final test dataset.
arXiv Detail & Related papers (2020-10-30T14:36:10Z)
- Segmentation of the Myocardium on Late-Gadolinium Enhanced MRI based on 2.5D Residual Squeeze and Excitation Deep Learning Model [55.09533240649176]
The aim of this work is to develop an accurate automatic segmentation method based on deep learning models for the myocardial borders on LGE-MRI.
A total of 320 exams (with a mean of 6 slices per exam) were used for training and 28 exams for testing.
The performance of the proposed ensemble model in the basal and middle slices was similar to that of the intra-observer study and slightly lower at apical slices.
arXiv Detail & Related papers (2020-05-27T20:44:38Z)
- A Global Benchmark of Algorithms for Segmenting Late Gadolinium-Enhanced Cardiac Magnetic Resonance Imaging [90.29017019187282]
" 2018 Left Atrium Challenge" using 154 3D LGE-MRIs, currently the world's largest cardiac LGE-MRI dataset.
Analyse of the submitted algorithms using technical and biological metrics was performed.
Results show the top method achieved a dice score of 93.2% and a mean surface to a surface distance of 0.7 mm.
arXiv Detail & Related papers (2020-04-26T08:49:17Z)
- A neural network model that learns differences in diagnosis strategies among radiologists has an improved area under the curve for aneurysm status classification in magnetic resonance angiography image series [0.0]
This retrospective study included 3423 time-of-flight brain magnetic resonance angiography image series.
The image series were read independently for aneurysm status by one of four board-certified radiologists.
The constructed neural networks were trained to classify the aneurysm status of zero to five aneurysm-suspicious areas.
arXiv Detail & Related papers (2020-02-03T19:19:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.