Seeing Beyond Cancer: Multi-Institutional Validation of Object
Localization and 3D Semantic Segmentation using Deep Learning for Breast MRI
- URL: http://arxiv.org/abs/2311.16213v1
- Date: Mon, 27 Nov 2023 18:22:07 GMT
- Title: Seeing Beyond Cancer: Multi-Institutional Validation of Object
Localization and 3D Semantic Segmentation using Deep Learning for Breast MRI
- Authors: Arda Pekis, Vignesh Kannan, Evandros Kaklamanos, Anu Antony, Snehal
Patel, Tyler Earnest
- Abstract summary: We present a method that exploits tissue-tissue interactions to accurately segment every major tissue type in the breast.
By integrating multiple relevant peri-tumoral tissues, our work enables clinical applications in breast cancer staging, prognosis and surgical planning.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The clinical management of breast cancer depends on an accurate understanding
of the tumor and its anatomical context relative to adjacent tissues and landmark
structures. This context may be provided by semantic segmentation methods;
however, previous works have largely focused on the tumor alone and rarely
address other tissue types. In contrast, we present a method
that exploits tissue-tissue interactions to accurately segment every major
tissue type in the breast, including the chest wall, skin, adipose tissue,
fibroglandular tissue, vasculature, and tumor, from standard-of-care Dynamic
Contrast-Enhanced MRI. Compared to the prior state of the art, our method
achieved a superior Dice score on tumor segmentation while maintaining
competitive performance on the other studied tissues across multiple institutions.
Briefly, our method proceeds by localizing the tumor using 2D object detectors,
then segmenting the tumor and surrounding tissues independently using two 3D
U-Nets, and finally integrating these results while mitigating false positives
by checking for anatomically plausible tissue-tissue contacts. The object
detection models were pre-trained on ImageNet and COCO, and operated on MIP
(maximum intensity projection) images in the axial and sagittal planes,
establishing a 3D tumor bounding box. By integrating multiple relevant
peri-tumoral tissues, our work enables clinical applications in breast cancer
staging, prognosis and surgical planning.
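The localization step described above (2D detections on axial and sagittal MIPs fused into a 3D bounding box) and the Dice metric used for evaluation can be sketched as below. This is an illustrative NumPy sketch, not the authors' implementation: the `(z, y, x)` axis convention, the box tuple format, and the choice to intersect the shared y-extent are all assumptions.

```python
import numpy as np

def mip(volume: np.ndarray, axis: int) -> np.ndarray:
    """Maximum intensity projection of a (z, y, x) volume along one axis."""
    return volume.max(axis=axis)

def combine_boxes(axial_box, sagittal_box):
    """Fuse two 2D detections into a 3D bounding box.

    axial_box:    (y0, y1, x0, x1) detected on the MIP over z
    sagittal_box: (z0, z1, y0, y1) detected on the MIP over x
    Returns (z0, z1, y0, y1, x0, x1); the y-extent, seen in both
    views, is taken as their intersection.
    """
    ay0, ay1, x0, x1 = axial_box
    z0, z1, sy0, sy1 = sagittal_box
    y0, y1 = max(ay0, sy0), min(ay1, sy1)
    return (z0, z1, y0, y1, x0, x1)

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Toy volume with a bright "tumor" block at z 2:5, y 3:6, x 1:4
vol = np.zeros((8, 8, 8))
vol[2:5, 3:6, 1:4] = 1.0
axial = mip(vol, axis=0)     # (y, x) image
sagittal = mip(vol, axis=2)  # (z, y) image
print(combine_boxes((3, 6, 1, 4), (2, 5, 3, 6)))  # (2, 5, 3, 6, 1, 4)
```

In practice the 2D boxes would come from the trained object detectors; here they are hard-coded to match the toy volume so the fusion logic can be checked directly.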
Related papers
- A novel method to compute the contact surface area between an organ and cancer tissue [81.84413479369512]
The "contact surface area" (CSA) refers to the area of contact between a tumor and an organ.
We introduce an innovative method that relies on 3D reconstructions of tumors and organs to provide an accurate and objective estimate of the CSA.
arXiv Detail & Related papers (2024-01-19T14:34:34Z) - Moving from 2D to 3D: volumetric medical image classification for rectal
cancer staging [62.346649719614]
Preoperative discrimination between the T2 and T3 stages is arguably both the most challenging and the most clinically significant task in rectal cancer treatment.
We present a volumetric convolutional neural network to accurately discriminate T2 from T3 stage rectal cancer with rectal MR volumes.
arXiv Detail & Related papers (2022-09-13T07:10:14Z) - A unified 3D framework for Organs at Risk Localization and Segmentation
for Radiation Therapy Planning [56.52933974838905]
Current medical workflows require manual delineation of organs-at-risk (OARs).
In this work, we aim to introduce a unified 3D pipeline for OAR localization-segmentation.
Our proposed framework fully enables the exploitation of 3D context information inherent in medical imaging.
arXiv Detail & Related papers (2022-03-01T17:08:41Z) - EMT-NET: Efficient multitask network for computer-aided diagnosis of
breast cancer [58.720142291102135]
We propose an efficient, lightweight learning architecture to classify and segment breast tumors simultaneously.
We incorporate a segmentation task into a tumor classification network, which makes the backbone network learn representations focused on tumor regions.
The accuracy, sensitivity, and specificity of tumor classification are 88.6%, 94.1%, and 85.3%, respectively.
arXiv Detail & Related papers (2022-01-13T05:24:40Z) - Learned super resolution ultrasound for improved breast lesion
characterization [52.77024349608834]
Super resolution ultrasound localization microscopy enables imaging of the microvasculature at the capillary level.
In this work we use a deep neural network architecture that makes effective use of signal structure to address these challenges.
By leveraging our trained network, the microvasculature structure is recovered in a short time, without prior PSF knowledge, and without requiring separability of the UCAs.
arXiv Detail & Related papers (2021-07-12T09:04:20Z) - Brain Tumor Segmentation Network Using Attention-based Fusion and
Spatial Relationship Constraint [19.094164029068462]
We develop a novel multi-modal tumor segmentation network (MMTSN) to robustly segment brain tumors based on multi-modal MR images.
We evaluate our method on the test set of the Multimodal Brain Tumor Segmentation Challenge 2020 (BraTS 2020).
arXiv Detail & Related papers (2020-10-29T14:51:10Z) - ESTAN: Enhanced Small Tumor-Aware Network for Breast Ultrasound Image
Segmentation [0.0]
We propose a novel deep neural network architecture, namely Enhanced Small Tumor-Aware Network (ESTAN) to accurately segment breast tumors.
ESTAN introduces two encoders to extract and fuse image context information at different scales and utilizes row-column-wise kernels in the encoder to adapt to breast anatomy.
arXiv Detail & Related papers (2020-09-27T16:42:59Z) - Soft Tissue Sarcoma Co-Segmentation in Combined MRI and PET/CT Data [2.2515303891664358]
Tumor segmentation in multimodal medical images has seen a growing trend towards deep learning based methods.
We propose a simultaneous co-segmentation method, which enables multimodal feature learning through modality-specific encoder and decoder branches.
We demonstrate the effectiveness of our approach on public soft tissue sarcoma data, which comprises MRI (T1 and T2 sequence) and PET/CT scans.
arXiv Detail & Related papers (2020-08-28T09:15:42Z) - Perfusion Quantification from Endoscopic Videos: Learning to Read Tumor
Signatures [3.5769263034973697]
We propose a perfusion quantification method for computer-aided interpretation of subtle differences in dynamic perfusion patterns.
The method exploits the fact that vasculature arising from cancer angiogenesis gives tumors differing perfusion patterns from the surrounding tissue.
Experimental evaluation of our method on a cohort of colorectal cancer surgery endoscopic videos suggests that the proposed tumor signature is able to successfully discriminate between healthy, cancerous and benign tissue with 95% accuracy.
arXiv Detail & Related papers (2020-06-25T11:53:20Z) - Stan: Small tumor-aware network for breast ultrasound image segmentation [68.8204255655161]
We propose a novel deep learning architecture called Small Tumor-Aware Network (STAN) to improve the performance of segmenting tumors of different sizes.
The proposed approach outperformed the state-of-the-art approaches in segmenting small breast tumors.
arXiv Detail & Related papers (2020-02-03T22:25:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.