Deep Learning Predicts Mammographic Breast Density in Clinical Breast Ultrasound Images
- URL: http://arxiv.org/abs/2411.00891v2
- Date: Thu, 07 Nov 2024 21:25:07 GMT
- Title: Deep Learning Predicts Mammographic Breast Density in Clinical Breast Ultrasound Images
- Authors: Arianna Bunnell, Dustin Valdez, Thomas K. Wolfgruber, Brandon Quon, Kailee Hung, Brenda Y. Hernandez, Todd B. Seto, Jeffrey Killeen, Marshall Miyoshi, Peter Sadowski, John A. Shepherd
- Abstract summary: Mammographic breast density is one of the strongest risk factors for breast cancer.
Breast ultrasound (BUS) is an alternative breast cancer screening modality.
The purpose of this study was to explore an artificial intelligence (AI) model to predict BI-RADS mammographic breast density from BUS imaging.
- Score: 0.0
- Abstract: Background: Breast density, as derived from mammographic images and defined by the American College of Radiology's Breast Imaging Reporting and Data System (BI-RADS), is one of the strongest risk factors for breast cancer. Breast ultrasound (BUS) is an alternative breast cancer screening modality, particularly useful for early detection in low-resource, rural contexts. The purpose of this study was to explore an artificial intelligence (AI) model to predict BI-RADS mammographic breast density category from clinical, handheld BUS imaging. Methods: All data are sourced from the Hawaii and Pacific Islands Mammography Registry. We compared deep learning methods from BUS imaging, as well as machine learning models from image statistics alone. The use of AI-derived BUS density as a risk factor for breast cancer was then compared to clinical BI-RADS breast density while adjusting for age. The BUS data were split by individual into 70/20/10% groups for training, validation, and testing. Results: 405,120 clinical BUS images from 14,066 women were selected for inclusion in this study, resulting in 9,846 women for training (302,574 images), 2,813 for validation (11,223 images), and 1,406 for testing (4,042 images). On the held-out testing set, the strongest AI model achieves AUROC 0.854 predicting BI-RADS mammographic breast density from BUS imaging and outperforms all shallow machine learning methods based on image statistics. In cancer risk prediction, age-adjusted AI BUS breast density predicted 5-year breast cancer risk with 0.633 AUROC, as compared to 0.637 AUROC from age-adjusted clinical breast density. Conclusions: BI-RADS mammographic breast density can be estimated from BUS imaging with high accuracy using a deep learning model. Furthermore, we demonstrate that AI-derived BUS breast density is predictive of 5-year breast cancer risk in our population.
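As an illustration of the evaluation protocol described in the abstract (splitting images 70/20/10% by individual and scoring BI-RADS density predictions with AUROC), a minimal sketch follows. It is not the authors' code: the feature representation, the logistic-regression stand-in for the deep model, and all names and shapes are assumptions.

```python
"""Illustrative sketch (not the authors' code): patient-level 70/20/10 split
and multi-class AUROC evaluation for BI-RADS density prediction.
All names, shapes, and the logistic-regression stand-in model are assumptions."""
import numpy as np
from sklearn.model_selection import GroupShuffleSplit
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Toy stand-ins: one row per image, with a patient ID so that all images from
# a given woman land in exactly one of train/validation/test ("split by individual").
n_images, n_patients = 5000, 500
patient_id = rng.integers(0, n_patients, size=n_images)
features = rng.normal(size=(n_images, 64))       # placeholder image features
density = rng.integers(0, 4, size=n_images)      # BI-RADS density A-D coded 0-3

# 70% of patients to train, then split the remaining 30% into 20%/10%.
outer = GroupShuffleSplit(n_splits=1, train_size=0.7, random_state=0)
train_idx, rest_idx = next(outer.split(features, density, groups=patient_id))
inner = GroupShuffleSplit(n_splits=1, train_size=2 / 3, random_state=0)
val_rel, test_rel = next(inner.split(features[rest_idx], density[rest_idx],
                                     groups=patient_id[rest_idx]))
val_idx, test_idx = rest_idx[val_rel], rest_idx[test_rel]

# Stand-in classifier; the paper trains a deep network on BUS images instead.
clf = LogisticRegression(max_iter=1000).fit(features[train_idx], density[train_idx])

# Multi-class AUROC (one-vs-rest) on the held-out test patients.
probs = clf.predict_proba(features[test_idx])
print("test AUROC:", roc_auc_score(density[test_idx], probs, multi_class="ovr"))
```

Swapping the stand-in classifier for a CNN trained on the BUS images themselves, while keeping the group-wise split and AUROC scoring, mirrors the protocol the abstract describes.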
Related papers
- Improving Breast Cancer Grade Prediction with Multiparametric MRI Created Using Optimized Synthetic Correlated Diffusion Imaging [71.91773485443125]
Grading plays a vital role in breast cancer treatment planning.
The current tumor grading method involves extracting tissue from patients, leading to stress, discomfort, and high medical costs.
This paper examines using optimized CDI$^s$ to improve breast cancer grade prediction.
arXiv Detail & Related papers (2024-05-13T15:48:26Z)
- Cancer-Net BCa-S: Breast Cancer Grade Prediction using Volumetric Deep Radiomic Features from Synthetic Correlated Diffusion Imaging [82.74877848011798]
The prevalence of breast cancer continues to grow, affecting about 300,000 females in the United States in 2023.
The gold-standard Scarff-Bloom-Richardson (SBR) grade has been shown to consistently indicate a patient's response to chemotherapy.
In this paper, we study the efficacy of deep learning for breast cancer grading based on synthetic correlated diffusion (CDI$^s$) imaging.
arXiv Detail & Related papers (2023-04-12T15:08:34Z)
- A Multi-Institutional Open-Source Benchmark Dataset for Breast Cancer Clinical Decision Support using Synthetic Correlated Diffusion Imaging Data [82.74877848011798]
Cancer-Net BCa is a multi-institutional open-source benchmark dataset of volumetric CDI$^s$ imaging data of breast cancer patients.
Cancer-Net BCa is publicly available as a part of a global open-source initiative dedicated to accelerating advancement in machine learning to aid clinicians in the fight against cancer.
arXiv Detail & Related papers (2023-04-12T05:41:44Z)
- Multi-Head Feature Pyramid Networks for Breast Mass Detection [48.24995569980701]
We propose the multi-head feature pyramid module (MHFPN) to solve the problem of unbalanced focus of target boxes during feature map fusion.
Experimental studies show that, compared to SOTA detection baselines, our method improves AP@50 by 6.58% and TPR@50 by 5.4% on the commonly used INbreast dataset.
arXiv Detail & Related papers (2023-02-22T03:02:52Z)
- High-resolution synthesis of high-density breast mammograms: Application to improved fairness in deep learning based mass detection [48.88813637974911]
Computer-aided detection systems based on deep learning have shown good performance in breast cancer detection.
High-density breasts show poorer detection performance since dense tissues can mask or even simulate masses.
This study aims to improve the mass detection performance in high-density breasts using synthetic high-density full-field digital mammograms.
arXiv Detail & Related papers (2022-09-20T15:57:12Z)
- A multi-reconstruction study of breast density estimation using Deep Learning [0.9449650062296825]
Breast density estimation is one of the key tasks performed during a screening exam.
Deep-learning studies for breast density estimation use only a single modality for training a neural network.
In this paper, we show that a neural network trained on all the modalities at once performs better than a neural network trained on any single modality.
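As a small sketch of the "all modalities at once" training setup described above, the snippet below pools per-modality datasets into a single training set; the synthetic tensors and dataset sizes are placeholders, not the paper's data.

```python
"""Illustrative sketch only: pooling images from several reconstruction types /
modalities into one training set, rather than training one network per modality.
The synthetic TensorDataset stand-ins and sizes are assumptions."""
import torch
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

def fake_modality_dataset(n: int, seed: int) -> TensorDataset:
    # Stand-in for one modality's (image, density-label) pairs.
    g = torch.Generator().manual_seed(seed)
    images = torch.randn(n, 1, 64, 64, generator=g)
    labels = torch.randint(0, 4, (n,), generator=g)   # BI-RADS A-D coded 0-3
    return TensorDataset(images, labels)

# One dataset per reconstruction/modality, then a single pooled training set.
modalities = [fake_modality_dataset(200, s) for s in range(3)]
pooled = ConcatDataset(modalities)
loader = DataLoader(pooled, batch_size=32, shuffle=True)

images, labels = next(iter(loader))
print(images.shape, labels.shape)   # torch.Size([32, 1, 64, 64]) torch.Size([32])
```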
arXiv Detail & Related papers (2022-02-16T18:34:08Z)
- EMT-NET: Efficient multitask network for computer-aided diagnosis of breast cancer [58.720142291102135]
We propose an efficient and lightweight learning architecture to classify and segment breast tumors simultaneously.
We incorporate a segmentation task into a tumor classification network, which makes the backbone network learn representations focused on tumor regions.
The accuracy, sensitivity, and specificity of tumor classification are 88.6%, 94.1%, and 85.3%, respectively.
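As a rough illustration of the multitask idea summarized above (a shared backbone trained jointly with a tumor-classification head and a segmentation head, so the features concentrate on tumor regions), here is a minimal PyTorch sketch; layer sizes, loss weighting, and names are assumptions, not the EMT-NET architecture.

```python
"""Illustrative sketch only: a shared encoder with joint classification and
segmentation heads, in the spirit of the multitask entry above. Layer sizes,
loss weights, and names are assumptions, not the EMT-NET architecture."""
import torch
import torch.nn as nn

class TinyMultiTaskNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Shared backbone: its features are pushed toward tumor regions
        # because the segmentation loss also backpropagates through it.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        # Segmentation head: per-pixel tumor-mask logits.
        self.seg_head = nn.Conv2d(32, 1, kernel_size=1)
        # Classification head: benign vs. malignant logits.
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )

    def forward(self, x):
        feats = self.backbone(x)
        return self.cls_head(feats), self.seg_head(feats)

# One joint training step on dummy data (batch of 4 single-channel 128x128 images).
model = TinyMultiTaskNet()
images = torch.randn(4, 1, 128, 128)
labels = torch.randint(0, 2, (4,))
masks = torch.randint(0, 2, (4, 1, 128, 128)).float()

cls_logits, seg_logits = model(images)
loss = (nn.CrossEntropyLoss()(cls_logits, labels)
        + 0.5 * nn.BCEWithLogitsLoss()(seg_logits, masks))  # weighting is arbitrary here
loss.backward()
```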
arXiv Detail & Related papers (2022-01-13T05:24:40Z)
- Ensemble Transfer Learning of Elastography and B-mode Breast Ultrasound Images [3.3615086420912745]
We present an ensemble transfer learning model to classify benign and malignant breast tumors.
This model combines semantic features from AlexNet & ResNet models to distinguish benign from malignant tumors.
Experimental results show that our ensemble model achieves a sensitivity of 88.89% and specificity of 91.10%.
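A minimal sketch of the general feature-fusion idea in this entry follows: concatenate feature vectors from two pretrained backbones and classify the combined representation. The specific backbones, feature dimensions, and single linear head here are assumptions rather than the paper's exact ensemble.

```python
"""Illustrative sketch only: fuse features from two pretrained backbones
(AlexNet and ResNet-18 here) and classify the concatenated vector.
The backbone choice, feature dimensions, and classifier are assumptions."""
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained backbones (one-time weight download; pass weights=None to skip).
alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Strip the original classifiers so each backbone outputs a feature vector.
alexnet.classifier = nn.Identity()   # -> 256 * 6 * 6 = 9216 features
resnet.fc = nn.Identity()            # -> 512 features

class FusionClassifier(nn.Module):
    def __init__(self, backbone_a: nn.Module, backbone_b: nn.Module,
                 feat_dim: int = 9216 + 512, num_classes: int = 2):
        super().__init__()
        self.backbone_a = backbone_a
        self.backbone_b = backbone_b
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        # Concatenate the two semantic feature vectors, then classify jointly.
        fused = torch.cat([self.backbone_a(x), self.backbone_b(x)], dim=1)
        return self.head(fused)

model = FusionClassifier(alexnet, resnet)
logits = model(torch.randn(2, 3, 224, 224))   # dummy batch of two RGB images
print(logits.shape)                           # torch.Size([2, 2])
```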
arXiv Detail & Related papers (2021-02-17T04:23:30Z)
- Detection of masses and architectural distortions in digital breast tomosynthesis: a publicly available dataset of 5,060 patients and a deep learning model [4.3359550072619255]
We have curated and made publicly available a large-scale dataset of digital breast tomosynthesis images.
It contains 22,032 reconstructed volumes belonging to 5,610 studies from 5,060 patients.
We developed a single-phase deep learning detection model and tested it using our dataset to serve as a baseline for future research.
arXiv Detail & Related papers (2020-11-13T18:33:31Z)
- Deep-LIBRA: Artificial intelligence method for robust quantification of breast density with independent validation in breast cancer risk assessment [2.0369879867185143]
Current federal legislation mandates reporting of breast density for all women undergoing breast screening.
We introduce an artificial intelligence (AI) method to estimate breast percentage density (PD) from digital mammograms.
arXiv Detail & Related papers (2020-11-13T15:21:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.