Integrating AI for Human-Centric Breast Cancer Diagnostics: A Multi-Scale and Multi-View Swin Transformer Framework
- URL: http://arxiv.org/abs/2503.13309v1
- Date: Mon, 17 Mar 2025 15:48:56 GMT
- Title: Integrating AI for Human-Centric Breast Cancer Diagnostics: A Multi-Scale and Multi-View Swin Transformer Framework
- Authors: Farnoush Bayatmakou, Reza Taleei, Milad Amir Toutounchian, Arash Mohammadi
- Abstract summary: The paper focuses on the integration of AI within a Human-Centric workflow to enhance breast cancer diagnostics. We propose a hybrid, multi-scale and multi-view Swin Transformer-based framework (MSMV-Swin) that enhances diagnostic robustness and accuracy.
- Score: 5.211860566766601
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite advancements in Computer-Aided Diagnosis (CAD) systems, breast cancer remains one of the leading causes of cancer-related deaths among women worldwide. Recent breakthroughs in Artificial Intelligence (AI) have shown significant promise in the development of advanced Deep Learning (DL) architectures for breast cancer diagnosis through mammography. In this context, the paper focuses on the integration of AI within a Human-Centric workflow to enhance breast cancer diagnostics. Key challenges, however, remain largely overlooked, such as the reliance on detailed tumor annotations and susceptibility to missing views, particularly at test time. To address these issues, we propose a hybrid, multi-scale and multi-view Swin Transformer-based framework (MSMV-Swin) that enhances diagnostic robustness and accuracy. The proposed MSMV-Swin framework is designed to work as a decision-support tool, helping radiologists analyze multi-view mammograms more effectively. More specifically, the MSMV-Swin framework leverages the Segment Anything Model (SAM) to isolate the breast lobe, reducing background noise and enabling comprehensive feature extraction. The multi-scale nature of the proposed MSMV-Swin framework accounts for tumor-specific regions as well as the spatial characteristics of tissues surrounding the tumor, capturing both localized and contextual information. The integration of contextual and localized data ensures that MSMV-Swin's outputs align with the way radiologists interpret mammograms, fostering better human-AI interaction and trust. A hybrid fusion structure is then designed to ensure robustness against missing views, a common occurrence in clinical practice when only a single mammogram view is available.
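As a rough, non-authoritative sketch of the pipeline the abstract outlines, the code below strings together the three ingredients it names: a SAM-derived breast mask (abstracted here as a precomputed mask already applied to the input), a Swin backbone run at a global and a tumor-local scale, and a simple fusion that tolerates a missing view. The torchvision `swin_t` backbone, the 256-dimensional embedding, and the masked-average fusion are placeholders chosen for the example, not details taken from the paper.

```python
# Illustrative sketch only; MSMV-Swin's actual backbone and hybrid fusion are not reproduced.
import torch
import torch.nn as nn
from torchvision.models import swin_t


class ViewEncoder(nn.Module):
    """Encodes one mammogram view at two scales: full breast and tumor crop."""

    def __init__(self, embed_dim=256):
        super().__init__()
        backbone = swin_t(weights=None)
        backbone.head = nn.Identity()          # expose the 768-d Swin-T features
        self.backbone = backbone
        self.proj = nn.Linear(768 * 2, embed_dim)

    def forward(self, full_view, local_crop):
        # full_view / local_crop: (B, 3, 224, 224), already multiplied by a
        # SAM-derived breast mask so the background is suppressed.
        f_global = self.backbone(full_view)    # contextual features
        f_local = self.backbone(local_crop)    # tumor-region features
        return self.proj(torch.cat([f_global, f_local], dim=-1))


class MSMVClassifier(nn.Module):
    """Fuses CC and MLO view embeddings; tolerates a missing view at test time."""

    def __init__(self, embed_dim=256, num_classes=2):
        super().__init__()
        self.encoder = ViewEncoder(embed_dim)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, views):
        # views: dict mapping view name -> (full_view, local_crop) tuple, or None
        # if the view is unavailable. At least one view is assumed present.
        embeddings = [self.encoder(*pair) for pair in views.values() if pair is not None]
        fused = torch.stack(embeddings, dim=0).mean(dim=0)  # masked average over views
        return self.head(fused)


if __name__ == "__main__":
    model = MSMVClassifier()
    x = torch.randn(1, 3, 224, 224)
    print(model({"CC": (x, x), "MLO": (x, x)}).shape)  # both views: torch.Size([1, 2])
    print(model({"CC": (x, x), "MLO": None}).shape)    # MLO missing at test time
```

The masked average is only one way to survive a missing view; the abstract does not specify the hybrid fusion structure, so a faithful implementation would need the full method.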
Related papers
- Augmented Intelligence for Multimodal Virtual Biopsy in Breast Cancer Using Generative Artificial Intelligence [0.8714814768600079]
Full-Field Digital Mammography (FFDM) is the primary imaging modality for routine breast cancer screening. Contrast-Enhanced Spectral Mammography (CESM), a second-level imaging technique, offers enhanced accuracy in tumor detection, but it is typically reserved for select cases, leaving many patients to rely solely on FFDM. We introduce a multimodal, multi-view deep learning approach for virtual biopsy.
arXiv Detail & Related papers (2025-01-31T14:41:17Z)
- Enhanced MRI Representation via Cross-series Masking [48.09478307927716]
The paper proposes a Cross-Series Masking (CSM) strategy for effectively learning MRI representations in a self-supervised manner. The method achieves state-of-the-art performance on both public and in-house datasets.
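The summary does not spell out how the masking works; as a purely hypothetical reading of "cross-series masking", the toy example below hides patches of one MRI series and trains a small network to reconstruct them from the remaining series. The three-series (T1/T2/DWI) setup, patch size, and architecture are assumptions made for illustration, not the paper's design.

```python
# Rough illustration of a cross-series masking idea (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


def random_patch_mask(x, patch=16, mask_ratio=0.5):
    """Return a (B, 1, H, W) binary mask hiding ~mask_ratio of the patches."""
    b, _, h, w = x.shape
    grid = torch.rand(b, 1, h // patch, w // patch, device=x.device) > mask_ratio
    return F.interpolate(grid.float(), size=(h, w), mode="nearest")


class CrossSeriesAutoencoder(nn.Module):
    def __init__(self, n_series=3, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_series, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, 1, 3, padding=1),       # reconstruct one target series
        )

    def forward(self, series):                        # series: (B, n_series, H, W)
        return self.net(series)


if __name__ == "__main__":
    t1, t2, dwi = (torch.randn(2, 1, 128, 128) for _ in range(3))
    mask = random_patch_mask(t1)                      # hide patches of T1 only
    inp = torch.cat([t1 * mask, t2, dwi], dim=1)
    model = CrossSeriesAutoencoder()
    recon = model(inp)
    # Self-supervised objective: reconstruct the hidden T1 patches from context.
    loss = F.mse_loss(recon * (1 - mask), t1 * (1 - mask))
    loss.backward()
    print(loss.item())
```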
arXiv Detail & Related papers (2024-12-10T10:32:09Z)
- A Large Model for Non-invasive and Personalized Management of Breast Cancer from Multiparametric MRI [19.252851972152957]
We develop a large mixture-of-modality-experts model (MOME) that integrates multi-parametric MRI information within a unified structure.
MOME matches four senior radiologists' performance in identifying breast cancer and outperforms a junior radiologist.
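For readers unfamiliar with the mixture-of-experts pattern this entry refers to, the sketch below shows a generic mixture-of-modality-experts: one small expert per MRI series and a learned gate that mixes their features. It is not MOME's architecture; the experts, gate, and modality names are placeholders.

```python
# Generic mixture-of-modality-experts sketch; MOME's real design is not reproduced.
import torch
import torch.nn as nn


class ModalityExpert(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, dim),
        )

    def forward(self, x):
        return self.net(x)


class MixtureOfModalityExperts(nn.Module):
    def __init__(self, modalities=("T1", "T2", "DWI"), dim=128, num_classes=2):
        super().__init__()
        self.experts = nn.ModuleDict({m: ModalityExpert(dim) for m in modalities})
        self.gate = nn.Linear(dim * len(modalities), len(modalities))
        self.head = nn.Linear(dim, num_classes)

    def forward(self, inputs):                        # inputs: dict name -> (B, 1, H, W)
        feats = [self.experts[m](inputs[m]) for m in self.experts]   # one expert per modality
        stacked = torch.stack(feats, dim=1)                          # (B, M, D)
        weights = torch.softmax(self.gate(stacked.flatten(1)), dim=-1)
        fused = (weights.unsqueeze(-1) * stacked).sum(dim=1)         # gated mixture
        return self.head(fused)


if __name__ == "__main__":
    model = MixtureOfModalityExperts()
    batch = {m: torch.randn(4, 1, 64, 64) for m in ("T1", "T2", "DWI")}
    print(model(batch).shape)                         # torch.Size([4, 2])
```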
arXiv Detail & Related papers (2024-08-08T05:04:13Z)
- Advancing Histopathology-Based Breast Cancer Diagnosis: Insights into Multi-Modality and Explainability [2.8145472964232137]
The use of multi-modal techniques, integrating both image and non-image data, marks a transformative advancement in breast cancer diagnosis.
This review emphasizes multi-modal data integration and explainability to enhance diagnostic accuracy, clinician confidence, and patient engagement.
arXiv Detail & Related papers (2024-06-07T19:23:22Z)
- Breast Cancer Diagnosis: A Comprehensive Exploration of Explainable Artificial Intelligence (XAI) Techniques [38.321248253111776]
The article explores the application of Explainable Artificial Intelligence (XAI) techniques in the detection and diagnosis of breast cancer.
It aims to highlight the potential of XAI in bridging the gap between complex AI models and practical healthcare applications.
arXiv Detail & Related papers (2024-06-01T18:50:03Z)
- Mask-Enhanced Segment Anything Model for Tumor Lesion Semantic Segmentation [48.107348956719775]
We introduce Mask-Enhanced SAM (M-SAM), an innovative architecture tailored for 3D tumor lesion segmentation.
We propose a novel Mask-Enhanced Adapter (MEA) within M-SAM that enriches the semantic information of medical images with positional data from coarse segmentation masks.
Our M-SAM achieves high segmentation accuracy and also exhibits robust generalization.
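The adapter idea described here, injecting a coarse segmentation mask as an extra feature stream next to image features, can be illustrated in a few lines; the 2D toy below is only that illustration, whereas M-SAM itself operates on 3D lesions inside SAM's architecture, neither of which is reproduced.

```python
# Hedged sketch of a "mask-enhanced adapter" pattern, not M-SAM's MEA module.
import torch
import torch.nn as nn


class MaskEnhancedAdapter(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        self.mask_encoder = nn.Sequential(             # turn the coarse mask into features
            nn.Conv2d(1, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1),
        )
        self.fuse = nn.Conv2d(feat_dim * 2, feat_dim, 1)

    def forward(self, image_feats, coarse_mask):
        # image_feats: (B, C, H, W); coarse_mask: (B, 1, H, W) in [0, 1]
        mask_feats = self.mask_encoder(coarse_mask)
        fused = self.fuse(torch.cat([image_feats, mask_feats], dim=1))
        return image_feats + fused                      # residual refinement


if __name__ == "__main__":
    adapter = MaskEnhancedAdapter()
    feats = torch.randn(2, 64, 32, 32)
    mask = (torch.rand(2, 1, 32, 32) > 0.5).float()
    print(adapter(feats, mask).shape)                   # torch.Size([2, 64, 32, 32])
```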
arXiv Detail & Related papers (2024-03-09T13:37:02Z)
- Intelligent Breast Cancer Diagnosis with Heuristic-assisted Trans-Res-U-Net and Multiscale DenseNet using Mammogram Images [0.0]
Breast cancer (BC) significantly contributes to cancer-related mortality in women.
However, accurately distinguishing malignant mass lesions remains challenging.
We propose a novel deep learning approach for BC screening utilizing mammography images.
arXiv Detail & Related papers (2023-10-30T10:22:14Z)
- Post-Hoc Explainability of BI-RADS Descriptors in a Multi-task Framework for Breast Cancer Detection and Segmentation [48.08423125835335]
MT-BI-RADS is a novel explainable deep learning approach for tumor detection in Breast Ultrasound (BUS) images.
It offers three levels of explanations to enable radiologists to comprehend the decision-making process in predicting tumor malignancy.
arXiv Detail & Related papers (2023-08-27T22:07:42Z)
- MammoDG: Generalisable Deep Learning Breaks the Limits of Cross-Domain Multi-Center Breast Cancer Screening [4.587250201300601]
Mammography analysis poses challenges due to the high variability of patterns in mammograms.
MammoDG is a novel deep-learning framework for generalisable and reliable analysis of cross-domain multi-center mammography data.
arXiv Detail & Related papers (2023-08-02T10:10:22Z)
- EMT-NET: Efficient multitask network for computer-aided diagnosis of breast cancer [58.720142291102135]
We propose an efficient and lightweight learning architecture to classify and segment breast tumors simultaneously.
We incorporate a segmentation task into a tumor classification network, which makes the backbone network learn representations focused on tumor regions.
The accuracy, sensitivity, and specificity of tumor classification are 88.6%, 94.1%, and 85.3%, respectively.
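The mechanism this entry describes, a shared backbone feeding both a classification head and a segmentation head so that classification features concentrate on tumor regions, is easy to sketch generically; the example below is such a generic sketch, not EMT-NET's lightweight architecture or its loss weighting.

```python
# Generic shared-backbone multitask sketch (classification + segmentation).
import torch
import torch.nn as nn


class MultiTaskNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(                   # shared backbone
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes))
        self.seg_head = nn.Sequential(                  # upsample back to input size
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 2, stride=2))

    def forward(self, x):
        feats = self.encoder(x)
        return self.cls_head(feats), self.seg_head(feats)


if __name__ == "__main__":
    model = MultiTaskNet()
    images = torch.randn(4, 1, 128, 128)
    labels = torch.randint(0, 2, (4,))
    masks = torch.randint(0, 2, (4, 1, 128, 128)).float()
    logits, seg = model(images)
    # Joint objective: the segmentation loss pushes shared features toward tumor regions.
    loss = (nn.functional.cross_entropy(logits, labels)
            + nn.functional.binary_cross_entropy_with_logits(seg, masks))
    loss.backward()
    print(logits.shape, seg.shape)   # torch.Size([4, 2]) torch.Size([4, 1, 128, 128])
```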
arXiv Detail & Related papers (2022-01-13T05:24:40Z)
- Learned super resolution ultrasound for improved breast lesion characterization [52.77024349608834]
Super resolution ultrasound localization microscopy enables imaging of the microvasculature at the capillary level.
In this work, we use a deep neural network architecture that makes effective use of signal structure to address these challenges.
By leveraging our trained network, the microvasculature structure is recovered in a short time, without prior PSF knowledge and without requiring separability of the ultrasound contrast agents (UCAs).
arXiv Detail & Related papers (2021-07-12T09:04:20Z)
- Act Like a Radiologist: Towards Reliable Multi-view Correspondence Reasoning for Mammogram Mass Detection [49.14070210387509]
We propose an Anatomy-aware Graph convolutional Network (AGN) tailored for mammogram mass detection.
AGN endows existing detection methods with multi-view reasoning ability.
Experiments on two standard benchmarks reveal that AGN significantly exceeds the state-of-the-art performance.
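To make the multi-view reasoning concrete, the toy below treats region features from the CC and MLO views as graph nodes, links assumed corresponding regions with a fixed adjacency, and applies one graph-convolution step. AGN's anatomy-aware graph construction is far richer; this only shows the shape of the computation, and the one-to-one region correspondence is an assumption for the example.

```python
# Toy cross-view graph reasoning step; not AGN's actual graph construction.
import torch
import torch.nn as nn


class SimpleGraphConv(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, nodes, adj):
        # nodes: (B, N, D); adj: (N, N) row-normalized adjacency
        return torch.relu(self.linear(adj @ nodes))


if __name__ == "__main__":
    n_regions, dim = 4, 32
    cc_nodes = torch.randn(2, n_regions, dim)        # region features from the CC view
    mlo_nodes = torch.randn(2, n_regions, dim)       # region features from the MLO view
    nodes = torch.cat([cc_nodes, mlo_nodes], dim=1)  # (B, 2N, D)

    # Assumed one-to-one correspondence: region i in CC <-> region i in MLO.
    adj = torch.eye(2 * n_regions)
    for i in range(n_regions):
        adj[i, n_regions + i] = adj[n_regions + i, i] = 1.0
    adj = adj / adj.sum(dim=1, keepdim=True)         # row-normalize

    gcn = SimpleGraphConv(dim)
    print(gcn(nodes, adj).shape)                     # torch.Size([2, 8, 32])
```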
arXiv Detail & Related papers (2021-05-21T06:48:34Z)
- STAN: Small tumor-aware network for breast ultrasound image segmentation [68.8204255655161]
We propose a novel deep learning architecture called Small Tumor-Aware Network (STAN) to improve the performance of segmenting tumors of different sizes.
The proposed approach outperformed the state-of-the-art approaches in segmenting small breast tumors.
arXiv Detail & Related papers (2020-02-03T22:25:01Z)