TransReg: Cross-transformer as auto-registration module for multi-view mammogram mass detection
- URL: http://arxiv.org/abs/2311.05192v1
- Date: Thu, 9 Nov 2023 08:08:12 GMT
- Title: TransReg: Cross-transformer as auto-registration module for multi-view mammogram mass detection
- Authors: Hoang C. Nguyen, Chi Phan, Hieu H. Pham
- Abstract summary: We present TransReg, a Computer-Aided Detection (CAD) system designed to exploit the relationship between the craniocaudal (CC) and mediolateral oblique (MLO) views.
The system includes a cross-transformer to model the relationship between the regions of interest (RoIs) extracted by a siamese Faster R-CNN network for the mass detection problem.
Our work is the first to integrate a cross-transformer into an object detection framework to model the relation between ipsilateral views.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Screening mammography is the most widely used method for early breast cancer detection, significantly reducing mortality rates. Integrating information from multi-view mammograms enhances radiologists' confidence and reduces false-positive rates, since they can examine the dual views of the same breast to cross-reference the existence and location of a lesion. Inspired by this, we present TransReg, a Computer-Aided Detection (CAD) system designed to exploit the relationship between the craniocaudal (CC) and mediolateral oblique (MLO) views. The system includes a cross-transformer to model the relationship between the regions of interest (RoIs) extracted by a siamese Faster R-CNN network for the mass detection problem. Our work is the first to integrate a cross-transformer into an object detection framework to model the relation between ipsilateral views. Our experimental evaluation on the DDSM and VinDr-Mammo datasets shows that TransReg, equipped with SwinT as a feature extractor, achieves state-of-the-art performance. Specifically, at a false-positive rate of 0.5 per image, TransReg using SwinT reaches a recall of 83.3% on the DDSM dataset and 79.7% on the VinDr-Mammo dataset. Furthermore, we conduct a comprehensive analysis demonstrating that the cross-transformer can function as an auto-registration module, aligning masses across the dual views and using this alignment to inform the final predictions, replicating the diagnostic workflow of expert radiologists.
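As a rough illustration of the mechanism the abstract describes, the following minimal PyTorch sketch shows a cross-transformer operating on RoI features from the two views. It is not the authors' implementation: the single-block design, the module sizes, and all names are assumptions; only the core idea, cross-attention between CC and MLO RoI features with the attention map acting as a soft registration, comes from the abstract.

```python
import torch
import torch.nn as nn

class CrossViewBlock(nn.Module):
    """Hypothetical cross-transformer block between ipsilateral views."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, q_rois, kv_rois):
        # Queries come from one view, keys/values from the ipsilateral view;
        # the attention map acts as a soft registration between the views.
        attended, weights = self.attn(q_rois, kv_rois, kv_rois)
        x = self.norm1(q_rois + attended)
        return self.norm2(x + self.ffn(x)), weights

# Stand-ins for RoI features pooled by a weight-shared (siamese) Faster R-CNN:
cc_rois = torch.randn(1, 100, 256)   # (batch, num RoIs, feature dim), CC view
mlo_rois = torch.randn(1, 100, 256)  # same breast, MLO view

block = CrossViewBlock()
cc_refined, cc_to_mlo = block(cc_rois, mlo_rois)   # CC enriched with MLO context
mlo_refined, mlo_to_cc = block(mlo_rois, cc_rois)  # and vice versa
# cc_to_mlo has shape (1, 100, 100): per CC proposal, the weight it places on
# each MLO proposal. Inspecting these weights is the "auto-registration"
# behavior analyzed in the paper; the refined features feed the detection heads.
```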
Related papers
- Features Fusion for Dual-View Mammography Mass Detection [1.5146068448101746]
We propose a new model called MAMM-Net, which allows the processing of both mammography views simultaneously.
Our experiments show superior performance on the public DDSM dataset compared to the previous state-of-the-art model.
arXiv Detail & Related papers (2024-04-25T16:30:30Z)
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset consisting of unseen synthetic data and images collected from silicone aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z)
- SDR-Former: A Siamese Dual-Resolution Transformer for Liver Lesion Classification Using 3D Multi-Phase Imaging [59.78761085714715]
This study proposes a novel Siamese Dual-Resolution Transformer (SDR-Former) framework for liver lesion classification.
The proposed framework has been validated through comprehensive experiments on two clinical datasets.
To support the scientific community, we are releasing our extensive multi-phase MR dataset for liver lesion analysis to the public.
arXiv Detail & Related papers (2024-02-27T06:32:56Z)
- MV-Swin-T: Mammogram Classification with Multi-view Swin Transformer [0.257133335028485]
We propose an innovative multi-view network based on transformers to address challenges in mammographic image classification.
Our approach introduces a novel shifted window-based dynamic attention block, facilitating the effective integration of multi-view information.
arXiv Detail & Related papers (2024-02-26T04:41:04Z)
- Intelligent Breast Cancer Diagnosis with Heuristic-assisted Trans-Res-U-Net and Multiscale DenseNet using Mammogram Images [0.0]
Breast cancer (BC) significantly contributes to cancer-related mortality in women.
However, accurately distinguishing malignant mass lesions remains challenging.
We propose a novel deep learning approach for BC screening utilizing mammography images.
arXiv Detail & Related papers (2023-10-30T10:22:14Z)
- View-Disentangled Transformer for Brain Lesion Detection [50.4918615815066]
We propose a novel view-disentangled transformer to enhance the extraction of MRI features for more accurate tumour detection.
First, the proposed transformer harvests long-range correlation among different positions in a 3D brain scan.
Second, the transformer models a stack of slice features as multiple 2D views and enhances these features view-by-view.
Third, we deploy the proposed transformer module in a transformer backbone, which can effectively detect the 2D regions surrounding brain lesions.
arXiv Detail & Related papers (2022-09-20T11:58:23Z)
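One plausible reading of the view-by-view enhancement described above, sketched in PyTorch: the 3D feature volume is resliced into axial, coronal, and sagittal stacks, each slice is pooled to a single token (a simplification), and self-attention enhances each stack in turn. The pooling step, tensor sizes, and module names are assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class SliceStackAttention(nn.Module):
    """Self-attention across the slices of one 2D view stack."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, slices):               # (batch, n_slices, dim)
        out, _ = self.attn(slices, slices, slices)
        return slices + out                  # residual enhancement

# A 3D scan's feature volume, pooled to one token per slice along each axis:
vol = torch.randn(2, 32, 32, 32, 64)         # (batch, D, H, W, C)
axial = vol.mean(dim=(2, 3))                 # (batch, D, C)
coronal = vol.mean(dim=(1, 3))               # (batch, H, C)
sagittal = vol.mean(dim=(1, 2))              # (batch, W, C)

block = SliceStackAttention()
enhanced = [block(v) for v in (axial, coronal, sagittal)]  # view-by-view
```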
- Check and Link: Pairwise Lesion Correspondence Guides Mammogram Mass Detection [26.175654159429943]
CL-Net is proposed to learn lesion detection and pairwise correspondence in an end-to-end manner.
CL-Net achieves precise understanding of pairwise lesion correspondences.
It outperforms previous methods by a large margin in the low-FPI (false positives per image) regime.
arXiv Detail & Related papers (2022-09-13T08:26:07Z)
- Focused Decoding Enables 3D Anatomical Detection by Transformers [64.36530874341666]
We propose a novel Detection Transformer for 3D anatomical structure detection, dubbed Focused Decoder.
Focused Decoder leverages information from an anatomical region atlas to simultaneously deploy query anchors and restrict the cross-attention's field of view.
We evaluate our proposed approach on two publicly available CT datasets and demonstrate that Focused Decoder not only provides strong detection results, alleviating the need for vast amounts of annotated data, but also exhibits exceptional and highly intuitive explainability via attention weights.
arXiv Detail & Related papers (2022-07-21T22:17:21Z)
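A hedged sketch of the attention restriction named above: query anchors attend only to feature tokens inside an atlas-derived region, implemented here with a boolean attention mask. The random mask stands in for a real atlas lookup, and all shapes are invented for illustration.

```python
import torch
import torch.nn as nn

attn = nn.MultiheadAttention(embed_dim=128, num_heads=4, batch_first=True)

feat = torch.randn(1, 196, 128)    # flattened 3D feature tokens (illustrative)
queries = torch.randn(1, 8, 128)   # one query anchor per anatomical structure

# Boolean mask, True = blocked: each query only sees tokens inside its atlas
# region. A random mask stands in for the real atlas-derived one here.
outside_region = torch.rand(8, 196) > 0.3

out, weights = attn(queries, feat, feat, attn_mask=outside_region)
# `weights` is zero outside the permitted region, which is what makes the
# resulting detections easy to explain by looking at the attention maps.
```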
- Transformers Improve Breast Cancer Diagnosis from Unregistered Multi-View Mammograms [6.084894198369222]
We leverage the architecture of Multi-view Vision Transformers to capture long-range relationships of multiple mammograms from the same patient in one examination.
Our four-image (two-view-two-side) Transformer-based model achieves case classification with an area under the ROC curve of 0.818.
It also outperforms two one-view-two-side models, which achieve AUCs of 0.724 (CC view) and 0.769 (MLO view).
arXiv Detail & Related papers (2022-06-21T03:54:21Z)
- High-Performance Transformer Tracking [74.07751002861802]
We present a Transformer tracking method (named TransT) based on a Siamese-like feature extraction backbone, an attention-based fusion mechanism, and a classification and regression head.
Experiments show that our TransT and TransT-M methods achieve promising results on seven popular datasets.
arXiv Detail & Related papers (2022-03-25T09:33:29Z)
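As a loose sketch of the attention-based fusion named above (a simplification, not the TransT code): search-region tokens attend to template tokens, and lightweight heads read classification scores and box coordinates off the fused tokens. Token counts and head designs are placeholders.

```python
import torch
import torch.nn as nn

fuse = nn.MultiheadAttention(embed_dim=256, num_heads=8, batch_first=True)
cls_head = nn.Linear(256, 2)   # foreground/background score per token
reg_head = nn.Linear(256, 4)   # normalized box coordinates per token

template = torch.randn(1, 64, 256)   # 8x8 grid of template feature tokens
search = torch.randn(1, 256, 256)    # 16x16 grid of search-region tokens

# Search tokens query the template, fusing target appearance into the
# search features before prediction.
fused, _ = fuse(search, template, template)
scores, boxes = cls_head(fused), reg_head(fused)  # (1, 256, 2), (1, 256, 4)
```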
- AlignTransformer: Hierarchical Alignment of Visual Regions and Disease Tags for Medical Report Generation [50.21065317817769]
We propose an AlignTransformer framework, which includes the Align Hierarchical Attention (AHA) and the Multi-Grained Transformer (MGT) modules.
Experiments on the public IU-Xray and MIMIC-CXR datasets show that the AlignTransformer can achieve results competitive with state-of-the-art methods on the two datasets.
arXiv Detail & Related papers (2022-03-18T13:43:53Z)