FECT: Classification of Breast Cancer Pathological Images Based on Fusion Features
- URL: http://arxiv.org/abs/2501.10128v1
- Date: Fri, 17 Jan 2025 11:32:33 GMT
- Title: FECT: Classification of Breast Cancer Pathological Images Based on Fusion Features
- Authors: Jiacheng Hao, Yiqing Liu, Siqi Zeng, Yonghong He
- Abstract summary: We propose a novel breast cancer tissue classification model that fuses features of Edges, Cells, and Tissues (FECT). Our model surpasses current advanced methods in terms of classification accuracy and F1 scores. It also exhibits interpretability and holds promise for significant roles in future clinical applications.
- Score: 1.9356426053533178
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Breast cancer is one of the most common cancers among women globally, with early diagnosis and precise classification being crucial. With the advancement of deep learning and computer vision, the automatic classification of breast tissue pathological images has emerged as a research focus. Existing methods typically rely on singular cell or tissue features and lack design considerations for morphological characteristics of challenging-to-classify categories, resulting in suboptimal classification performance. To address these problems, we propose a novel breast cancer tissue classification model that fuses features of Edges, Cells, and Tissues (FECT), employing the ResMTUNet and an attention-based aggregator to extract and aggregate these features. Extensive testing on the BRACS dataset demonstrates that our model surpasses current advanced methods in terms of classification accuracy and F1 scores. Moreover, due to its feature fusion that aligns with the diagnostic approach of pathologists, our model exhibits interpretability and holds promise for significant roles in future clinical applications.
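The abstract's attention-based aggregator over edge, cell, and tissue features can be sketched as follows. This is a minimal illustration only, not the authors' implementation: it assumes dot-product attention scores over three branch feature vectors, with a hypothetical fixed `query` vector standing in for learned attention parameters.

```python
# Hedged sketch of attention-based feature aggregation (assumption, not
# the FECT code): score each branch's feature vector against a query,
# softmax-normalize the scores, and return the weighted sum of branches.
import math

def softmax(scores):
    # Numerically stable softmax over a list of raw scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def aggregate(features, query):
    # features: equal-length feature vectors, one per branch (edge/cell/tissue)
    # query: attention query vector (stand-in for learned weights)
    scores = [sum(q * f for q, f in zip(query, feat)) for feat in features]
    weights = softmax(scores)
    dim = len(features[0])
    # Weighted sum of the branch features, dimension by dimension.
    return [sum(w * feat[i] for w, feat in zip(weights, features))
            for i in range(dim)]

# Toy 2-D branch features; real features would come from the backbone.
edge, cell, tissue = [1.0, 0.0], [0.0, 1.0], [0.5, 0.5]
fused = aggregate([edge, cell, tissue], query=[1.0, 0.0])
```

The softmax weights let branches contribute in proportion to their relevance to the query, which is the usual motivation for attention-based aggregation over fixed-size branch embeddings.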
Related papers
- Histomorphology-driven multi-instance learning for breast cancer WSI classification [37.70113409383555]
Current whole slide image (WSI) classification methods struggle to effectively incorporate histomorphology information.
We propose a novel framework that explicitly incorporates histomorphology into WSI classification.
arXiv Detail & Related papers (2025-03-23T08:25:29Z)
- Pathological Prior-Guided Multiple Instance Learning For Mitigating Catastrophic Forgetting in Breast Cancer Whole Slide Image Classification [50.899861205016265]
We propose a new framework PaGMIL to mitigate catastrophic forgetting in breast cancer WSI classification.
Our framework introduces two key components into the common MIL model architecture.
We evaluate the continual learning performance of PaGMIL across several public breast cancer datasets.
arXiv Detail & Related papers (2025-03-08T04:51:58Z)
- Multimodal Deep Learning for Subtype Classification in Breast Cancer Using Histopathological Images and Gene Expression Data [0.28675177318965045]
We propose a deep multimodal learning framework to classify breast cancer into BRCA.Luminal and BRCA.Basal/Her2 subtypes.
Our approach employs a ResNet-50 model for image feature extraction and fully connected layers for gene expression processing.
Our findings highlight the potential of deep learning for robust and interpretable breast cancer subtype classification.
arXiv Detail & Related papers (2025-03-04T18:24:33Z)
- Enhanced MRI Representation via Cross-series Masking [48.09478307927716]
We propose a Cross-Series Masking (CSM) strategy for effectively learning MRI representations in a self-supervised manner.
The method achieves state-of-the-art performance on both public and in-house datasets.
arXiv Detail & Related papers (2024-12-10T10:32:09Z)
- Leveraging Medical Foundation Model Features in Graph Neural Network-Based Retrieval of Breast Histopathology Images [1.48419209885019]
We propose a novel attention-based adversarially regularized variational graph autoencoder model for breast histological image retrieval.
Our top-performing model, trained with UNI features, achieved average mAP/mMV scores of 96.7%/91.5% and 97.6%/94.2% for the BreakHis and BACH datasets, respectively.
arXiv Detail & Related papers (2024-05-07T11:24:37Z)
- Single-Cell Deep Clustering Method Assisted by Exogenous Gene Information: A Novel Approach to Identifying Cell Types [50.55583697209676]
We develop an attention-enhanced graph autoencoder, which is designed to efficiently capture the topological features between cells.
During the clustering process, we integrated both sets of information and reconstructed the features of both cells and genes to generate a discriminative representation.
This research offers enhanced insights into the characteristics and distribution of cells, thereby laying the groundwork for early diagnosis and treatment of diseases.
arXiv Detail & Related papers (2023-11-28T09:14:55Z)
- Classification of lung cancer subtypes on CT images with synthetic pathological priors [41.75054301525535]
Cross-scale associations exist in the image patterns between the same case's CT images and its pathological images.
We propose self-generating hybrid feature network (SGHF-Net) for accurately classifying lung cancer subtypes on CT images.
arXiv Detail & Related papers (2023-08-09T02:04:05Z)
- Mapping the landscape of histomorphological cancer phenotypes using self-supervised learning on unlabeled, unannotated pathology slides [9.27127895781971]
Histomorphological Phenotype Learning operates via the automatic discovery of discriminatory image features in small image tiles.
Tiles are grouped into morphologically similar clusters which constitute a library of histomorphological phenotypes.
arXiv Detail & Related papers (2022-05-04T08:06:55Z)
- Mammograms Classification: A Review [0.0]
Mammogram images have been utilized in developing computer-aided diagnosis systems.
Researchers have shown that artificial intelligence and its emerging technologies can be used in the early detection of the disease.
arXiv Detail & Related papers (2022-03-04T19:22:35Z)
- BI-RADS-Net: An Explainable Multitask Learning Approach for Cancer Diagnosis in Breast Ultrasound Images [69.41441138140895]
This paper introduces BI-RADS-Net, a novel explainable deep learning approach for cancer detection in breast ultrasound images.
The proposed approach incorporates tasks for explaining and classifying breast tumors, by learning feature representations relevant to clinical diagnosis.
Explanations of the predictions (benign or malignant) are provided in terms of morphological features that are used by clinicians for diagnosis and reporting in medical practice.
arXiv Detail & Related papers (2021-10-05T19:14:46Z)
- Learned super resolution ultrasound for improved breast lesion characterization [52.77024349608834]
Super resolution ultrasound localization microscopy enables imaging of the microvasculature at the capillary level.
In this work we use a deep neural network architecture that makes effective use of signal structure to address these challenges.
By leveraging our trained network, the microvasculature structure is recovered in a short time, without prior PSF knowledge, and without requiring separability of the UCAs.
arXiv Detail & Related papers (2021-07-12T09:04:20Z)
- G-MIND: An End-to-End Multimodal Imaging-Genetics Framework for Biomarker Identification and Disease Classification [49.53651166356737]
We propose a novel deep neural network architecture to integrate imaging and genetics data, as guided by diagnosis, that provides interpretable biomarkers.
We have evaluated our model on a population study of schizophrenia that includes two functional MRI (fMRI) paradigms and Single Nucleotide Polymorphism (SNP) data.
arXiv Detail & Related papers (2021-01-27T19:28:04Z)
- DenseNet for Breast Tumor Classification in Mammographic Images [0.0]
The aim of this study is to build a deep convolutional neural network method for automatic detection, segmentation, and classification of breast lesions in mammography images.
Based on deep learning, the Mask-CNN (RoIAlign) method was developed for feature selection and extraction, and classification was carried out by a DenseNet architecture.
arXiv Detail & Related papers (2021-01-24T03:30:59Z)
- Attention Model Enhanced Network for Classification of Breast Cancer Image [54.83246945407568]
AMEN is formulated in a multi-branch fashion with a pixel-wise attention model and a classification submodule.
To focus more on subtle detail information, the sample image is enhanced by the pixel-wise attention map generated from the former branch.
Experiments conducted on three benchmark datasets demonstrate the superiority of the proposed method under various scenarios.
arXiv Detail & Related papers (2020-10-07T08:44:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.