Hybrid Deep Learning and Handcrafted Feature Fusion for Mammographic Breast Cancer Classification
- URL: http://arxiv.org/abs/2507.19843v1
- Date: Sat, 26 Jul 2025 07:36:44 GMT
- Title: Hybrid Deep Learning and Handcrafted Feature Fusion for Mammographic Breast Cancer Classification
- Authors: Maximilian Tschuchnig, Michael Gadermayr, Khalifa Djemal
- Abstract summary: We present a hybrid framework combining deep convolutional features from a ResNet-50 backbone with handcrafted descriptors and transformer-based embeddings. Using the CBIS-DDSM dataset, we benchmark our ResNet-50 baseline (AUC: 78.1%) and demonstrate that fusing handcrafted features with deep ResNet-50 and DINOv2 features improves AUC to 79.6%.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automated breast cancer classification from mammography remains a significant challenge due to subtle distinctions between benign and malignant tissue. In this work, we present a hybrid framework combining deep convolutional features from a ResNet-50 backbone with handcrafted descriptors and transformer-based embeddings. Using the CBIS-DDSM dataset, we benchmark our ResNet-50 baseline (AUC: 78.1%) and demonstrate that fusing handcrafted features with deep ResNet-50 and DINOv2 features improves AUC to 79.6%, with a peak recall of 80.5% and the highest F1 score of 67.4% (all under setup d1). Our experiments show that handcrafted features not only complement deep representations but also enhance performance beyond transformer-based embeddings. This hybrid fusion approach achieves results comparable to state-of-the-art methods while maintaining architectural simplicity and computational efficiency, making it a practical and effective solution for clinical decision support.
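The late-fusion idea in the abstract can be sketched in a few lines: extract one feature vector per image from each source, concatenate them, and train a simple classifier on the fused vector. The sketch below uses random synthetic stand-ins for the ResNet-50, DINOv2, and handcrafted features (the real pipeline, feature dimensions, and classifier head are not specified in this summary), so only the fusion mechanics are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
labels = rng.integers(0, 2, size=n)  # benign (0) vs. malignant (1)

# Synthetic stand-ins for the three feature sources; in the paper these
# would come from a ResNet-50 backbone, a DINOv2 encoder, and handcrafted
# descriptors (e.g. texture/shape statistics) computed on each mammogram.
signal = labels[:, None].astype(float)
resnet_feats = rng.normal(size=(n, 2048)) + 0.3 * signal
dino_feats = rng.normal(size=(n, 768)) + 0.3 * signal
handcrafted = rng.normal(size=(n, 64)) + 0.3 * signal

# Late fusion: concatenate the per-image feature vectors along the
# feature axis, then fit a single classifier on the fused representation.
fused = np.concatenate([resnet_feats, dino_feats, handcrafted], axis=1)

X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"fused-feature AUC: {auc:.3f}")
```

Concatenation-based late fusion keeps each feature extractor frozen and independent, which is one way such a design stays architecturally simple and computationally cheap.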
Related papers
- Hybrid deep learning-based strategy for the hepatocellular carcinoma cancer grade classification of H&E stained liver histopathology images [2.833640239679924]
Hepatocellular carcinoma (HCC) is a common type of liver cancer whose early-stage diagnosis is a common challenge. We propose a hybrid deep learning-based architecture that uses transfer learning to extract features from pre-trained convolutional neural network (CNN) models. The proposed hybrid model showed accuracy improvements of 2% and 4% over the pre-trained models on the TCGA-LIHC and KMC databases, respectively.
arXiv Detail & Related papers (2024-12-04T07:26:36Z)
- TopoTxR: A topology-guided deep convolutional network for breast parenchyma learning on DCE-MRIs [49.69047720285225]
We propose a novel topological approach that explicitly extracts multi-scale topological structures to better approximate breast parenchymal structures.
We empirically validate TopoTxR using the VICTRE phantom breast dataset.
Our qualitative and quantitative analyses suggest differential topological behavior of breast tissue in treatment-naïve imaging.
arXiv Detail & Related papers (2024-11-05T19:35:10Z)
- Hybrid-Segmentor: A Hybrid Approach to Automated Fine-Grained Crack Segmentation in Civil Infrastructure [52.2025114590481]
We introduce Hybrid-Segmentor, an encoder-decoder based approach that is capable of extracting both fine-grained local and global crack features.
This allows the model to improve its generalization capabilities in distinguishing cracks of various shapes, surface types, and sizes.
The proposed model outperforms existing benchmark models across 5 quantitative metrics (accuracy 0.971, precision 0.804, recall 0.744, F1-score 0.770, and IoU score 0.630), achieving state-of-the-art status.
arXiv Detail & Related papers (2024-09-04T16:47:16Z)
- Handling Geometric Domain Shifts in Semantic Segmentation of Surgical RGB and Hyperspectral Images [67.66644395272075]
We present the first analysis of state-of-the-art semantic segmentation models when faced with geometric out-of-distribution data.
We propose an augmentation technique called "Organ Transplantation" to enhance generalizability.
Our augmentation technique improves state-of-the-art model performance by up to 67% for RGB data and 90% for HSI data, reaching the level of in-distribution performance on real OOD test data.
arXiv Detail & Related papers (2024-08-27T19:13:15Z)
- Prototype Learning Guided Hybrid Network for Breast Tumor Segmentation in DCE-MRI [58.809276442508256]
We propose a hybrid network combining convolutional neural network (CNN) and transformer layers.
The experimental results on private and public DCE-MRI datasets demonstrate that the proposed hybrid network achieves superior performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2024-08-11T15:46:00Z)
- Exploiting Precision Mapping and Component-Specific Feature Enhancement for Breast Cancer Segmentation and Identification [0.0]
We propose novel Deep Learning (DL) frameworks for breast lesion segmentation and classification. We introduce a precision mapping mechanism (PMM) for a precision mapping and attention-driven LinkNet (PMAD-LinkNet) segmentation framework. We also introduce a component-specific feature enhancement module (CSFEM) for a component-specific feature-enhanced classifier (CSFEC-Net).
arXiv Detail & Related papers (2024-07-03T06:40:26Z)
- A Robust Pipeline for Classification and Detection of Bleeding Frames in Wireless Capsule Endoscopy using Swin Transformer and RT-DETR [1.7499351967216343]
The solution combines the Swin Transformer for the initial classification of bleeding frames with RT-DETR for subsequent detection of bleeding regions.
On the validation set, this approach achieves a classification accuracy of 98.5% compared to 91.7% without any pre-processing.
On the test set, this approach achieves a classification accuracy and F1 score of 87.0% and 89.0% respectively.
arXiv Detail & Related papers (2024-06-12T09:58:42Z)
- ResNet101 and DAE for Enhance Quality and Classification Accuracy in Skin Cancer Imaging [0.0]
We introduce an innovative convolutional ensemble network approach that combines a deep autoencoder (DAE) with ResNet101.
This method utilizes convolution-based deep neural networks for the detection of skin cancer.
The method achieves 96.03% accuracy, 95.40% precision, 96.05% recall, an F-measure of 0.9576, and an AUC of 0.98.
arXiv Detail & Related papers (2024-03-21T09:07:28Z) - Dual-Activated Lightweight Attention ResNet50 for Automatic Histopathology Breast Cancer Image Classification [0.0]
This study introduces a novel method for breast cancer classification, the Dual-Activated Lightweight Attention ResNet50 model.
It integrates a pre-trained ResNet50 model with a lightweight attention mechanism, embedding an attention module in the fourth layer of ResNet50.
The DALAResNet50 method was tested on breast cancer histopathology images from the BreakHis Database across magnification factors of 40X, 100X, 200X, and 400X, achieving accuracies of 98.5%, 98.7%, 97.9%, and 94.3%, respectively.
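The summary above does not specify the attention module's internals, but embedding a lightweight channel-attention block in a ResNet stage is commonly done in the squeeze-and-excitation style. The NumPy sketch below is a generic, hypothetical stand-in with random weights, shown only to illustrate the shape of such a module, not the DALAResNet50 design itself.

```python
import numpy as np

def channel_attention(x, reduction=16, seed=0):
    """Squeeze-and-excitation-style channel attention on an NCHW tensor.

    A generic stand-in for a lightweight attention module embedded in a
    ResNet stage; the bottleneck weights here are random, untrained.
    """
    n, c, h, w = x.shape
    rng = np.random.default_rng(seed)
    w1 = rng.normal(scale=0.1, size=(c, c // reduction))
    w2 = rng.normal(scale=0.1, size=(c // reduction, c))

    squeeze = x.mean(axis=(2, 3))                 # (n, c) global average pool
    hidden = np.maximum(squeeze @ w1, 0.0)        # ReLU bottleneck
    scale = 1.0 / (1.0 + np.exp(-(hidden @ w2)))  # sigmoid gate in (0, 1)
    return x * scale[:, :, None, None]            # reweight each channel map

# A feature map shaped like the output of ResNet-50's fourth stage input.
feat = np.random.default_rng(1).normal(size=(2, 256, 14, 14))
out = channel_attention(feat)
print(out.shape)  # attention preserves the feature-map shape
```

Because the gate is per-channel and the module preserves the feature-map shape, it can be dropped into an existing stage without changing the rest of the backbone.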
arXiv Detail & Related papers (2023-08-25T03:08:41Z)
- Blind Face Restoration: Benchmark Datasets and a Baseline Model [63.053331687284064]
Blind Face Restoration (BFR) aims to construct a high-quality (HQ) face image from its corresponding low-quality (LQ) input.
We first synthesize two blind face restoration benchmark datasets called EDFace-Celeb-1M (BFR128) and EDFace-Celeb-150K (BFR512).
State-of-the-art methods are benchmarked on them under five settings: blur, noise, low resolution, JPEG compression artifacts, and the combination of them (full degradation).
arXiv Detail & Related papers (2022-06-08T06:34:24Z)
- Towards a Competitive End-to-End Speech Recognition for CHiME-6 Dinner Party Transcription [73.66530509749305]
In this paper, we argue that, even in difficult cases, some end-to-end approaches show performance close to the hybrid baseline.
We experimentally compare and analyze CTC-Attention versus RNN-Transducer approaches along with RNN versus Transformer architectures.
Our best end-to-end model, based on RNN-Transducer together with improved beam search, is only 3.8% absolute WER worse than the LF-MMI TDNN-F CHiME-6 Challenge baseline.
arXiv Detail & Related papers (2020-04-22T19:08:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.