Multi-Scale Transformer Architecture for Accurate Medical Image Classification
- URL: http://arxiv.org/abs/2502.06243v1
- Date: Mon, 10 Feb 2025 08:22:25 GMT
- Title: Multi-Scale Transformer Architecture for Accurate Medical Image Classification
- Authors: Jiacheng Hu, Yanlin Xiang, Yang Lin, Junliang Du, Hanchao Zhang, Houze Liu,
- Abstract summary: This study introduces an AI-driven skin lesion classification algorithm built on an enhanced Transformer architecture.
By integrating a multi-scale feature fusion mechanism and refining the self-attention process, the model effectively extracts both global and local features.
Performance evaluation on the ISIC 2017 dataset demonstrates that the improved Transformer surpasses established AI models.
- Score: 4.578375402082224
- Abstract: This study introduces an AI-driven skin lesion classification algorithm built on an enhanced Transformer architecture, addressing the challenges of accuracy and robustness in medical image analysis. By integrating a multi-scale feature fusion mechanism and refining the self-attention process, the model effectively extracts both global and local features, enhancing its ability to detect lesions with ambiguous boundaries and intricate structures. Performance evaluation on the ISIC 2017 dataset demonstrates that the improved Transformer surpasses established AI models, including ResNet50, VGG19, ResNext, and Vision Transformer, across key metrics such as accuracy, AUC, F1-Score, and Precision. Grad-CAM visualizations further highlight the interpretability of the model, showcasing strong alignment between the algorithm's focus areas and actual lesion sites. This research underscores the transformative potential of advanced AI models in medical imaging, paving the way for more accurate and reliable diagnostic tools. Future work will explore the scalability of this approach to broader medical imaging tasks and investigate the integration of multimodal data to enhance AI-driven diagnostic frameworks for intelligent healthcare.
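The abstract's multi-scale feature fusion idea can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the pooling strides, nearest-neighbour upsampling, and fusion-by-concatenation below are illustrative assumptions about how pooled branches at several scales might be combined into a single feature map.

```python
import numpy as np

def pool(feature_map, stride):
    """Average-pool a (H, W, C) feature map by an integer stride."""
    h, w, c = feature_map.shape
    return feature_map[:h - h % stride, :w - w % stride].reshape(
        h // stride, stride, w // stride, stride, c).mean(axis=(1, 3))

def multi_scale_fusion(feature_map, strides=(1, 2, 4)):
    """Pool at several scales, upsample back, and concatenate channel-wise.

    Coarse branches summarize global context; the stride-1 branch keeps
    local detail, so the fused map carries both."""
    h, w, c = feature_map.shape
    fused = []
    for s in strides:
        pooled = pool(feature_map, s)
        # Nearest-neighbour upsample back to the original resolution.
        up = pooled.repeat(s, axis=0).repeat(s, axis=1)
        fused.append(up[:h, :w])
    return np.concatenate(fused, axis=-1)

fmap = np.random.rand(8, 8, 16)
fused = multi_scale_fusion(fmap)
print(fused.shape)  # (8, 8, 48): three scales, 16 channels each
```

The stride-1 branch passes the input through unchanged, so the fused tensor always retains the full-resolution features alongside the coarser context channels.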
Related papers
- Residual Connection Networks in Medical Image Processing: Exploration of ResUnet++ Model Driven by Human Computer Interaction [0.4915744683251151]
This paper introduces ResUnet++, an advanced hybrid model combining ResNet and Unet++.
It is designed to improve tumour detection and localisation while fostering seamless interaction between clinicians and medical imaging systems.
By incorporating HCI principles, the model provides intuitive, real-time feedback, enabling clinicians to visualise and interact with tumour localisation results effectively.
arXiv Detail & Related papers (2024-12-30T04:57:26Z) - Improved EATFormer: A Vision Transformer for Medical Image Classification [0.0]
This paper presents an improved Evolutionary Algorithm-based Transformer architecture for medical image classification using Vision Transformers.
The proposed EATFormer architecture combines the strengths of Convolutional Neural Networks and Vision Transformers.
Experimental results on the Chest X-ray and Kvasir datasets demonstrate that the proposed EATFormer significantly improves prediction speed and accuracy compared to baseline models.
arXiv Detail & Related papers (2024-03-19T21:40:20Z) - QUBIQ: Uncertainty Quantification for Biomedical Image Segmentation Challenge [93.61262892578067]
Uncertainty in medical image segmentation tasks, especially inter-rater variability, presents a significant challenge.
This variability directly impacts the development and evaluation of automated segmentation algorithms.
We report the set-up and summarize the benchmark results of the Quantification of Uncertainties in Biomedical Image Quantification Challenge (QUBIQ).
arXiv Detail & Related papers (2024-03-19T17:57:24Z) - Adapting Visual-Language Models for Generalizable Anomaly Detection in Medical Images [68.42215385041114]
This paper introduces a novel lightweight multi-level adaptation and comparison framework to repurpose the CLIP model for medical anomaly detection.
Our approach integrates multiple residual adapters into the pre-trained visual encoder, enabling a stepwise enhancement of visual features across different levels.
Our experiments on medical anomaly detection benchmarks demonstrate that our method significantly surpasses current state-of-the-art models.
arXiv Detail & Related papers (2024-03-19T09:28:19Z) - Improving Medical Report Generation with Adapter Tuning and Knowledge Enhancement in Vision-Language Foundation Models [26.146579369491718]
This study builds upon the state-of-the-art vision-language pre-training and fine-tuning approach, BLIP-2, to customize general large-scale foundation models.
Validation on the dataset of ImageCLEFmedical 2023 demonstrates our model's prowess, achieving the best-averaged results against several state-of-the-art methods.
arXiv Detail & Related papers (2023-12-07T01:01:45Z) - On Sensitivity and Robustness of Normalization Schemes to Input Distribution Shifts in Automatic MR Image Diagnosis [58.634791552376235]
Deep Learning (DL) models have achieved state-of-the-art performance in diagnosing multiple diseases using reconstructed images as input.
DL models are sensitive to varying artifacts because such artifacts shift the input data distribution between the training and testing phases.
We propose to use other normalization techniques, such as Group Normalization and Layer Normalization, to inject robustness into model performance against varying image artifacts.
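The robustness argument for Group Normalization can be demonstrated in a few lines of NumPy. This sketch is illustrative, not the paper's code: it shows only the defining property that group statistics are computed per sample, so a corrupted image elsewhere in the batch cannot change how a given image is normalized (unlike batch normalization).

```python
import numpy as np

def group_norm(x, num_groups=4, eps=1e-5):
    """Group Normalization for an (N, C, H, W) batch: mean and variance
    are computed per sample over channel groups, never across the batch."""
    n, c, h, w = x.shape
    g = x.reshape(n, num_groups, c // num_groups, h, w)
    mean = g.mean(axis=(2, 3, 4), keepdims=True)
    var = g.var(axis=(2, 3, 4), keepdims=True)
    return ((g - mean) / np.sqrt(var + eps)).reshape(n, c, h, w)

# Per-sample statistics: normalizing one image alone gives the same result
# as normalizing it inside a batch of other (possibly corrupted) images.
x = np.random.rand(4, 8, 5, 5)
solo = group_norm(x[:1])
in_batch = group_norm(x)[:1]
print(np.allclose(solo, in_batch))  # True
```

This batch independence is what the summary above means by injecting robustness against varying image artifacts: the normalization seen at test time matches the one seen during training regardless of batch composition.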
arXiv Detail & Related papers (2023-06-23T03:09:03Z) - Invariant Scattering Transform for Medical Imaging [0.0]
The Invariant Scattering Transform (IST) has become a popular technique for medical image analysis.
IST aims to be invariant to transformations that are common in medical images.
IST can be integrated into machine learning algorithms for disease detection, diagnosis, and treatment planning.
arXiv Detail & Related papers (2023-04-20T18:12:50Z) - MedViT: A Robust Vision Transformer for Generalized Medical Image Classification [4.471084427623774]
We propose a robust yet efficient CNN-Transformer hybrid model that combines the locality of CNNs with the global connectivity of vision Transformers.
Our proposed hybrid model demonstrates its high robustness and generalization ability compared to the state-of-the-art studies on a large-scale collection of standardized MedMNIST-2D datasets.
arXiv Detail & Related papers (2023-02-19T02:55:45Z) - A Data-scalable Transformer for Medical Image Segmentation: Architecture, Model Efficiency, and Benchmark [45.543140413399506]
MedFormer is a data-scalable Transformer designed for generalizable 3D medical image segmentation.
Our approach incorporates three key elements: a desirable inductive bias, hierarchical modeling with linear-complexity attention, and multi-scale feature fusion.
arXiv Detail & Related papers (2022-02-28T22:59:42Z) - Medical Transformer: Gated Axial-Attention for Medical Image Segmentation [73.98974074534497]
We study the feasibility of using Transformer-based network architectures for medical image segmentation tasks.
We propose a Gated Axial-Attention model which extends the existing architectures by introducing an additional control mechanism in the self-attention module.
To train the model effectively on medical images, we propose a Local-Global training strategy (LoGo) which further improves the performance.
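The axial factorization behind this architecture can be sketched compactly. This is a minimal NumPy illustration of plain axial attention only, under the assumption of a single head and shared projection matrices; the paper's gating mechanism and positional encodings are omitted. The point is the cost reduction: attending along one spatial axis at a time replaces full O((HW)^2) attention with two cheaper 1D passes.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def axial_attention(x, wq, wk, wv, axis):
    """Self-attention restricted to one spatial axis of an (H, W, C) map.

    Each row (axis=1) or column (axis=0) is treated as an independent
    sequence, so attention never mixes positions across the other axis."""
    seqs = x if axis == 1 else x.transpose(1, 0, 2)
    q, k, v = seqs @ wq, seqs @ wk, seqs @ wv
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(q.shape[-1])
    out = softmax(scores) @ v
    return out if axis == 1 else out.transpose(1, 0, 2)

h, w, c = 6, 6, 8
rng = np.random.default_rng(0)
x = rng.standard_normal((h, w, c))
wq, wk, wv = (rng.standard_normal((c, c)) * 0.1 for _ in range(3))
# Factorized attention: height axis first, then width axis.
y = axial_attention(axial_attention(x, wq, wk, wv, axis=0), wq, wk, wv, axis=1)
print(y.shape)  # (6, 6, 8)
```

Chaining the two passes lets information propagate across the whole image even though each individual attention step is one-dimensional; the gated variant in the paper additionally learns how much each axial pass should contribute.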
arXiv Detail & Related papers (2021-02-21T18:35:14Z) - Domain Shift in Computer Vision models for MRI data analysis: An Overview [64.69150970967524]
Machine learning and computer vision methods are showing good performance in medical imagery analysis.
Yet only a few applications are now in clinical use.
Poor transferability of the models to data from different sources or acquisition domains is one of the reasons for that.
arXiv Detail & Related papers (2020-10-14T16:34:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.