Efficient Lung Cancer Image Classification and Segmentation Algorithm
Based on Improved Swin Transformer
- URL: http://arxiv.org/abs/2207.01527v1
- Date: Mon, 4 Jul 2022 15:50:06 GMT
- Title: Efficient Lung Cancer Image Classification and Segmentation Algorithm
Based on Improved Swin Transformer
- Authors: Ruina Sun, Yuexin Pang
- Abstract summary: The transformer model has been applied to the field of computer vision (CV) after its success in natural language processing (NLP).
This paper proposes a segmentation method based on an efficient transformer and applies it to medical image analysis.
The algorithm performs lung cancer classification and segmentation on lung cancer data, and aims to provide efficient technical support for medical staff.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the development of computer technology, various models have emerged in
artificial intelligence. The transformer model has been applied to the field of
computer vision (CV) after its success in natural language processing (NLP).
Radiologists continue to face multiple challenges in today's rapidly evolving
medical field, such as increased workloads and growing diagnostic demands.
Although conventional methods for lung cancer detection already exist, their
accuracy still needs to be improved, especially in realistic diagnostic
scenarios. This paper proposes a segmentation method based on an efficient
transformer and applies it to medical image analysis. The algorithm performs
lung cancer classification and segmentation on lung cancer data, and aims to
provide efficient technical support for medical staff. In addition, we
evaluated and compared the results from several perspectives. For the
classification task, the maximum accuracy of Swin-T with regular training and
of pre-trained Swin-B at two resolutions reaches 82.3%. For the segmentation
task, we use pre-training to improve model accuracy; all three models reach
over 95% accuracy. The experiments demonstrate that the algorithm can be
applied effectively to lung cancer classification and segmentation tasks.
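The classification setup described above (Swin-T with regular training versus pre-trained Swin-B at two input resolutions) can be approximated with standard tooling. The sketch below is a minimal, assumption-laden illustration rather than the paper's exact recipe: it fine-tunes ImageNet-pre-trained Swin backbones from torchvision on a hypothetical lung-image folder, and the dataset path, class count, and hyperparameters are all placeholders.

```python
# Minimal sketch (not the paper's exact recipe): fine-tuning torchvision Swin
# backbones for lung cancer image classification. The dataset path, class count,
# and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, transforms
from torchvision.models import swin_t, swin_b, Swin_T_Weights, Swin_B_Weights

NUM_CLASSES = 2   # assumption: e.g. benign vs. malignant
IMG_SIZE = 224    # Swin-T default; Swin-B can also be run at a second resolution

def build_model(variant: str = "swin_t", pretrained: bool = True) -> nn.Module:
    """Load a Swin backbone and swap its classification head for our classes."""
    if variant == "swin_b":
        model = swin_b(weights=Swin_B_Weights.IMAGENET1K_V1 if pretrained else None)
    else:
        model = swin_t(weights=Swin_T_Weights.IMAGENET1K_V1 if pretrained else None)
    model.head = nn.Linear(model.head.in_features, NUM_CLASSES)
    return model

def train_one_epoch(model, loader, optimizer, device):
    criterion = nn.CrossEntropyLoss()
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

if __name__ == "__main__":
    device = "cuda" if torch.cuda.is_available() else "cpu"
    tfm = transforms.Compose([
        transforms.Resize((IMG_SIZE, IMG_SIZE)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
    # "lung_images/train" is a hypothetical ImageFolder-style dataset layout.
    train_set = datasets.ImageFolder("lung_images/train", transform=tfm)
    loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)
    model = build_model("swin_t", pretrained=True).to(device)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    for _ in range(10):  # assumed epoch count
        train_one_epoch(model, loader, optimizer, device)
```

Switching `variant` between `swin_t` and `swin_b` (and adjusting `IMG_SIZE`) mirrors the model and resolution comparison reported in the abstract; the same pre-trained backbone can also serve as the encoder for a segmentation decoder.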
Related papers
- A Foundational Generative Model for Breast Ultrasound Image Analysis [42.618964727896156]
Foundational models have emerged as powerful tools for addressing various tasks in clinical settings.
We present BUSGen, the first foundational generative model specifically designed for breast ultrasound analysis.
With few-shot adaptation, BUSGen can generate repositories of realistic and informative task-specific data.
arXiv Detail & Related papers (2025-01-12T16:39:13Z)
- A CT Image Classification Network Framework for Lung Tumors Based on Pre-trained MobileNetV2 Model and Transfer learning, And Its Application and Market Analysis in the Medical field [0.8249694498830561]
This paper proposes a deep learning network framework based on the pre-trained MobileNetV2 model.
The model achieves an accuracy of 99.6% on the test set, with significant improvements in feature extraction.
The potential of AI to improve diagnostic accuracy, reduce medical costs, and promote precision medicine will have a profound impact on the future development of the healthcare industry.
arXiv Detail & Related papers (2025-01-09T06:22:50Z)
- Lung Disease Detection with Vision Transformers: A Comparative Study of Machine Learning Methods [0.0]
This study explores the application of Vision Transformers (ViT), a state-of-the-art architecture in machine learning, to chest X-ray analysis.
I present a comparative analysis of two ViT-based approaches: one utilizing full chest X-ray images and another focusing on segmented lung regions.
arXiv Detail & Related papers (2024-11-18T08:40:25Z)
- Multi-modal Medical Image Fusion For Non-Small Cell Lung Cancer Classification [7.002657345547741]
Non-small cell lung cancer (NSCLC) is a predominant cause of cancer mortality worldwide.
In this paper, we introduce an innovative integration of multi-modal data, synthesizing fused medical imaging (CT and PET scans) with clinical health records and genomic data.
Our research surpasses existing approaches, as evidenced by a substantial enhancement in NSCLC detection and classification precision.
arXiv Detail & Related papers (2024-09-27T12:59:29Z)
- Parameter-Efficient Methods for Metastases Detection from Clinical Notes [19.540079966780954]
The objective of this study is to automate the detection of metastatic liver disease from free-style computed tomography (CT) radiology reports.
Our research demonstrates that transferring knowledge using three approaches can improve model performance.
arXiv Detail & Related papers (2023-10-27T20:30:59Z)
- Swin-Tempo: Temporal-Aware Lung Nodule Detection in CT Scans as Video Sequences Using Swin Transformer-Enhanced UNet [2.7547288571938795]
We present an innovative model that harnesses the strengths of both convolutional neural networks and vision transformers.
Inspired by object detection in videos, we treat each 3D CT image as a video, individual slices as frames, and lung nodules as objects, enabling a time-series application.
arXiv Detail & Related papers (2023-10-05T07:48:55Z)
- Validated respiratory drug deposition predictions from 2D and 3D medical images with statistical shape models and convolutional neural networks [47.187609203210705]
We aim to develop and validate an automated computational framework for patient-specific deposition modelling.
An image processing approach is proposed that could produce 3D patient respiratory geometries from 2D chest X-rays and 3D CT images.
arXiv Detail & Related papers (2023-03-02T07:47:07Z)
- EMT-NET: Efficient multitask network for computer-aided diagnosis of breast cancer [58.720142291102135]
We propose an efficient and lightweight learning architecture to classify and segment breast tumors simultaneously.
We incorporate a segmentation task into a tumor classification network, which makes the backbone network learn representations focused on tumor regions (a generic sketch of this shared-backbone design appears after this list).
The accuracy, sensitivity, and specificity of tumor classification are 88.6%, 94.1%, and 85.3%, respectively.
arXiv Detail & Related papers (2022-01-13T05:24:40Z)
- Lung Cancer Lesion Detection in Histopathology Images Using Graph-Based Sparse PCA Network [93.22587316229954]
We propose a graph-based sparse principal component analysis (GS-PCA) network for automated detection of cancerous lesions on histological lung slides stained by hematoxylin and eosin (H&E).
We evaluate the performance of the proposed algorithm on H&E slides obtained from an SVM K-rasG12D lung cancer mouse model using precision/recall rates, F-score, Tanimoto coefficient, and the area under the curve (AUC) of the receiver operating characteristic (ROC).
arXiv Detail & Related papers (2021-10-27T19:28:36Z)
- OncoPetNet: A Deep Learning based AI system for mitotic figure counting on H&E stained whole slide digital images in a large veterinary diagnostic lab setting [47.38796928990688]
Multiple state-of-the-art deep learning techniques for histopathology image classification and mitotic figure detection were used in the development of OncoPetNet.
The proposed system demonstrated significantly improved mitotic counting performance for 41 cancer cases across 14 cancer types compared to human expert baselines.
In deployment, an effective 0.27 min/slide inference was achieved in a high throughput veterinary diagnostic service across 2 centers processing 3,323 digital whole slide images daily.
arXiv Detail & Related papers (2021-08-17T20:01:33Z)
- A Multi-Stage Attentive Transfer Learning Framework for Improving COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages to train accurate diagnosis models through learning knowledge from multiple source tasks and data of different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z)
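The EMT-NET entry above describes incorporating a segmentation task into a tumor classification network so that the shared backbone learns tumor-focused representations, and the MobileNetV2 entry relies on a pre-trained lightweight backbone. The sketch below illustrates that general shared-backbone multitask pattern; it is not either paper's actual architecture, and the backbone choice, head shapes, and loss weighting are illustrative assumptions.

```python
# Generic sketch of a shared-backbone multitask model (classification +
# segmentation), in the spirit of the EMT-NET entry above. NOT that paper's
# architecture; the MobileNetV2 backbone, heads, and loss weight are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import mobilenet_v2, MobileNet_V2_Weights

class MultiTaskTumorNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Shared lightweight feature extractor (ImageNet-pre-trained).
        self.backbone = mobilenet_v2(
            weights=MobileNet_V2_Weights.IMAGENET1K_V1).features
        feat_ch = 1280  # channels of MobileNetV2's final feature map
        self.cls_head = nn.Linear(feat_ch, num_classes)       # image-level label
        self.seg_head = nn.Conv2d(feat_ch, 1, kernel_size=1)  # pixel-level mask

    def forward(self, x):
        feats = self.backbone(x)                        # (B, 1280, H/32, W/32)
        logits = self.cls_head(feats.mean(dim=(2, 3)))  # global average pooling
        mask = F.interpolate(self.seg_head(feats), size=x.shape[-2:],
                             mode="bilinear", align_corners=False)
        return logits, mask

def joint_loss(logits, mask, labels, gt_mask, seg_weight: float = 1.0):
    """Classification loss plus weighted segmentation loss (weight is assumed)."""
    return (F.cross_entropy(logits, labels)
            + seg_weight * F.binary_cross_entropy_with_logits(mask, gt_mask))

# Usage with random tensors standing in for real images and annotations.
model = MultiTaskTumorNet()
x = torch.randn(2, 3, 224, 224)
logits, mask = model(x)
loss = joint_loss(logits, mask,
                  labels=torch.tensor([0, 1]),
                  gt_mask=torch.rand(2, 1, 224, 224))
loss.backward()
```

In practice, a segmentation branch usually also taps earlier, higher-resolution feature maps; a single 1x1 head on the final feature map is only the simplest possible variant.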
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all content) and is not responsible for any consequences of its use.