GasHis-Transformer: A Multi-scale Visual Transformer Approach for
Gastric Histopathology Image Classification
- URL: http://arxiv.org/abs/2104.14528v2
- Date: Fri, 30 Apr 2021 01:58:26 GMT
- Title: GasHis-Transformer: A Multi-scale Visual Transformer Approach for
Gastric Histopathology Image Classification
- Authors: Haoyuan Chen, Chen Li, Xiaoyan Li, Weiming Hu, Yixin Li, Wanli Liu,
Changhao Sun, Yudong Yao, Marcin Grzegorzek
- Abstract summary: This paper proposes a multi-scale visual transformer model (GasHis-Transformer) for a gastric histopathology image classification (GHIC) task.
The GasHis-Transformer model is built on two fundamental modules: a global information module (GIM) and a local information module (LIM).
- Score: 30.497184157710873
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing deep learning methods for the intelligent diagnosis of
gastric cancer concentrate on Convolutional Neural Networks (CNN), and no
approaches using the Visual Transformer (VT) are available. VT is an efficient
and stable deep learning model with recent applications in computer vision,
capable of improving the recognition of global information in images. In this
paper, a multi-scale visual transformer model (GasHis-Transformer) is proposed
for a gastric histopathology image classification (GHIC) task, which enables
the automatic classification of abnormal and normal gastric histological images
obtained by optical microscopy, facilitating the work of histopathologists. This
GasHis-Transformer model is built on two fundamental modules: a global
information module (GIM) and a local information module (LIM). In the
experiment, an open-source hematoxylin and eosin (H&E) stained gastric
histopathology dataset of 280 abnormal and normal images is first divided into
training, validation, and test sets at a ratio of 1:1:2. GasHis-Transformer
then obtains a precision, recall, F1-score, and accuracy on the test set of
98.0%, 100.0%, 96.0%, and 98.0%, respectively. Furthermore, a contrast
experiment tests the generalization ability of the proposed GasHis-Transformer
model on a lymphoma image dataset of 374 images and a breast cancer dataset of
1390 images in two extended experiments, achieving accuracies of 83.9% and
89.4%, respectively. Finally, the GasHis-Transformer model demonstrates high
classification performance, showing its effectiveness and great potential in
GHIC tasks.
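As an illustrative sketch only (not the authors' code), the snippet below reproduces the evaluation setup the abstract describes: splitting the 280-image dataset into training, validation, and test sets at a 1:1:2 ratio, and computing the four reported metrics from standard confusion-matrix definitions. The image IDs and counts passed in are placeholders.

```python
import random

def split_dataset(items, ratio=(1, 1, 2), seed=0):
    """Shuffle items and split them into train/val/test by the given ratio."""
    items = list(items)
    random.Random(seed).shuffle(items)  # fixed seed for a reproducible split
    total = sum(ratio)
    n_train = len(items) * ratio[0] // total
    n_val = len(items) * ratio[1] // total
    return items[:n_train], items[n_train:n_train + n_val], items[n_train + n_val:]

def binary_metrics(tp, fp, fn, tn):
    """Precision, recall, F1-score, and accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

# Placeholder image IDs standing in for the 280 H&E images.
images = [f"img_{i:03d}" for i in range(280)]
train, val, test = split_dataset(images)
print(len(train), len(val), len(test))  # 70 70 140
```

With a 1:1:2 ratio, the 280 images yield 70 training, 70 validation, and 140 test images, matching the split described in the abstract.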
Related papers
- Two-stage Cytopathological Image Synthesis for Augmenting Cervical Abnormality Screening [13.569003698448]
Pathological image synthesis naturally arises as a way to minimize the effort of data collection and annotation.
We propose a two-stage image synthesis framework to create synthetic data for augmenting cervical abnormality screening.
Our experiments demonstrate the synthetic image quality, diversity, and controllability of the proposed synthesis framework.
arXiv Detail & Related papers (2024-02-22T17:06:47Z)
- Affine-Consistent Transformer for Multi-Class Cell Nuclei Detection [76.11864242047074]
We propose a novel Affine-Consistent Transformer (AC-Former), which directly yields a sequence of nucleus positions.
We introduce an Adaptive Affine Transformer (AAT) module, which can automatically learn the key spatial transformations to warp original images for local network training.
Experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art algorithms on various benchmarks.
arXiv Detail & Related papers (2023-10-22T02:27:02Z)
- Breast Ultrasound Tumor Classification Using a Hybrid Multitask CNN-Transformer Network [63.845552349914186]
Capturing global contextual information plays a critical role in breast ultrasound (BUS) image classification.
Vision Transformers have an improved capability of capturing global contextual information but may distort the local image patterns due to the tokenization operations.
In this study, we proposed a hybrid multitask deep neural network called Hybrid-MT-ESTAN, designed to perform BUS tumor classification and segmentation.
arXiv Detail & Related papers (2023-08-04T01:19:32Z)
- Performance of GAN-based augmentation for deep learning COVID-19 image classification [57.1795052451257]
The biggest challenge in the application of deep learning to the medical domain is the availability of training data.
Data augmentation is a typical methodology used in machine learning when confronted with a limited data set.
In this work, a StyleGAN2-ADA model of Generative Adversarial Networks is trained on the limited COVID-19 chest X-ray image set.
arXiv Detail & Related papers (2023-04-18T15:39:58Z)
- Gastrointestinal Disorder Detection with a Transformer Based Approach [0.0]
This paper describes a technique for assisting medical diagnosis procedures and identifying gastrointestinal tract disorders based on the categorization of characteristics taken from endoscopic pictures.
We have suggested a vision transformer based approach to detect gastrointestinal diseases from curated wireless capsule endoscopy (WCE) images of the colon with an accuracy of 95.63%.
We have compared this transformer based approach with the pretrained convolutional neural network (CNN) model DenseNet201 and demonstrated that the vision transformer surpassed DenseNet201 on various quantitative performance evaluation metrics.
arXiv Detail & Related papers (2022-10-06T19:08:37Z)
- Texture Characterization of Histopathologic Images Using Ecological Diversity Measures and Discrete Wavelet Transform [82.53597363161228]
This paper proposes a method for characterizing texture across histopathologic images with a considerable success rate.
It is possible to quantify the intrinsic properties of such images with promising accuracy on two HI datasets.
arXiv Detail & Related papers (2022-02-27T02:19:09Z)
- EMT-NET: Efficient multitask network for computer-aided diagnosis of breast cancer [58.720142291102135]
We propose an efficient and light-weighted learning architecture to classify and segment breast tumors simultaneously.
We incorporate a segmentation task into a tumor classification network, which makes the backbone network learn representations focused on tumor regions.
The accuracy, sensitivity, and specificity of tumor classification is 88.6%, 94.1%, and 85.3%, respectively.
arXiv Detail & Related papers (2022-01-13T05:24:40Z)
- Comparison of different CNNs for breast tumor classification from ultrasound images [12.98780709853981]
Classifying benign and malignant tumors from ultrasound (US) imaging is a crucial but challenging task.
We compared different Convolutional Neural Networks (CNNs) and transfer learning methods for the task of automated breast tumor classification.
The best performance was obtained by fine-tuning VGG-16, with an accuracy of 0.919 and an AUC of 0.934.
arXiv Detail & Related papers (2020-12-28T22:54:08Z)
- SAG-GAN: Semi-Supervised Attention-Guided GANs for Data Augmentation on Medical Images [47.35184075381965]
We present a data augmentation method for generating synthetic medical images using cycle-consistent Generative Adversarial Networks (GANs).
The proposed GANs-based model can generate a tumor image from a normal image, and in turn, it can also generate a normal image from a tumor image.
We train classification models using real images with classic data augmentation methods, and separate classification models using synthetic images.
arXiv Detail & Related papers (2020-11-15T14:01:24Z)
- Pyramid Focusing Network for mutation prediction and classification in CT images [2.4440097656693553]
We propose a pyramid focusing network (PFNet) for mutation prediction and classification based on CT images.
Our method achieves the accuracy of 94.90% in predicting the HER-2 genes mutation status at the CT image.
arXiv Detail & Related papers (2020-04-07T12:14:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.