LDCSF: Local depth convolution-based Swim framework for classifying multi-label histopathology images
- URL: http://arxiv.org/abs/2308.10446v1
- Date: Mon, 21 Aug 2023 03:44:54 GMT
- Title: LDCSF: Local depth convolution-based Swim framework for classifying multi-label histopathology images
- Authors: Liangrui Pan, Yutao Dou, Zhichao Feng, Liwen Xu, Shaoliang Peng
- Abstract summary: We propose a locally deep convolutional Swim framework (LDCSF) to classify multi-label histopathology images.
The classification accuracy of LDCSF for interstitial area, necrosis, non-tumor and tumor reached 0.9460, 0.9960, 0.9808, 0.9847, respectively.
- Score: 4.337832783226794
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Histopathological images are the gold standard for diagnosing liver cancer.
However, the accuracy of fully digital diagnosis in computational pathology
needs to be improved. To address the multi-label nature and low classification
accuracy of histopathology images, we propose a locally deep convolutional Swim
framework (LDCSF) for multi-label histopathology image classification. To provide
diagnostic results for a local field of view, the LDCSF consists of a Swin
transformer module, a local depth convolution (LDC) module, a feature
reconstruction (FR) module, and a ResNet module. The Swin transformer module
reduces the computation incurred by the attention mechanism by restricting
attention to local windows. The LDC module then reconstructs the attention map
and performs convolutions over multiple channels, passing the resulting feature
map to the next layer. The FR module computes the dot product of the per-channel
weight coefficient vectors with the original feature map matrix to generate
representative feature maps. Finally, a residual network performs the
classification. The classification accuracy of LDCSF for interstitial area,
necrosis, non-tumor, and tumor reaches 0.9460, 0.9960, 0.9808, and 0.9847,
respectively. We then use the multi-label classification results to calculate
the tumor-to-stroma ratio, which lays the foundation for analyzing the
microenvironment of liver cancer histopathology images. We also release a
multi-label liver cancer histopathology image dataset; our code and data are
available at https://github.com/panliangrui/LSF.
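For concreteness, below is a minimal PyTorch sketch of the pipeline described in the abstract (patch embedding, Swin-style window attention, LDC depth-wise convolution, FR channel re-weighting, and a residual classification head). All module names, tensor shapes, and hyper-parameters are illustrative assumptions rather than the authors' implementation, which is available at https://github.com/panliangrui/LSF.

```python
# Hypothetical sketch of the LDCSF pipeline; shapes and hyper-parameters are assumptions.
import torch
import torch.nn as nn


class WindowAttention(nn.Module):
    """Swin-style attention restricted to non-overlapping windows."""

    def __init__(self, dim: int, window: int = 7, heads: int = 4):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        b, c, h, w = x.shape
        ws = self.window
        # Partition the feature map into ws x ws windows and attend within each window.
        x = x.view(b, c, h // ws, ws, w // ws, ws)
        x = x.permute(0, 2, 4, 3, 5, 1).reshape(-1, ws * ws, c)
        x, _ = self.attn(x, x, x)
        # Merge the windows back into the spatial layout.
        x = x.reshape(b, h // ws, w // ws, ws, ws, c)
        x = x.permute(0, 5, 1, 3, 2, 4).reshape(b, c, h, w)
        return x


class LocalDepthConv(nn.Module):
    """LDC module: re-convolves the attention map channel by channel."""

    def __init__(self, dim: int):
        super().__init__()
        # Depth-wise convolution processes each channel independently.
        self.conv = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(x)


class FeatureReconstruction(nn.Module):
    """FR module: re-weights the feature map with per-channel coefficient vectors."""

    def __init__(self, dim: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).flatten(1)).view(b, c, 1, 1)
        return x * w  # channel-wise weighting of the original feature map


class LDCSF(nn.Module):
    """Swin window attention -> LDC -> FR -> residual classifier head."""

    def __init__(self, in_ch: int = 3, dim: int = 64, num_labels: int = 4):
        super().__init__()
        self.stem = nn.Conv2d(in_ch, dim, kernel_size=4, stride=4)  # patch embedding
        self.swin = WindowAttention(dim)
        self.ldc = LocalDepthConv(dim)
        self.fr = FeatureReconstruction(dim)
        # Stand-in for the ResNet module that performs the final classification.
        self.res_block = nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1), nn.BatchNorm2d(dim), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, padding=1), nn.BatchNorm2d(dim),
        )
        self.head = nn.Linear(dim, num_labels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.stem(x)
        x = self.swin(x)
        x = self.ldc(x)
        x = self.fr(x)
        x = torch.relu(x + self.res_block(x))  # residual connection
        return self.head(x.mean(dim=(2, 3)))   # multi-label logits


# Example: a 224x224 RGB patch yields one logit per label
# (interstitial area, necrosis, non-tumor, tumor).
patch = torch.randn(1, 3, 224, 224)
probs = torch.sigmoid(LDCSF()(patch))
```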
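As a follow-on, here is a small sketch of how the tumor-to-stroma ratio could be derived from per-tile multi-label predictions. The label ordering, the 0.5 threshold, and the tile-count-based ratio definition are assumptions for illustration and may differ from the authors' procedure.

```python
# Hypothetical tumor-to-stroma ratio from per-tile predictions of the model above.
import torch


def tumor_stroma_ratio(tile_probs: torch.Tensor, threshold: float = 0.5) -> float:
    """tile_probs: (num_tiles, 4) sigmoid outputs ordered as
    [interstitial area, necrosis, non-tumor, tumor]."""
    preds = tile_probs > threshold
    tumor_tiles = preds[:, 3].sum().item()
    stroma_tiles = preds[:, 0].sum().item()  # interstitial (stromal) area
    # Ratio of tumor tiles to tumor-plus-stroma tiles; one of several possible definitions.
    return tumor_tiles / max(tumor_tiles + stroma_tiles, 1)


# Example with random tile predictions from a whole-slide image.
tsr = tumor_stroma_ratio(torch.rand(500, 4))
```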
Related papers
- GRU-Net: Gaussian Attention Aided Dense Skip Connection Based MultiResUNet for Breast Histopathology Image Segmentation [24.85210810502592]
This paper presents a modified version of MultiResU-Net for histopathology image segmentation.
It is selected as the backbone for its ability to analyze and segment complex features at multiple scales.
We validate our approach on two diverse breast cancer histopathology image datasets.
arXiv Detail & Related papers (2024-06-12T19:17:17Z) - MultiFusionNet: Multilayer Multimodal Fusion of Deep Neural Networks for Chest X-Ray Image Classification [16.479941416339265]
Automated systems utilizing convolutional neural networks (CNNs) have shown promise in improving the accuracy and efficiency of chest X-ray image classification.
We propose a novel deep learning-based multilayer multimodal fusion model that emphasizes extracting features from different layers and fusing them.
The proposed model achieves significantly higher accuracies of 97.21% and 99.60% for three-class and two-class classification, respectively.
arXiv Detail & Related papers (2024-01-01T11:50:01Z) - Affine-Consistent Transformer for Multi-Class Cell Nuclei Detection [76.11864242047074]
We propose a novel Affine-Consistent Transformer (AC-Former), which directly yields a sequence of nucleus positions.
We introduce an Adaptive Affine Transformer (AAT) module, which can automatically learn the key spatial transformations to warp original images for local network training.
Experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art algorithms on various benchmarks.
arXiv Detail & Related papers (2023-10-22T02:27:02Z) - Reliable Joint Segmentation of Retinal Edema Lesions in OCT Images [55.83984261827332]
In this paper, we propose a novel reliable multi-scale wavelet-enhanced transformer network.
We develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and a multi-scale transformer module.
Our proposed method achieves better segmentation accuracy with a high degree of reliability as compared to other state-of-the-art segmentation approaches.
arXiv Detail & Related papers (2022-12-01T07:32:56Z) - Preservation of High Frequency Content for Deep Learning-Based Medical Image Classification [74.84221280249876]
An efficient analysis of large amounts of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
arXiv Detail & Related papers (2022-05-08T15:29:54Z) - Improving Classification Model Performance on Chest X-Rays through Lung Segmentation [63.45024974079371]
We propose a deep learning approach to enhance abnormal chest x-ray (CXR) identification performance through segmentations.
Our approach is designed in a cascaded manner and incorporates two modules: a deep neural network with criss-cross attention modules (XLSor) for localizing the lung region in CXR images, and a CXR classification model whose backbone is a self-supervised momentum contrast (MoCo) model pre-trained on large-scale CXR data sets.
arXiv Detail & Related papers (2022-02-22T15:24:06Z) - PSGR: Pixel-wise Sparse Graph Reasoning for COVID-19 Pneumonia Segmentation in CT Images [83.26057031236965]
We propose a pixel-wise sparse graph reasoning (PSGR) module to enhance the modeling of long-range dependencies for COVID-19 infected region segmentation in CT images.
The PSGR module avoids imprecise pixel-to-node projections and preserves the inherent information of each pixel for global reasoning.
The solution has been evaluated against four widely-used segmentation models on three public datasets.
arXiv Detail & Related papers (2021-08-09T04:58:23Z) - Automated Prostate Cancer Diagnosis Based on Gleason Grading Using Convolutional Neural Network [12.161266795282915]
We propose a convolutional neural network (CNN)-based automatic classification method for accurate grading of prostate cancer (PCa) using whole slide histopathology images.
A data augmentation method named Patch-Based Image Reconstruction (PBIR) was proposed to reduce the high resolution of WSIs and increase their diversity.
A distribution correction module was developed to enhance the adaptation of the pretrained model to the target dataset.
arXiv Detail & Related papers (2020-11-29T06:42:08Z) - Multiscale Detection of Cancerous Tissue in High Resolution Slide Scans [0.0]
We present an algorithm for multi-scale tumor (chimeric cell) detection in high resolution slide scans.
Our approach modifies the effective receptive field at different layers in a CNN so that objects with a broad range of varying scales can be detected in a single forward pass.
arXiv Detail & Related papers (2020-10-01T18:56:46Z) - Improved Slice-wise Tumour Detection in Brain MRIs by Computing Dissimilarities between Latent Representations [68.8204255655161]
Anomaly detection for Magnetic Resonance Images (MRIs) can be solved with unsupervised methods.
We have proposed a slice-wise semi-supervised method for tumour detection based on the computation of a dissimilarity function in the latent space of a Variational AutoEncoder.
We show that by training the models on higher resolution images and by improving the quality of the reconstructions, we obtain results which are comparable with different baselines.
arXiv Detail & Related papers (2020-07-24T14:02:09Z) - Resource-Frugal Classification and Analysis of Pathology Slides Using Image Entropy [0.0]
Histopathology slides of lung malignancies are classified using resource-frugal convolutional neural networks (CNNs).
A lightweight CNN produces tile-level classifications that are aggregated to classify the slide.
Color-coded probability maps are created by overlapping tiles and averaging the tile-level probabilities at a pixel level (a minimal tile-aggregation sketch follows this list).
arXiv Detail & Related papers (2020-02-16T18:42:36Z)
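Relatedly, the last entry above (Resource-Frugal Classification and Analysis of Pathology Slides Using Image Entropy) describes aggregating tile-level predictions into a slide label and building color-coded probability maps by averaging overlapping tiles at the pixel level. The following is a minimal NumPy sketch of that aggregation under an assumed tile size and class count; it is an illustration, not that paper's implementation.

```python
# Hypothetical tile-probability aggregation; tile size and class count are assumptions.
import numpy as np


def probability_map(slide_shape, tiles, tile_size=256, n_classes=2):
    """tiles: iterable of ((row, col), probs), where probs has length n_classes."""
    h, w = slide_shape
    acc = np.zeros((h, w, n_classes), dtype=np.float32)
    count = np.zeros((h, w, 1), dtype=np.float32)
    for (r, c), probs in tiles:
        # Accumulate each tile's class probabilities over the pixels it covers.
        acc[r:r + tile_size, c:c + tile_size] += np.asarray(probs, dtype=np.float32)
        count[r:r + tile_size, c:c + tile_size] += 1.0
    return acc / np.maximum(count, 1.0)  # per-pixel average over overlapping tiles


# Example: overlapping 256x256 tiles at stride 128 on a 1024x1024 slide.
tiles = [((r, c), np.random.dirichlet([1, 1]))
         for r in range(0, 768, 128) for c in range(0, 768, 128)]
pmap = probability_map((1024, 1024), tiles)
# Slide-level classification by averaging the tile-level probabilities.
slide_probs = np.mean([p for _, p in tiles], axis=0)
```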