OCU-Net: A Novel U-Net Architecture for Enhanced Oral Cancer
Segmentation
- URL: http://arxiv.org/abs/2310.02486v1
- Date: Tue, 3 Oct 2023 23:25:19 GMT
- Authors: Ahmed Albishri, Syed Jawad Hussain Shah, Yugyung Lee, Rong Wang
- Abstract summary: This study proposes OCU-Net, a pioneering U-Net image segmentation architecture exclusively designed to detect oral cancer.
OCU-Net incorporates advanced deep learning modules, such as the Channel and Spatial Attention Fusion (CSAF) module.
Incorporating these modules yielded superior oral cancer segmentation performance on the two datasets used in this research.
- Score: 22.652902408898733
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate detection of oral cancer is crucial for improving patient outcomes.
However, the field faces two key challenges: the scarcity of deep
learning-based image segmentation research specifically targeting oral cancer
and the lack of annotated data. Our study proposes OCU-Net, a pioneering U-Net
image segmentation architecture exclusively designed to detect oral cancer in
hematoxylin and eosin (H&E) stained image datasets. OCU-Net incorporates
advanced deep learning modules, such as the Channel and Spatial Attention
Fusion (CSAF) module, a novel and innovative feature that emphasizes important
channel and spatial areas in H&E images while exploring contextual information.
In addition, OCU-Net integrates other innovative components, including a
Squeeze-and-Excitation (SE) attention module, an Atrous Spatial Pyramid Pooling
(ASPP) module, residual blocks, and multi-scale fusion. Incorporating these
modules yielded superior oral cancer segmentation performance on the two
datasets used in this research. Furthermore, we utilized the
efficient ImageNet pre-trained MobileNet-V2 model as a backbone of our OCU-Net
to create OCU-Netm, an enhanced version achieving state-of-the-art results.
Comprehensive evaluation demonstrates that OCU-Net and OCU-Netm outperform
existing segmentation methods, highlighting their precision in identifying
cancer cells in H&E images from the OCDC and ORCA datasets.
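The attention components named in the abstract can be illustrated with a small, self-contained sketch. This is a generic illustration of SE-style channel attention combined with a simple spatial attention gate, not the paper's actual CSAF module; the linear weights are random stand-ins for parameters that a real network would learn.

```python
import numpy as np

def se_channel_attention(x, reduction=2):
    """SE-style channel attention on a feature map x of shape (C, H, W).

    Squeeze: global average pool per channel; Excite: small MLP + sigmoid
    produces one gate per channel. Weights here are random, for illustration.
    """
    c, h, w = x.shape
    squeezed = x.mean(axis=(1, 2))                  # squeeze -> (C,)
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((c // reduction, c))   # illustrative MLP weights
    w2 = rng.standard_normal((c, c // reduction))
    hidden = np.maximum(w1 @ squeezed, 0.0)         # ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))     # sigmoid -> (C,) in (0, 1)
    return x * gate[:, None, None]                  # reweight each channel

def spatial_attention(x):
    """Gate each spatial location using pooled channel statistics."""
    avg_pool = x.mean(axis=0)                       # (H, W)
    max_pool = x.max(axis=0)                        # (H, W)
    score = avg_pool + max_pool                     # a learned conv would combine these
    gate = 1.0 / (1.0 + np.exp(-score))             # sigmoid -> per-pixel weight
    return x * gate[None, :, :]

def fused_attention(x):
    """Fuse channel and spatial attention by applying both and summing."""
    return se_channel_attention(x) + spatial_attention(x)

feat = np.random.default_rng(1).standard_normal((8, 16, 16))
out = fused_attention(feat)
print(out.shape)  # (8, 16, 16): attention preserves the feature-map shape
```

In a trained model the gates emphasize informative channels and spatial regions of the H&E feature maps; how CSAF actually fuses the two branches is specified in the paper itself.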
Related papers
- AWGUNET: Attention-Aided Wavelet Guided U-Net for Nuclei Segmentation in Histopathology Images [26.333686941245197]
We present a segmentation approach that combines the U-Net architecture with a DenseNet-121 backbone.
Our model introduces the Wavelet-guided channel attention module to enhance cell boundary delineation.
Experimental results on two publicly accessible histopathology datasets, MoNuSeg and TNBC, underscore the superiority of the proposed model.
arXiv Detail & Related papers (2024-06-12T17:10:27Z)
- AG-CRC: Anatomy-Guided Colorectal Cancer Segmentation in CT with Imperfect Anatomical Knowledge [9.961742312147674]
We develop a novel Anatomy-Guided segmentation framework to exploit the auto-generated organ masks.
We extensively evaluate the proposed method on two CRC segmentation datasets.
arXiv Detail & Related papers (2023-10-07T03:22:06Z)
- Category Guided Attention Network for Brain Tumor Segmentation in MRI [6.685945448824158]
We propose a novel segmentation network named Category Guided Attention U-Net (CGA U-Net).
In this model, we design a Supervised Attention Module (SAM) based on the attention mechanism, which captures more accurate and stable long-range dependencies in feature maps without introducing much computational cost.
Experimental results on the BraTS 2019 dataset show that the proposed method outperforms state-of-the-art algorithms in both segmentation performance and computational complexity.
arXiv Detail & Related papers (2022-03-29T09:22:29Z)
- InDuDoNet+: A Model-Driven Interpretable Dual Domain Network for Metal Artifact Reduction in CT Images [53.4351366246531]
We construct a novel interpretable dual domain network, termed InDuDoNet+, into which the CT imaging process is finely embedded.
We analyze the CT values among different tissues and merge the prior observations into a prior network for our InDuDoNet+, which significantly improves its generalization performance.
arXiv Detail & Related papers (2021-12-23T15:52:37Z)
- PSGR: Pixel-wise Sparse Graph Reasoning for COVID-19 Pneumonia Segmentation in CT Images [83.26057031236965]
We propose a pixel-wise sparse graph reasoning (PSGR) module to enhance the modeling of long-range dependencies for COVID-19 infected region segmentation in CT images.
The PSGR module avoids imprecise pixel-to-node projections and preserves the inherent information of each pixel for global reasoning.
The solution has been evaluated against four widely-used segmentation models on three public datasets.
arXiv Detail & Related papers (2021-08-09T04:58:23Z)
- RCA-IUnet: A residual cross-spatial attention guided inception U-Net model for tumor segmentation in breast ultrasound imaging [0.6091702876917281]
The article introduces an efficient residual cross-spatial attention guided inception U-Net (RCA-IUnet) model with minimal training parameters for tumor segmentation.
The RCA-IUnet model follows U-Net topology with residual inception depth-wise separable convolution and hybrid pooling layers.
Cross-spatial attention filters are added to suppress irrelevant features and focus on the target structure.
arXiv Detail & Related papers (2021-08-05T10:35:06Z)
- Towards a Computed-Aided Diagnosis System in Colonoscopy: Automatic Polyp Segmentation Using Convolution Neural Networks [10.930181796935734]
We present a deep learning framework for recognizing lesions in colonoscopy and capsule endoscopy images.
To our knowledge, we present the first work to use FCNs for polyp segmentation in addition to proposing a novel combination of SfS and RGB that boosts performance.
arXiv Detail & Related papers (2021-01-15T10:08:53Z)
- DONet: Dual Objective Networks for Skin Lesion Segmentation [77.9806410198298]
We propose a simple yet effective framework, named Dual Objective Networks (DONet), to improve the skin lesion segmentation.
Our DONet adopts two symmetric decoders to produce different predictions for approaching different objectives.
To address the challenge of the wide variety of lesion scales and shapes in dermoscopic images, we additionally propose a recurrent context encoding module (RCEM).
arXiv Detail & Related papers (2020-08-19T06:02:46Z)
- Pairwise Relation Learning for Semi-supervised Gland Segmentation [90.45303394358493]
We propose a pairwise relation-based semi-supervised (PRS2) model for gland segmentation on histology images.
This model consists of a segmentation network (S-Net) and a pairwise relation network (PR-Net).
We evaluate our model against five recent methods on the GlaS dataset and three recent methods on the CRAG dataset.
arXiv Detail & Related papers (2020-08-06T15:02:38Z)
- KiU-Net: Towards Accurate Segmentation of Biomedical Images using Over-complete Representations [59.65174244047216]
We propose an over-complete architecture (Ki-Net) which involves projecting the data onto higher dimensions.
This network, when augmented with U-Net, results in significant improvements in the case of segmenting small anatomical landmarks.
We evaluate the proposed method on the task of brain anatomy segmentation from 2D Ultrasound of preterm neonates.
arXiv Detail & Related papers (2020-06-08T18:59:24Z)
- Inf-Net: Automatic COVID-19 Lung Infection Segmentation from CT Images [152.34988415258988]
Automated detection of lung infections from computed tomography (CT) images offers a great potential to augment the traditional healthcare strategy for tackling COVID-19.
However, segmenting infected regions from CT slices faces several challenges, including high variation in infection characteristics and low intensity contrast between infections and normal tissues.
To address these challenges, a novel COVID-19 Deep Lung Infection Network (Inf-Net) is proposed to automatically identify infected regions from chest CT slices.
arXiv Detail & Related papers (2020-04-22T07:30:56Z)
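The papers listed above all benchmark segmentation quality against ground-truth masks. As a point of reference, the two standard overlap metrics for binary segmentation, the Dice coefficient and IoU (Jaccard index), can be computed as follows. This is a minimal sketch of the generic metrics, not code from any of the papers.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity of two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    """Intersection-over-Union (Jaccard index): |A ∩ B| / |A ∪ B|."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

# Toy 2x3 masks: intersection = 2 pixels, |pred| = |gt| = 3, union = 4.
pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_coefficient(pred, gt), 4))  # 0.6667  (= 2*2 / (3+3))
print(round(iou(pred, gt), 4))               # 0.5     (= 2 / 4)
```

The small `eps` term keeps the metrics defined when both masks are empty, a common convention in segmentation codebases.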
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.