TC-Net: Triple Context Network for Automated Stroke Lesion Segmentation
- URL: http://arxiv.org/abs/2202.13687v1
- Date: Mon, 28 Feb 2022 11:12:16 GMT
- Title: TC-Net: Triple Context Network for Automated Stroke Lesion Segmentation
- Authors: Xiuquan Du, Kunpeng Ma
- Abstract summary: We propose a new network, Triple Context Network (TC-Net), with the capture of spatial contextual information as the core.
Our network is evaluated on the open ATLAS dataset, achieving the highest DSC of 0.594, a Hausdorff distance of 27.005 mm, and an average symmetric surface distance of 7.137 mm.
- Score: 0.5482532589225552
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate lesion segmentation plays a key role in the clinical mapping of
stroke. Convolutional neural network (CNN) approaches based on U-shaped
structures have achieved remarkable performance in this task. However, the
single-stage encoder-decoder structure cannot resolve the inter-class similarity
caused by the inadequate use of contextual information, such as lesion-tissue
similarity. In addition, most approaches use fine-grained spatial attention to
capture spatial context, yet fail to generate accurate attention maps in the
encoding stage and lack effective regularization. In this work, we
propose a new network, Triple Context Network (TC-Net), with the capture of
spatial contextual information as the core. We first design a coarse-grained
patch attention module to generate patch-level attention maps in the encoding
stage to distinguish targets from patches and learn target-specific detail
features. Then, to enrich the representation of boundary information of these
features, a cross-feature fusion module with global contextual information is
explored to guide the selective aggregation of 2D and 3D feature maps, which
compensates for the lack of boundary learning capability of 2D convolution.
Finally, we use multi-scale deconvolution instead of linear interpolation to
enhance the recovery of target space and boundary information in the decoding
stage. Our network is evaluated on the open dataset ATLAS, achieving the
highest DSC of 0.594, a Hausdorff distance of 27.005 mm, and an average
symmetric surface distance of 7.137 mm, outperforming other state-of-the-art
methods.
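The abstract above describes its three components only at a high level. As a rough illustration of the first one, the coarse-grained patch attention module, the sketch below shows what patch-level attention over an encoder feature map could look like in PyTorch; the patch size, the 1x1-convolution scoring head, and the class name PatchAttention are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchAttention(nn.Module):
    """Coarse-grained patch-level attention (hypothetical sketch).

    Each PxP patch of the feature map receives a single weight in [0, 1],
    so patch-level (rather than pixel-level) attention steers the encoder
    toward target-containing regions.
    """

    def __init__(self, channels: int, patch_size: int = 8):
        super().__init__()
        self.patch_size = patch_size
        self.score = nn.Conv2d(channels, 1, kernel_size=1)  # 1x1 scoring head

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) encoder features
        p = self.patch_size
        # Pool each PxP patch to one descriptor, then score it.
        pooled = F.avg_pool2d(x, kernel_size=p, stride=p)       # (B, C, H/p, W/p)
        patch_weights = torch.sigmoid(self.score(pooled))       # (B, 1, H/p, W/p)
        # Broadcast the coarse map back to full resolution and re-weight.
        weights = F.interpolate(patch_weights, size=x.shape[-2:], mode="nearest")
        return x * weights

if __name__ == "__main__":
    feats = torch.randn(2, 64, 64, 64)                  # toy encoder features
    print(PatchAttention(64, patch_size=8)(feats).shape)  # torch.Size([2, 64, 64, 64])
```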
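The cross-feature fusion module is likewise described only by its goal: using global context to guide the selective aggregation of 2D and 3D feature maps. Below is one plausible, hypothetical realization in which a global-average-pooled gate decides, channel by channel, how much of the depth-collapsed 3D branch to blend into the 2D branch; the gating design and the names are assumptions.

```python
import torch
import torch.nn as nn

class CrossFeatureFusion(nn.Module):
    """Hypothetical gated fusion of a 2D slice branch with a 3D slab branch."""

    def __init__(self, channels: int):
        super().__init__()
        # Global context -> per-channel gate in [0, 1].
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, feat2d: torch.Tensor, feat3d: torch.Tensor) -> torch.Tensor:
        # feat2d: (B, C, H, W);  feat3d: (B, C, D, H, W)
        feat3d_2d = feat3d.mean(dim=2)                          # collapse depth -> (B, C, H, W)
        g = self.gate(torch.cat([feat2d, feat3d_2d], dim=1))    # (B, C, 1, 1)
        # The gate decides how much 3D boundary context to inject per channel.
        return g * feat3d_2d + (1.0 - g) * feat2d

if __name__ == "__main__":
    f2d = torch.randn(1, 32, 48, 48)
    f3d = torch.randn(1, 32, 5, 48, 48)
    print(CrossFeatureFusion(32)(f2d, f3d).shape)               # torch.Size([1, 32, 48, 48])
```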
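For the decoder, the abstract contrasts multi-scale deconvolution with linear interpolation. The hedged sketch below places a learned, multi-scale transposed-convolution upsampler next to a bilinear-interpolation baseline; the specific kernel sizes and the 1x1 fusion convolution are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleDeconv(nn.Module):
    """Hypothetical multi-scale learned upsampling (factor 2) for the decoder."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Two transposed convolutions with different receptive fields, both x2.
        self.up_small = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.up_large = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1)
        self.fuse = nn.Conv2d(2 * out_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        up = torch.cat([self.up_small(x), self.up_large(x)], dim=1)
        return self.fuse(up)

if __name__ == "__main__":
    x = torch.randn(1, 64, 32, 32)
    learned = MultiScaleDeconv(64, 32)(x)                                   # (1, 32, 64, 64)
    baseline = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
    print(learned.shape, baseline.shape)
```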
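The reported metrics (DSC, Hausdorff distance, average symmetric surface distance) are standard and can be reproduced from binary masks. The snippet below is a straightforward NumPy/SciPy computation, assuming an isotropic 1 mm voxel spacing for the distance metrics.

```python
import numpy as np
from scipy import ndimage

def surface(mask: np.ndarray) -> np.ndarray:
    """Boundary voxels of a binary mask (voxels removed by a 1-voxel erosion)."""
    return mask & ~ndimage.binary_erosion(mask)

def dsc(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def surface_distances(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """Distances from each boundary voxel of one mask to the other mask's boundary."""
    sp, sg = surface(pred), surface(gt)
    dt_gt = ndimage.distance_transform_edt(~sg, sampling=spacing)
    dt_pred = ndimage.distance_transform_edt(~sp, sampling=spacing)
    return dt_gt[sp], dt_pred[sg]

def hausdorff_and_assd(pred, gt, spacing=(1.0, 1.0, 1.0)):
    d_pg, d_gp = surface_distances(pred, gt, spacing)
    hd = max(d_pg.max(), d_gp.max())              # Hausdorff distance (mm)
    assd = np.concatenate([d_pg, d_gp]).mean()    # average symmetric surface distance (mm)
    return hd, assd

if __name__ == "__main__":
    gt = np.zeros((32, 32, 32), dtype=bool);  gt[10:20, 10:20, 10:20] = True
    pred = np.zeros_like(gt);                 pred[12:22, 10:20, 10:20] = True
    print(dsc(pred, gt), *hausdorff_and_assd(pred, gt))
```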
Related papers
- Boosting Cross-Domain Point Classification via Distilling Relational Priors from 2D Transformers [59.0181939916084]
Traditional 3D networks mainly focus on local geometric details and ignore the topological structure between local geometries.
We propose a novel Relational Priors Distillation (RPD) method to extract relational priors from transformers well-trained on massive images.
Experiments on the PointDA-10 and Sim-to-Real datasets verify that the proposed method consistently achieves state-of-the-art performance on unsupervised domain adaptation (UDA) for point cloud classification.
arXiv Detail & Related papers (2024-07-26T06:29:09Z) - ELA: Efficient Local Attention for Deep Convolutional Neural Networks [15.976475674061287]
This paper introduces an Efficient Local Attention (ELA) method that achieves substantial performance improvements with a simple structure.
To overcome these challenges, we propose incorporating 1D convolution and Group Normalization as feature enhancement techniques.
ELA can be seamlessly integrated into deep CNN networks such as ResNet, MobileNet, and DeepLab.
arXiv Detail & Related papers (2024-03-02T08:06:18Z) - Structure Aware and Class Balanced 3D Object Detection on nuScenes Dataset [0.0]
NuTonomy's nuScenes dataset greatly extends commonly used datasets such as KITTI.
The localization precision of the CBGS model is affected by the loss of spatial information in the downscaled feature maps.
We propose to enhance its performance by designing an auxiliary network that makes full use of the structure information of the 3D point cloud.
arXiv Detail & Related papers (2022-05-25T06:18:49Z) - Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z) - Self-semantic contour adaptation for cross modality brain tumor segmentation [13.260109561599904]
We propose exploiting low-level edge information to facilitate the adaptation as a precursor task.
The precise contour then provides spatial information to guide the semantic adaptation.
We evaluate our framework on the BraTS2018 database for cross-modality segmentation of brain tumors.
arXiv Detail & Related papers (2022-01-13T15:16:55Z) - Residual Moment Loss for Medical Image Segmentation [56.72261489147506]
Location information is proven to help deep learning models capture the manifold structure of target objects.
Most existing methods encode location information only implicitly, leaving the network to learn it on its own.
We propose a novel loss function, namely residual moment (RM) loss, to explicitly embed the location information of segmentation targets; a rough sketch of a moment-based location term appears after this list.
arXiv Detail & Related papers (2021-06-27T09:31:49Z) - S3Net: 3D LiDAR Sparse Semantic Segmentation Network [1.330528227599978]
S3Net is a novel convolutional neural network for LiDAR point cloud semantic segmentation.
It adopts an encoder-decoder backbone that consists of a Sparse Intra-channel Attention Module (SIntraAM) and a Sparse Inter-channel Attention Module (SInterAM).
arXiv Detail & Related papers (2021-03-15T22:15:24Z) - PC-RGNN: Point Cloud Completion and Graph Neural Network for 3D Object Detection [57.49788100647103]
LiDAR-based 3D object detection is an important task for autonomous driving.
Current approaches suffer from sparse and partial point clouds of distant and occluded objects.
In this paper, we propose a novel two-stage approach, namely PC-RGNN, dealing with such challenges by two specific solutions.
arXiv Detail & Related papers (2020-12-18T18:06:43Z) - Joint Left Atrial Segmentation and Scar Quantification Based on a DNN with Spatial Encoding and Shape Attention [21.310508988246937]
We propose an end-to-end deep neural network (DNN) which can simultaneously segment the left atrial (LA) cavity and quantify LA scars.
The proposed framework incorporates the continuous spatial information of the target by introducing a spatially encoded (SE) loss.
For LA segmentation, the proposed method reduced the mean Hausdorff distance from 36.4 mm to 20.0 mm compared with a basic 3D U-Net.
arXiv Detail & Related papers (2020-06-23T13:55:29Z) - Real-Time High-Performance Semantic Image Segmentation of Urban Street Scenes [98.65457534223539]
We propose a real-time high-performance DCNN-based method for robust semantic segmentation of urban street scenes.
The proposed method achieves 73.6% and 68.0% mean Intersection over Union (mIoU) at inference speeds of 51.0 fps and 39.3 fps, respectively.
arXiv Detail & Related papers (2020-03-11T08:45:53Z) - Cross-layer Feature Pyramid Network for Salient Object Detection [102.20031050972429]
We propose a novel Cross-layer Feature Pyramid Network to improve the progressive fusion in salient object detection.
The features distributed to each layer carry both semantics and salient details from all other layers simultaneously, reducing the loss of important information.
arXiv Detail & Related papers (2020-02-25T14:06:27Z)
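The residual moment (RM) loss entry in the list above embeds target location explicitly in the training objective. As a rough sketch of that general idea, not the RM-loss formulation from that paper, the snippet below penalizes the squared distance between the soft center of mass of a predicted probability map and that of the ground-truth mask; in practice such a term would be added to a standard Dice or cross-entropy loss.

```python
import torch

def center_of_mass(mask: torch.Tensor) -> torch.Tensor:
    """Soft center of mass of a (B, H, W) non-negative map, in pixel coordinates."""
    b, h, w = mask.shape
    ys = torch.arange(h, dtype=mask.dtype, device=mask.device).view(1, h, 1)
    xs = torch.arange(w, dtype=mask.dtype, device=mask.device).view(1, 1, w)
    total = mask.sum(dim=(1, 2)).clamp_min(1e-8)
    cy = (mask * ys).sum(dim=(1, 2)) / total
    cx = (mask * xs).sum(dim=(1, 2)) / total
    return torch.stack([cy, cx], dim=1)           # (B, 2)

def location_loss(pred_prob: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """Mean squared distance between predicted and ground-truth centers of mass."""
    return ((center_of_mass(pred_prob) - center_of_mass(gt)) ** 2).sum(dim=1).mean()

if __name__ == "__main__":
    gt = torch.zeros(1, 64, 64);   gt[0, 20:30, 20:30] = 1.0
    pred = torch.zeros(1, 64, 64); pred[0, 25:35, 22:32] = 0.9
    print(location_loss(pred, gt))                # small positive scalar
```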