TransNuSeg: A Lightweight Multi-Task Transformer for Nuclei Segmentation
- URL: http://arxiv.org/abs/2307.08051v1
- Date: Sun, 16 Jul 2023 14:12:54 GMT
- Title: TransNuSeg: A Lightweight Multi-Task Transformer for Nuclei Segmentation
- Authors: Zhenqi He, Mathias Unberath, Jing Ke, Yiqing Shen
- Abstract summary: We make the first attempt at a pure Transformer framework for nuclei segmentation, called TransNuSeg.
To eliminate the divergent predictions from different branches in prior work, a novel self-distillation loss is introduced to explicitly impose consistency regularization between branches.
Experiments on two datasets, including MoNuSeg, show that our method outperforms state-of-the-art counterparts by 2-3% Dice with 30% fewer parameters.
- Score: 6.369485141013728
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nuclei are small in size, yet in real clinical practice the global spatial
information and the correlation of color or brightness contrast between nuclei and
background are considered crucial for accurate nuclei segmentation. However, the
field of automatic nuclei segmentation is dominated by Convolutional Neural
Networks (CNNs), while the potential of the recently prevalent Transformers, which
are powerful in capturing local-global correlations, has not been fully explored.
To this end, we make the first attempt at a pure Transformer framework for nuclei
segmentation, called TransNuSeg. Different from prior work, we decouple the
challenging nuclei segmentation task into an intrinsic multi-task learning problem,
where a tri-decoder structure is employed for nuclei instance, nuclei edge, and
clustered edge segmentation respectively. To eliminate the divergent predictions
from different branches in previous work, a novel self-distillation loss is
introduced to explicitly impose consistency regularization between branches.
Moreover, to exploit the high correlation between branches and also reduce the
number of parameters, an efficient attention-sharing scheme is proposed that
partially shares the self-attention heads amongst the tri-decoders. Finally, a
token MLP bottleneck replaces the over-parameterized Transformer bottleneck for a
further reduction in model complexity. Experiments on two datasets of different
modalities, including MoNuSeg, have shown that our method can outperform
state-of-the-art counterparts such as CA2.5-Net by 2-3% Dice with 30% fewer
parameters. In conclusion, TransNuSeg confirms the strength of the Transformer in
the context of nuclei segmentation, and can thus serve as an efficient solution
for real clinical practice. Code is available at
https://github.com/zhenqi-he/transnuseg.
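The abstract does not specify the exact form of the self-distillation loss; a minimal sketch of the underlying idea, a consistency regularizer that penalizes disagreement between the decoder branches (here with a hypothetical mean-teacher target and squared-error distance, which may differ from the paper's formulation), could look like:

```python
import numpy as np

def self_distillation_loss(branch_probs):
    """Consistency regularization between decoder branches.

    branch_probs: list of arrays of identical shape, each holding one
    branch's probability map (e.g. nuclei, nuclei-edge, and clustered-edge
    predictions projected into a shared target space).

    Returns the mean squared disagreement of each branch from the ensemble
    average, which is zero exactly when all branches agree.
    """
    mean_pred = np.mean(branch_probs, axis=0)  # ensemble prediction as soft "teacher"
    return float(np.mean([(p - mean_pred) ** 2 for p in branch_probs]))
```

In practice such a term would be added, with a weighting coefficient, to the per-branch supervised segmentation losses; the paper's actual pairing of branches and choice of distance may differ.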
Related papers
- Leveraging SO(3)-steerable convolutions for pose-robust semantic segmentation in 3D medical data [2.207533492015563]
We present a new family of segmentation networks that use equivariant voxel convolutions based on spherical harmonics.
These networks are robust to data poses not seen during training, and do not require rotation-based data augmentation during training.
We demonstrate improved segmentation performance in MRI brain tumor and healthy brain structure segmentation tasks.
arXiv Detail & Related papers (2023-03-01T09:27:08Z)
- Learning from partially labeled data for multi-organ and tumor segmentation [102.55303521877933]
We propose a Transformer based dynamic on-demand network (TransDoDNet) that learns to segment organs and tumors on multiple datasets.
A dynamic head enables the network to accomplish multiple segmentation tasks flexibly.
We create a large-scale partially labeled Multi-Organ and Tumor benchmark, termed MOTS, and demonstrate the superior performance of our TransDoDNet over other competitors.
arXiv Detail & Related papers (2022-11-13T13:03:09Z)
- MISSU: 3D Medical Image Segmentation via Self-distilling TransUNet [55.16833099336073]
We propose to self-distill a Transformer-based UNet for medical image segmentation.
It simultaneously learns global semantic information and local spatial-detailed features.
Our MISSU achieves the best performance over previous state-of-the-art methods.
arXiv Detail & Related papers (2022-06-02T07:38:53Z)
- Full Transformer Framework for Robust Point Cloud Registration with Deep Information Interaction [9.431484068349903]
Recent Transformer-based methods have achieved advanced performance in point cloud registration.
Recent CNNs fail to model global relations due to their local receptive fields.
The shallow-wide architecture of Transformers and the lack of positional encoding lead to indistinct feature extraction.
arXiv Detail & Related papers (2021-12-17T08:40:52Z)
- Bend-Net: Bending Loss Regularized Multitask Learning Network for Nuclei Segmentation in Histopathology Images [65.47507533905188]
We propose a novel multitask learning network with a bending loss regularizer to separate overlapped nuclei accurately.
The newly proposed multitask learning architecture enhances the generalization by learning shared representation from three tasks.
The proposed bending loss defines high penalties to concave contour points with large curvatures, and applies small penalties to convex contour points with small curvatures.
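The exact bending loss is defined in the Bend-Net paper; a toy sketch of the idea it describes, a curvature-weighted contour penalty in which concave vertices are penalized more heavily than convex ones (the discretization and weights below are hypothetical), might look like:

```python
import numpy as np

def bending_penalty(contour, concave_weight=10.0, convex_weight=1.0):
    """Toy bending-style regularizer on a closed 2D contour.

    contour: (N, 2) array of vertices in counter-clockwise (CCW) order.
    The turning angle at each vertex serves as a discrete curvature proxy;
    concave vertices (negative cross product for a CCW contour) receive a
    much larger weight than convex ones, mirroring the high-penalty /
    low-penalty split described above.
    """
    pts = np.asarray(contour, dtype=float)
    prev_e = pts - np.roll(pts, 1, axis=0)     # incoming edge at each vertex
    next_e = np.roll(pts, -1, axis=0) - pts    # outgoing edge at each vertex
    cross = prev_e[:, 0] * next_e[:, 1] - prev_e[:, 1] * next_e[:, 0]
    dot = (prev_e * next_e).sum(axis=1)
    angle = np.abs(np.arctan2(cross, dot))     # turning angle (curvature proxy)
    weight = np.where(cross < 0, concave_weight, convex_weight)
    return float((weight * angle).sum())
```

For a convex shape every vertex gets the small weight, while a reflex (concave) vertex, as found between overlapping nuclei, dominates the penalty.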
arXiv Detail & Related papers (2021-09-30T17:29:44Z)
- nnFormer: Interleaved Transformer for Volumetric Segmentation [50.10441845967601]
We introduce nnFormer, a powerful segmentation model with an interleaved architecture based on empirical combination of self-attention and convolution.
nnFormer achieves substantial improvements over previous transformer-based methods on two commonly used datasets, Synapse and ACDC.
arXiv Detail & Related papers (2021-09-07T17:08:24Z)
- A Multi-Branch Hybrid Transformer Network for Corneal Endothelial Cell Segmentation [28.761569157861018]
Corneal endothelial cell segmentation plays a vital role in quantifying clinical indicators such as cell density, coefficient of variation, and hexagonality.
Due to the limited receptive field of local convolution and continuous downsampling, existing deep learning segmentation methods cannot make full use of global context.
This paper proposes a Multi-Branch hybrid Transformer Network (MBT-Net) based on the transformer and body-edge branch.
arXiv Detail & Related papers (2021-05-21T07:31:09Z)
- DyCo3D: Robust Instance Segmentation of 3D Point Clouds through Dynamic Convolution [136.7261709896713]
We propose a data-driven approach that generates the appropriate convolution kernels to apply in response to the nature of the instances.
The proposed method achieves promising results on both ScanNetV2 and S3DIS.
It also improves inference speed by more than 25% over the current state-of-the-art.
arXiv Detail & Related papers (2020-11-26T14:56:57Z)
- Boundary-assisted Region Proposal Networks for Nucleus Segmentation [89.69059532088129]
Machine learning models cannot perform well because of the large number of crowded nuclei.
We devise a Boundary-assisted Region Proposal Network (BRP-Net) that achieves robust instance-level nucleus segmentation.
arXiv Detail & Related papers (2020-06-04T08:26:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.