HF-UNet: Learning Hierarchically Inter-Task Relevance in Multi-Task
U-Net for Accurate Prostate Segmentation
- URL: http://arxiv.org/abs/2005.10439v2
- Date: Sat, 23 May 2020 13:26:25 GMT
- Title: HF-UNet: Learning Hierarchically Inter-Task Relevance in Multi-Task
U-Net for Accurate Prostate Segmentation
- Authors: Kelei He, Chunfeng Lian, Bing Zhang, Xin Zhang, Xiaohuan Cao, Dong
Nie, Yang Gao, Junfeng Zhang, Dinggang Shen
- Abstract summary: We tackle the challenging task of prostate segmentation in CT images with a two-stage network: the first stage quickly localizes the prostate, and the second stage accurately segments it.
To precisely segment the prostate in the second stage, we formulate prostate segmentation as a multi-task learning problem, comprising a main task to segment the prostate and an auxiliary task to delineate the prostate boundary.
Conventional multi-task deep networks typically share most of their parameters (i.e., feature representations) across all tasks, which may limit their data-fitting ability, as the specificities of different tasks are inevitably ignored.
- Score: 56.86396352441269
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate segmentation of the prostate is a key step in external
beam radiation therapy. In this paper, we tackle the challenging task of
prostate segmentation in CT images with a two-stage network: the first stage
quickly localizes the prostate, and the second stage accurately segments it.
To precisely segment the prostate in the second stage, we formulate prostate
segmentation as a multi-task learning problem, comprising a main task to
segment the prostate and an auxiliary task to delineate the prostate
boundary. The auxiliary task provides additional guidance where the prostate
boundary is unclear in CT images. Moreover, conventional multi-task deep
networks typically share most of their parameters (i.e., feature
representations) across all tasks, which may limit their data-fitting
ability, as the specificities of different tasks are inevitably ignored. We
instead address this with a hierarchically-fused U-Net structure, namely
HF-UNet. HF-UNet has two complementary branches for the two tasks, with a
novel attention-based task consistency learning block enabling communication
between the two decoding branches at each level. HF-UNet can therefore learn
shared representations for the different tasks hierarchically, while
preserving the specificity of the representations learned for each task. We
extensively evaluated the proposed method on a large planning CT image
dataset comprising images acquired from 339 patients. The experimental
results show that HF-UNet outperforms conventional multi-task network
architectures and state-of-the-art methods.
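The abstract does not specify how the attention-based task consistency learning block is built. As an illustration only, a toy NumPy sketch of the general idea, where each decoder level exchanges features between the segmentation and boundary branches under an attention gate, might look like the following; the element-wise attention surrogate here is a hypothetical stand-in, not the authors' design:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def task_consistency_fuse(feat_seg, feat_bnd):
    """Toy cross-branch fusion at one decoder level: an attention map
    gates how much each task branch borrows from the other, so shared
    information flows while task-specific features are preserved."""
    att = sigmoid(feat_seg * feat_bnd)      # hypothetical shared attention map
    fused_seg = feat_seg + att * feat_bnd   # segmentation branch, enriched
    fused_bnd = feat_bnd + att * feat_seg   # boundary branch, enriched
    return fused_seg, fused_bnd

# usage on dummy feature maps (channels x height x width)
rng = np.random.default_rng(0)
f_seg = rng.standard_normal((8, 16, 16))
f_bnd = rng.standard_normal((8, 16, 16))
g_seg, g_bnd = task_consistency_fuse(f_seg, f_bnd)
print(g_seg.shape, g_bnd.shape)  # shapes are preserved by the fusion
```

In this sketch each branch keeps its own features as a residual and only adds gated information from the sibling branch, mirroring the stated goal of learning shared representations while preserving task specificity.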
Related papers
- YOLO-MED : Multi-Task Interaction Network for Biomedical Images [18.535117490442953]
YOLO-Med is an efficient end-to-end multi-task network capable of concurrently performing object detection and semantic segmentation.
Our model exhibits promising results in balancing accuracy and speed when evaluated on the Kvasir-seg dataset and a private biomedical image dataset.
arXiv Detail & Related papers (2024-03-01T03:20:42Z)
- Multi-task Learning To Improve Semantic Segmentation Of CBCT Scans Using Image Reconstruction [0.8739101659113155]
We aim to improve automated segmentation in CBCTs through multi-task learning.
To improve segmentation, two approaches are investigated. First, we perform multi-task learning to add morphology-based regularization.
Second, we use an image reconstruction task to reconstruct the best-quality CBCT.
arXiv Detail & Related papers (2023-12-20T12:48:18Z)
- M$^{2}$SNet: Multi-scale in Multi-scale Subtraction Network for Medical Image Segmentation [73.10707675345253]
We propose a general multi-scale in multi-scale subtraction network (M$^{2}$SNet) to address diverse segmentation tasks in medical images.
Our method performs favorably against most state-of-the-art methods under different evaluation metrics on eleven datasets spanning four medical image segmentation tasks.
arXiv Detail & Related papers (2023-03-20T06:26:49Z)
- Histogram of Oriented Gradients Meet Deep Learning: A Novel Multi-task Deep Network for Medical Image Semantic Segmentation [18.066680957993494]
We present our novel deep multi-task learning method for medical image segmentation.
We generate the pseudo-labels of an auxiliary task in an unsupervised manner.
Our method consistently improves performance compared to counterpart methods.
arXiv Detail & Related papers (2022-04-02T23:50:29Z)
- InDuDoNet+: A Model-Driven Interpretable Dual Domain Network for Metal Artifact Reduction in CT Images [53.4351366246531]
We construct a novel interpretable dual-domain network, termed InDuDoNet+, into which the CT imaging process is finely embedded.
We analyze the CT values among different tissues and merge these prior observations into a prior network for InDuDoNet+, which significantly improves its generalization performance.
arXiv Detail & Related papers (2021-12-23T15:52:37Z)
- Semi-supervised Medical Image Segmentation through Dual-task Consistency [18.18484640332254]
We propose a novel dual-task deep network that jointly predicts a pixel-wise segmentation map and a geometry-aware level set representation of the target.
Our method largely improves performance by incorporating unlabeled data.
Our framework outperforms the state-of-the-art semi-supervised medical image segmentation methods.
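The geometry-aware level set representation predicted as the second task is commonly realized as a signed distance map of the segmentation target. As a toy sketch only (not this paper's implementation, which would use a fast distance transform), such a map can be derived from a binary mask by brute force:

```python
import numpy as np

def signed_distance(mask):
    """Brute-force signed distance map: negative inside the object,
    positive outside. Fine for toy grids; real pipelines would use a
    fast distance transform instead."""
    fg = np.argwhere(mask > 0)   # foreground pixel coordinates
    bg = np.argwhere(mask == 0)  # background pixel coordinates
    out = np.zeros(mask.shape, dtype=float)
    for y in range(mask.shape[0]):
        for x in range(mask.shape[1]):
            # distance to the nearest pixel of the opposite class
            targets = bg if mask[y, x] else fg
            d = np.sqrt(((targets - np.array([y, x])) ** 2).sum(axis=1)).min()
            out[y, x] = -d if mask[y, x] else d
    return out

# a 5x5 mask with a 3x3 foreground square
m = np.zeros((5, 5), dtype=int)
m[1:4, 1:4] = 1
sdm = signed_distance(m)
print(sdm[2, 2], sdm[0, 0])  # negative at the center, positive in the corner
```

Enforcing consistency between the predicted segmentation and the predicted level set gives the dual-task network an extra supervision signal on unlabeled data.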
arXiv Detail & Related papers (2020-09-09T17:49:21Z)
- MetricUNet: Synergistic Image- and Voxel-Level Learning for Precise CT Prostate Segmentation via Online Sampling [66.01558025094333]
We propose a two-stage framework, with the first stage to quickly localize the prostate region and the second stage to precisely segment the prostate.
We introduce a novel online metric learning module through voxel-wise sampling in the multi-task network.
Our method effectively learns more representative voxel-level features than conventional methods trained with cross-entropy or Dice loss.
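The snippet above does not detail the online metric learning module. As a purely hypothetical sketch of what voxel-wise online sampling for a triplet-style metric loss could look like, anchors and positives might be drawn from prostate voxels and negatives from the background:

```python
import numpy as np

def sample_voxel_triplets(features, labels, n_triplets=4, seed=0):
    """Toy online sampler: for each triplet, draw an anchor and a
    positive voxel from the foreground class and a negative voxel
    from the background, returning their feature vectors."""
    rng = np.random.default_rng(seed)
    pos_idx = np.flatnonzero(labels == 1)  # foreground voxel indices
    neg_idx = np.flatnonzero(labels == 0)  # background voxel indices
    triplets = []
    for _ in range(n_triplets):
        a, p = rng.choice(pos_idx, size=2)
        q = rng.choice(neg_idx)
        triplets.append((features[a], features[p], features[q]))
    return triplets

# dummy flattened volume: 100 voxels with 8-dim features each
rng = np.random.default_rng(1)
feats = rng.standard_normal((100, 8))
labs = (rng.random(100) > 0.5).astype(int)
trips = sample_voxel_triplets(feats, labs)
print(len(trips))  # 4 triplets of (anchor, positive, negative)
```

A metric loss over such triplets pulls same-class voxel features together and pushes different-class features apart, which is the general motivation for voxel-level metric learning alongside the region-level segmentation loss.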
arXiv Detail & Related papers (2020-05-15T10:37:02Z)
- Robust Medical Instrument Segmentation Challenge 2019 [56.148440125599905]
Intraoperative tracking of laparoscopic instruments is often a prerequisite for computer and robotic-assisted interventions.
Our challenge was based on a surgical data set comprising 10,040 annotated images acquired from a total of 30 surgical procedures.
The results confirm the initial hypothesis, namely that algorithm performance degrades with an increasing domain gap.
arXiv Detail & Related papers (2020-03-23T14:35:08Z)
- Unpaired Multi-modal Segmentation via Knowledge Distillation [77.39798870702174]
We propose a novel learning scheme for unpaired cross-modality image segmentation.
In our method, we heavily reuse network parameters by sharing all convolutional kernels across CT and MRI.
We have extensively validated our approach on two multi-class segmentation problems.
arXiv Detail & Related papers (2020-01-06T20:03:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.