A$^3$DSegNet: Anatomy-aware artifact disentanglement and segmentation
network for unpaired segmentation, artifact reduction, and modality
translation
- URL: http://arxiv.org/abs/2001.00339v3
- Date: Tue, 9 Mar 2021 12:49:56 GMT
- Authors: Yuanyuan Lyu, Haofu Liao, Heqin Zhu, S. Kevin Zhou
- Abstract summary: CBCT images are of low-quality and artifact-laden due to noise, poor tissue contrast, and the presence of metallic objects.
There exists a wealth of artifact-free, high quality CT images with vertebra annotations.
This motivates us to build a CBCT vertebra segmentation model using unpaired CT images with annotations.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spinal surgery planning necessitates automatic segmentation of vertebrae in
cone-beam computed tomography (CBCT), an intraoperative imaging modality that
is widely used in intervention. However, CBCT images are of low quality and
artifact-laden due to noise, poor tissue contrast, and the presence of metallic
objects, which makes vertebra segmentation, even when performed manually, a demanding task. In
contrast, there exists a wealth of artifact-free, high quality CT images with
vertebra annotations. This motivates us to build a CBCT vertebra segmentation
model using unpaired CT images with annotations. To overcome the domain and
artifact gaps between CBCT and CT, it is a must to address the three
heterogeneous tasks of vertebra segmentation, artifact reduction and modality
translation all together. To this end, we propose a novel anatomy-aware artifact
disentanglement and segmentation network (A$^3$DSegNet) that intensively
leverages knowledge sharing of these three tasks to promote learning.
Specifically, it takes a random pair of CBCT and CT images as the input and
manipulates the synthesis and segmentation via different decoding combinations
from the disentangled latent layers. Then, by proposing various forms of
consistency among the synthesized images and among segmented vertebrae, the
learning is achieved without paired (i.e., anatomically identical) data.
Finally, we stack 2D slices together and build 3D networks on top to obtain the
final 3D segmentation result. Extensive experiments on a large number of
clinical CBCT (21,364) and CT (17,089) images show that the proposed
A$^3$DSegNet performs significantly better than state-of-the-art competing
methods trained independently for each task and, remarkably, it achieves an
average Dice coefficient of 0.926 for unpaired 3D CBCT vertebra segmentation.
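For reference, the Dice coefficient reported above measures volumetric overlap between a predicted and a ground-truth segmentation mask. A minimal NumPy sketch is below; the function name and the epsilon guard are illustrative and not taken from the paper:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks.

    pred, target: boolean or {0, 1} arrays of the same shape,
    e.g. a 3D volume built by stacking 2D segmentation slices.
    Returns a value in [0, 1]; 1 means perfect overlap.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    # eps avoids division by zero when both masks are empty
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)
```

A score of 0.926 thus means that, on average, the predicted vertebra volumes overlap the ground truth almost completely.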
Related papers
- Attention-based CT Scan Interpolation for Lesion Segmentation of Colorectal Liver Metastases [2.680862925538592]
Small liver lesions, common in colorectal liver metastases (CRLMs), are challenging for convolutional neural network (CNN) segmentation models.
We propose an unsupervised attention-based model to generate intermediate slices from consecutive triplet slices in CT scans.
Our model's outputs are consistent with the original input slices while increasing the segmentation performance in two cutting-edge 3D segmentation pipelines.
arXiv Detail & Related papers (2023-08-30T10:21:57Z)
- Denoising diffusion-based MRI to CT image translation enables automated spinal segmentation [8.094450260464354]
This retrospective study involved translating T1w and T2w MR image series into CT images in a total of n=263 pairs of CT/MR series.
Registration with two landmarks per vertebra enabled paired image-to-image translation from MR to CT and outperformed all unpaired approaches.
arXiv Detail & Related papers (2023-08-18T07:07:15Z)
- Multi-View Vertebra Localization and Identification from CT Images [57.56509107412658]
We propose a multi-view vertebra localization and identification method for CT images.
We convert the 3D problem into a 2D localization and identification task on different views.
Our method can learn the multi-view global information naturally.
arXiv Detail & Related papers (2023-07-24T14:43:07Z)
- Dual Multi-scale Mean Teacher Network for Semi-supervised Infection Segmentation in Chest CT Volume for COVID-19 [76.51091445670596]
Automatically detecting lung infections from computed tomography (CT) data plays an important role in combating COVID-19.
Most current COVID-19 infection segmentation methods rely mainly on 2D CT images, which lack a 3D sequential constraint.
Existing 3D CT segmentation methods focus on single-scale representations, which do not achieve multiple receptive field sizes on a 3D volume.
arXiv Detail & Related papers (2022-11-10T13:11:21Z)
- Self-supervised 3D anatomy segmentation using self-distilled masked image transformer (SMIT) [2.7298989068857487]
Self-supervised learning has demonstrated success in medical image segmentation using convolutional networks.
We show our approach is more accurate and requires fewer fine-tuning datasets than other pretext tasks.
arXiv Detail & Related papers (2022-05-20T17:55:14Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- QU-net++: Image Quality Detection Framework for Segmentation of 3D Medical Image Stacks [0.9594432031144714]
We propose an automated two-step method that evaluates the quality of medical images from 3D image stacks using a U-net++ model.
The detected images can then be used to further fine-tune the U-net++ model for semantic segmentation.
arXiv Detail & Related papers (2021-10-27T05:28:02Z)
- CyTran: A Cycle-Consistent Transformer with Multi-Level Consistency for Non-Contrast to Contrast CT Translation [56.622832383316215]
We propose a novel approach to translate unpaired contrast computed tomography (CT) scans to non-contrast CT scans.
Our approach is based on cycle-consistent generative adversarial convolutional transformers, for short, CyTran.
Our empirical results show that CyTran outperforms all competing methods.
arXiv Detail & Related papers (2021-10-12T23:25:03Z)
- TSGCNet: Discriminative Geometric Feature Learning with Two-Stream Graph Convolutional Network for 3D Dental Model Segmentation [141.2690520327948]
We propose a two-stream graph convolutional network (TSGCNet) to learn multi-view information from different geometric attributes.
We evaluate our proposed TSGCNet on a real-patient dataset of dental models acquired by 3D intraoral scanners.
arXiv Detail & Related papers (2020-12-26T08:02:56Z)
- Three-dimensional Segmentation of the Scoliotic Spine from MRI using Unsupervised Volume-based MR-CT Synthesis [3.6273410177512275]
We present an unsupervised, fully three-dimensional (3D) cross-modality synthesis method for segmenting scoliotic spines.
A 3D CycleGAN model is trained for an unpaired volume-to-volume translation across MR and CT domains.
The resulting segmentation is used to reconstruct a 3D model of the spine.
arXiv Detail & Related papers (2020-11-25T18:34:52Z)
- Synergistic Learning of Lung Lobe Segmentation and Hierarchical Multi-Instance Classification for Automated Severity Assessment of COVID-19 in CT Images [61.862364277007934]
We propose a synergistic learning framework for automated severity assessment of COVID-19 in 3D CT images.
A multi-task deep network (called M$^2$UNet) is then developed to assess the severity of COVID-19 patients.
Our M$^2$UNet consists of a patch-level encoder, a segmentation sub-network for lung lobe segmentation, and a classification sub-network for severity assessment.
arXiv Detail & Related papers (2020-05-08T03:16:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.