Improved distinct bone segmentation in upper-body CT through
multi-resolution networks
- URL: http://arxiv.org/abs/2301.13674v1
- Date: Tue, 31 Jan 2023 14:46:16 GMT
- Title: Improved distinct bone segmentation in upper-body CT through
multi-resolution networks
- Authors: Eva Schnider, Julia Wolleb, Antal Huck, Mireille Toranelli, Georg
Rauter, Magdalena Müller-Gerbl, Philippe C. Cattin
- Abstract summary: In distinct bone segmentation from upper-body CTs, a large field of view and a computationally taxing 3D architecture are required.
This leads to low-resolution results that lack detail, or to localisation errors due to missing spatial context.
We propose end-to-end trainable segmentation networks that combine several 3D U-Nets working at different resolutions.
- Score: 0.39583175274885335
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Purpose: Automated distinct bone segmentation from CT scans is widely used in
planning and navigation workflows. U-Net variants are known to provide
excellent results in supervised semantic segmentation. However, distinct
bone segmentation from upper-body CTs requires a large field of view and a
computationally taxing 3D architecture. This leads to low-resolution results
lacking detail, or to localisation errors due to missing spatial context when
using high-resolution inputs.
Methods: We propose to solve this problem by using end-to-end trainable
segmentation networks that combine several 3D U-Nets working at different
resolutions. Our approach, which extends and generalizes HookNet and MRN,
captures spatial information at a lower resolution and skips the encoded
information to the target network, which operates on smaller high-resolution
inputs. We evaluated our proposed architecture against single resolution
networks and performed an ablation study on information concatenation and the
number of context networks.
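The input pairing the Methods paragraph describes can be sketched in a few lines. This is a toy NumPy illustration, not the authors' configuration: the patch size, context size, subsampling factor, and function name are all illustrative assumptions. The context branch sees a large but downsampled field of view, while the target branch sees a small full-resolution crop of the same region.

```python
import numpy as np

def make_multires_pair(volume, center, patch=32, context=64, factor=2):
    """Toy multi-resolution input pairing (assumed sizes): the context
    branch receives a large, downsampled field of view; the target branch
    receives a small full-resolution patch around the same location."""
    z, y, x = center
    # Target input: small high-resolution crop centred on (z, y, x).
    h = patch // 2
    target = volume[z - h:z + h, y - h:y + h, x - h:x + h]
    # Context input: larger crop around the same centre, downsampled by
    # `factor` via strided subsampling (a real network would pool/stride).
    c = context // 2
    ctx = volume[z - c:z + c:factor, y - c:y + c:factor, x - c:x + c:factor]
    return target, ctx

vol = np.zeros((128, 128, 128), dtype=np.float32)
tgt, ctx = make_multires_pair(vol, center=(64, 64, 64))
print(tgt.shape, ctx.shape)  # (32, 32, 32) (32, 32, 32)
```

Both inputs end up the same tensor size, but the context crop covers twice the physical field of view in each axis, which is what lets the context network supply the spatial cues the high-resolution target network lacks.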
Results: Our best proposed network achieves a median DSC of 0.86 across
all 125 segmented bone classes and reduces confusion among similar-looking
bones in different locations. These results outperform our previously published
3D U-Net baseline results on the task and distinct-bone segmentation results
reported by other groups.
Conclusion: The presented multi-resolution 3D U-Nets address current
shortcomings in bone segmentation from upper-body CT scans by allowing for
capturing a larger field of view while avoiding the cubic growth of input
voxels and intermediate computations that quickly outgrows the computational
capacities in 3D. The approach thus improves the accuracy and efficiency of
distinct bone segmentation from upper-body CT.
Related papers
- DDU-Net: A Domain Decomposition-based CNN for High-Resolution Image Segmentation on Multiple GPUs [46.873264197900916]
A domain decomposition-based U-Net architecture is introduced, which partitions input images into non-overlapping patches.
A communication network is added to facilitate inter-patch information exchange to enhance the understanding of spatial context.
Results show that the approach achieves a 2-3% higher intersection over union (IoU) score compared to the same network without inter-patch communication.
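A minimal NumPy sketch of the decomposition idea (not DDU-Net's actual layers; the global-mean exchange below is a stand-in for its communication network, and all shapes are illustrative):

```python
import numpy as np

def partition(img, p):
    """Split an (H, W) image into non-overlapping p x p patches."""
    H, W = img.shape
    return img.reshape(H // p, p, W // p, p).swapaxes(1, 2).reshape(-1, p, p)

def exchange_context(patch_feats):
    """Toy inter-patch communication: append the mean feature across all
    patches to each patch's own pooled feature, so every patch sees some
    global spatial context despite being processed independently."""
    global_ctx = patch_feats.mean(axis=0)
    return np.stack([np.concatenate([f, global_ctx]) for f in patch_feats])

img = np.arange(64, dtype=np.float32).reshape(8, 8)
patches = partition(img, 4)                      # 4 patches of 4 x 4
feats = patches.mean(axis=(1, 2))[:, None]       # one pooled feature per patch
print(exchange_context(feats).shape)             # (4, 2)
```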
arXiv Detail & Related papers (2024-07-31T01:07:21Z)
- Leveraging Frequency Domain Learning in 3D Vessel Segmentation [50.54833091336862]
In this study, we leverage Fourier domain learning as a substitute for multi-scale convolutional kernels in 3D hierarchical segmentation models.
We show that our novel network achieves remarkable dice performance (84.37% on ASACA500 and 80.32% on ImageCAS) in tubular vessel segmentation tasks.
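The substitution can be illustrated with a toy frequency-domain layer (hypothetical shapes and weight; not the study's architecture): a single learnable multiplication in Fourier space acts on all spatial frequencies at once, mimicking convolutions at many scales.

```python
import numpy as np

def fourier_filter(x, weight):
    """Toy frequency-domain layer: multiply the real FFT of the input by a
    learnable weight and transform back. One global multiplication in
    frequency space can stand in for multi-scale spatial convolutions."""
    X = np.fft.rfftn(x)
    return np.fft.irfftn(X * weight, s=x.shape)

x = np.random.default_rng(0).standard_normal((16, 16, 16))
w = np.ones((16, 16, 9))   # rfftn of a (16, 16, 16) volume has shape (16, 16, 9)
y = fourier_filter(x, w)
print(np.allclose(y, x))   # an all-ones weight is the identity filter
```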
arXiv Detail & Related papers (2024-01-11T19:07:58Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- Multi-organ Segmentation Network with Adversarial Performance Validator [10.775440368500416]
This paper introduces an adversarial performance validation network into a 2D-to-3D segmentation framework.
The proposed network converts the 2D-coarse result to 3D high-quality segmentation masks in a coarse-to-fine manner, allowing joint optimization to improve segmentation accuracy.
Experiments on the NIH pancreas segmentation dataset demonstrate the proposed network achieves state-of-the-art accuracy on small organ segmentation and outperforms the previous best.
arXiv Detail & Related papers (2022-04-16T18:00:29Z)
- Pyramid Grafting Network for One-Stage High Resolution Saliency Detection [29.013012579688347]
We propose a one-stage framework called Pyramid Grafting Network (PGNet) to extract features from different resolution images independently.
An attention-based Cross-Model Grafting Module (CMGM) is proposed to enable CNN branch to combine broken detailed information more holistically.
We contribute a new Ultra-High-Resolution Saliency Detection dataset UHRSD, containing 5,920 images at 4K-8K resolutions.
arXiv Detail & Related papers (2022-04-11T12:22:21Z)
- MDA-Net: Multi-Dimensional Attention-Based Neural Network for 3D Image Segmentation [4.221871357181261]
We propose a multi-dimensional attention network (MDA-Net) to efficiently integrate slice-wise, spatial, and channel-wise attention into a U-Net based network.
We evaluate our model on the MICCAI iSeg and IBSR datasets, and the experimental results demonstrate consistent improvements over existing methods.
arXiv Detail & Related papers (2021-05-10T16:58:34Z)
- CoTr: Efficiently Bridging CNN and Transformer for 3D Medical Image Segmentation [95.51455777713092]
Convolutional neural networks (CNNs) have been the de facto standard for 3D medical image segmentation.
We propose a novel framework that efficiently bridges a Convolutional neural network and a Transformer (CoTr) for accurate 3D medical image segmentation.
arXiv Detail & Related papers (2021-03-04T13:34:22Z)
- TSGCNet: Discriminative Geometric Feature Learning with Two-Stream Graph Convolutional Network for 3D Dental Model Segmentation [141.2690520327948]
We propose a two-stream graph convolutional network (TSGCNet) to learn multi-view information from different geometric attributes.
We evaluate our proposed TSGCNet on a real-patient dataset of dental models acquired by 3D intraoral scanners.
arXiv Detail & Related papers (2020-12-26T08:02:56Z)
- Learning Hybrid Representations for Automatic 3D Vessel Centerline Extraction [57.74609918453932]
Automatic blood vessel extraction from 3D medical images is crucial for vascular disease diagnoses.
Existing methods may suffer from discontinuities of extracted vessels when segmenting such thin tubular structures from 3D images.
We argue that preserving the continuity of extracted vessels requires to take into account the global geometry.
We propose a hybrid representation learning approach to address this challenge.
arXiv Detail & Related papers (2020-12-14T05:22:49Z)
- KiU-Net: Overcomplete Convolutional Architectures for Biomedical Image and Volumetric Segmentation [71.79090083883403]
"Traditional" encoder-decoder based approaches perform poorly in detecting smaller structures and are unable to segment boundary regions precisely.
We propose KiU-Net which has two branches: (1) an overcomplete convolutional network Kite-Net which learns to capture fine details and accurate edges of the input, and (2) U-Net which learns high level features.
The proposed method achieves a better performance as compared to all the recent methods with an additional benefit of fewer parameters and faster convergence.
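The two-branch idea can be sketched in one dimension (my simplification, not KiU-Net's actual layers; the box filter stands in for a convolutional block): the overcomplete branch filters on an upsampled grid to preserve fine structures, while the undercomplete branch filters on a downsampled grid to capture coarse context, and the outputs are fused.

```python
import numpy as np

def smooth(x):
    # Stand-in for a conv layer: 1-D box filter of width 3.
    return np.convolve(x, np.ones(3) / 3, mode="same")

def kiunet_sketch(x):
    """Toy two-branch fusion: the Kite-Net-like branch upsamples before
    filtering (overcomplete, preserves small structures); the U-Net-like
    branch downsamples before filtering (undercomplete, coarse context).
    The two outputs are fused by simple averaging."""
    kite = smooth(np.repeat(x, 2))[::2]    # upsample -> filter -> back down
    unet = np.repeat(smooth(x[::2]), 2)    # downsample -> filter -> back up
    return (kite + unet) / 2

x = np.linspace(0.0, 1.0, 8)
print(kiunet_sketch(x).shape)  # (8,)
```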
arXiv Detail & Related papers (2020-10-04T19:23:33Z)
- DDU-Nets: Distributed Dense Model for 3D MRI Brain Tumor Segmentation [27.547646527286886]
Three patterns of distributed dense connections (DDCs) are proposed to enhance feature reuse and propagation of CNNs.
For better detecting and segmenting brain tumors from 3D MR images, CNN-based models embedded with DDCs (DDU-Nets) are trained efficiently from pixel to pixel.
The proposed method is evaluated on the BraTS 2019 dataset with results demonstrating the effectiveness of the DDU-Nets.
arXiv Detail & Related papers (2020-03-03T05:08:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.