3D-U-SAM Network For Few-shot Tooth Segmentation in CBCT Images
- URL: http://arxiv.org/abs/2309.11015v3
- Date: Wed, 28 Feb 2024 04:35:38 GMT
- Title: 3D-U-SAM Network For Few-shot Tooth Segmentation in CBCT Images
- Authors: Yifu Zhang and Zuozhu Liu and Yang Feng and Renjing Xu
- Abstract summary: We propose a novel 3D-U-SAM network for 3D dental image segmentation.
To solve the problem of using 2D pre-trained weights on 3D datasets, we adopt a convolution approximation method.
The effectiveness of the proposed method is demonstrated in ablation experiments, comparison experiments, and sample size experiments.
- Score: 22.86724024199165
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate representation of tooth position is extremely important in
treatment. 3D dental image segmentation is a widely used method; however,
labelled 3D dental datasets are a scarce resource, so in many cases this task
faces a small-sample problem. To this end, we address this problem with a
pretrained SAM and propose a novel 3D-U-SAM network for 3D dental image
segmentation. Specifically, to solve the problem of using 2D pre-trained
weights on 3D datasets, we adopt a convolution approximation method; to retain
more details, we design skip connections, with reference to U-Net, that fuse
features at all levels. The effectiveness of the proposed method is
demonstrated in ablation experiments, comparison experiments, and sample size
experiments.
Related papers
- Bayesian Self-Training for Semi-Supervised 3D Segmentation [59.544558398992386]
3D segmentation is a core problem in computer vision.
However, densely labeling 3D point clouds for fully-supervised training remains too labor-intensive and expensive.
Semi-supervised training provides a more practical alternative, where only a small set of labeled data is given, accompanied by a larger unlabeled set.
arXiv Detail & Related papers (2024-09-12T14:54:31Z) - A Multi-Stage Framework for 3D Individual Tooth Segmentation in Dental CBCT [7.6057981800052845]
Cone beam computed tomography (CBCT) is a common way of diagnosing dental diseases.
Deep learning based methods have achieved convincing results in medical image processing.
We propose a multi-stage framework for 3D individual tooth segmentation in dental CBCT.
arXiv Detail & Related papers (2024-07-15T04:23:28Z) - Cross-Dimensional Medical Self-Supervised Representation Learning Based on a Pseudo-3D Transformation [68.60747298865394]
We propose a new cross-dimensional SSL framework based on a pseudo-3D transformation (CDSSL-P3D)
Specifically, we introduce an image transformation based on the im2col algorithm, which converts 2D images into a format consistent with 3D data.
This transformation enables seamless integration of 2D and 3D data, and facilitates cross-dimensional self-supervised learning for 3D medical image analysis.
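The im2col algorithm referenced above rearranges overlapping 2D patches into columns; stacking those columns along a new axis yields an array that 3D operators can consume. The NumPy sketch below shows this generic idea only, assuming a single-channel input; it is not the CDSSL-P3D pipeline itself.

```python
import numpy as np

def im2col_pseudo3d(image: np.ndarray, patch: int = 3, stride: int = 1) -> np.ndarray:
    """Generic im2col: extract overlapping patch x patch windows from a 2D
    image and stack them along a new leading axis, yielding a pseudo-3D
    array of shape (patch*patch, out_h, out_w). Each "depth" slice holds
    one position within the sliding window. (Illustration of the im2col
    idea only; the CDSSL-P3D transformation itself may differ.)"""
    h, w = image.shape
    out_h = (h - patch) // stride + 1
    out_w = (w - patch) // stride + 1
    cols = np.empty((patch * patch, out_h, out_w), dtype=image.dtype)
    for i in range(patch):
        for j in range(patch):
            cols[i * patch + j] = image[i:i + stride * out_h:stride,
                                        j:j + stride * out_w:stride]
    return cols

# Example: a 64x64 slice becomes a 9x62x62 pseudo-volume with 3x3 windows.
vol = im2col_pseudo3d(np.random.rand(64, 64).astype(np.float32))
print(vol.shape)  # (9, 62, 62)
```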
arXiv Detail & Related papers (2024-06-03T02:57:25Z) - Generative Enhancement for 3D Medical Images [74.17066529847546]
We propose GEM-3D, a novel generative approach to the synthesis of 3D medical images.
Our method begins with a 2D slice, termed the informed slice, which serves as the patient prior, and propagates the generation process using a 3D segmentation mask.
By decomposing the 3D medical images into masks and patient prior information, GEM-3D offers a flexible yet effective solution for generating versatile 3D images.
arXiv Detail & Related papers (2024-03-19T15:57:04Z) - TFormer: 3D Tooth Segmentation in Mesh Scans with Geometry Guided
Transformer [37.47317212620463]
Optical Intra-oral Scanners (IOS) are widely used in digital dentistry, providing 3-Dimensional (3D) and high-resolution geometrical information of dental crowns and the gingiva.
Previous methods are error-prone at complicated tooth-tooth or tooth-gingiva boundaries, and usually produce unsatisfactory results across different patients.
We propose a novel method based on 3D transformer architectures that is evaluated with large-scale and high-resolution 3D IOS datasets.
arXiv Detail & Related papers (2022-10-29T15:20:54Z) - Segmentation of 3D Dental Images Using Deep Learning [0.0]
3D image segmentation is a recent and crucial step in many medical analysis and recognition schemes.
This paper provides a multi-phase Deep Learning-based system that hybridizes various efficient methods in order to get the best 3D segmentation output.
arXiv Detail & Related papers (2022-07-19T23:17:54Z) - CTooth: A Fully Annotated 3D Dataset and Benchmark for Tooth Volume
Segmentation on Cone Beam Computed Tomography Images [19.79983193894742]
3D tooth segmentation is a prerequisite for computer-aided dental diagnosis and treatment.
Deep learning-based segmentation methods produce convincing results, but they require a large quantity of ground-truth annotations for training.
In this paper, we establish CTooth, a fully annotated cone beam computed tomography dataset with gold-standard tooth annotations.
arXiv Detail & Related papers (2022-06-17T13:48:35Z) - A fully automated method for 3D individual tooth identification and
segmentation in dental CBCT [1.567576360103422]
This paper proposes a fully automated method of identifying and segmenting 3D individual teeth from dental CBCT images.
The proposed method addresses the difficulty of this task by developing a deep learning-based hierarchical multi-step model.
Experimental results showed that the proposed method achieved an F1-score of 93.35% for tooth identification and a Dice similarity coefficient of 94.79% for individual 3D tooth segmentation.
arXiv Detail & Related papers (2021-02-11T15:07:23Z) - TSGCNet: Discriminative Geometric Feature Learning with Two-Stream
Graph Convolutional Network for 3D Dental Model Segmentation [141.2690520327948]
We propose a two-stream graph convolutional network (TSGCNet) to learn multi-view information from different geometric attributes.
We evaluate our proposed TSGCNet on a real-patient dataset of dental models acquired by 3D intraoral scanners.
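As a rough illustration of the two-stream idea, the sketch below feeds vertex coordinates and vertex normals through separate graph-convolution streams and fuses them for per-vertex classification. The simple GCN layer, layer sizes, and class count are assumptions for illustration, not TSGCNet's actual architecture.

```python
import torch
import torch.nn as nn

class SimpleGraphConv(nn.Module):
    """One GCN-style layer: aggregate neighbor features with a row-normalized
    adjacency matrix, then apply a linear projection."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):          # x: (N, in_dim), adj: (N, N) row-normalized
        return torch.relu(self.lin(adj @ x))

class TwoStreamSegHead(nn.Module):
    """Hypothetical two-stream sketch: one stream sees vertex coordinates,
    the other vertex normals; features are fused before per-vertex
    classification. Sizes are illustrative."""
    def __init__(self, num_classes=17, hidden=64):
        super().__init__()
        self.coord_stream = SimpleGraphConv(3, hidden)
        self.normal_stream = SimpleGraphConv(3, hidden)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, coords, normals, adj):
        c = self.coord_stream(coords, adj)
        n = self.normal_stream(normals, adj)
        return self.classifier(torch.cat([c, n], dim=-1))  # per-vertex logits
```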
arXiv Detail & Related papers (2020-12-26T08:02:56Z) - Volumetric Medical Image Segmentation: A 3D Deep Coarse-to-fine
Framework and Its Adversarial Examples [74.92488215859991]
We propose a novel 3D-based coarse-to-fine framework to efficiently tackle these challenges.
The proposed 3D-based framework outperforms its 2D counterparts by a large margin, since it can leverage the rich spatial information along all three axes.
We conduct experiments on three datasets, the NIH pancreas dataset, the JHMI pancreas dataset and the JHMI pathological cyst dataset.
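A generic 3D coarse-to-fine inference loop can be sketched as follows: a coarse model localizes the target on a downsampled volume, a bounding box is cropped around it, and a fine model segments the full-resolution crop. The function below is an illustrative assumption about that pattern, not the paper's exact pipeline; `coarse_net` and `fine_net` are hypothetical models mapping (1, 1, D, H, W) inputs to per-voxel foreground probabilities at the input resolution.

```python
import torch
import torch.nn.functional as F

def coarse_to_fine_segment(volume, coarse_net, fine_net, scale=0.25, margin=8):
    """Generic 3D coarse-to-fine inference sketch (illustrative only)."""
    # Coarse stage: locate the foreground on a downsampled volume.
    small = F.interpolate(volume, scale_factor=scale, mode="trilinear",
                          align_corners=False)
    coarse_prob = coarse_net(small)
    mask = F.interpolate(coarse_prob, size=volume.shape[2:],
                         mode="trilinear", align_corners=False) > 0.5
    if not mask.any():
        return torch.zeros_like(volume)
    # Crop a margin-padded bounding box around the coarse prediction.
    idx = mask[0, 0].nonzero()
    lo = (idx.min(dim=0).values - margin).clamp(min=0).tolist()
    hi = (idx.max(dim=0).values + margin).tolist()
    crop = volume[:, :, lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    # Fine stage: segment the full-resolution crop and paste it back.
    fine_prob = fine_net(crop)
    out = torch.zeros_like(volume)
    out[:, :, lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = fine_prob
    return out
```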
arXiv Detail & Related papers (2020-10-29T15:39:19Z) - Reinforced Axial Refinement Network for Monocular 3D Object Detection [160.34246529816085]
Monocular 3D object detection aims to extract the 3D position and properties of objects from a 2D input image.
Conventional approaches sample 3D bounding boxes from the space and infer the relationship between the target object and each of them; however, the probability of effective samples is relatively small in 3D space.
We propose to start with an initial prediction and refine it gradually towards the ground truth, with only one 3D parameter changed in each step.
This requires designing a policy which gets a reward after several steps, and thus we adopt reinforcement learning to optimize it.
arXiv Detail & Related papers (2020-08-31T17:10:48Z)
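The step-by-step refinement idea, with one 3D box parameter adjusted per step and a reward accumulated over several steps, can be sketched as below. The box parameterization, step sizes, and the oracle policy used for demonstration are illustrative assumptions; the paper optimizes a learned policy with reinforcement learning rather than the hand-crafted one shown here.

```python
import numpy as np

# Hypothetical sketch: a 3D box is (x, y, z, w, h, l, yaw); at every step a
# policy picks one parameter and a direction, and the box is nudged by a
# fixed increment. The reward is the reduction in L1 error w.r.t. the
# ground truth, available only during training.
STEP = np.array([0.1, 0.1, 0.1, 0.05, 0.05, 0.05, np.deg2rad(2)])

def refine(box, policy, gt, n_steps=20):
    box = box.copy()
    total_reward = 0.0
    for _ in range(n_steps):
        param, direction = policy(box)   # param in [0, 7), direction in {-1, +1}
        prev_err = np.abs(box - gt).sum()
        box[param] += direction * STEP[param]
        total_reward += prev_err - np.abs(box - gt).sum()
    return box, total_reward

# Trivial "oracle" policy for demonstration: move the worst parameter toward
# the ground truth (a learned policy would not see `gt`).
def oracle_policy_factory(gt):
    def policy(box):
        param = int(np.argmax(np.abs(box - gt)))
        return param, 1 if gt[param] > box[param] else -1
    return policy

init = np.array([1.0, 2.0, 0.5, 1.6, 1.5, 4.0, 0.0])
gt   = np.array([1.3, 1.8, 0.6, 1.7, 1.4, 3.8, 0.1])
refined, reward = refine(init, oracle_policy_factory(gt), gt)
```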