ToothInpaintor: Tooth Inpainting from Partial 3D Dental Model and 2D
Panoramic Image
- URL: http://arxiv.org/abs/2211.15502v1
- Date: Fri, 25 Nov 2022 18:15:22 GMT
- Title: ToothInpaintor: Tooth Inpainting from Partial 3D Dental Model and 2D
Panoramic Image
- Authors: Yuezhi Yang, Zhiming Cui, Changjian Li, Wenping Wang
- Abstract summary: In orthodontic treatment, a full tooth model consisting of both the crown and root is indispensable.
In this paper, we propose a neural network, called ToothInpaintor, that takes as input a partial 3D dental model and a 2D panoramic image.
We successfully project an input to the learned latent space via neural optimization to obtain the full tooth model conditioned on the input.
- Score: 35.72913439096702
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In orthodontic treatment, a full tooth model consisting of both the crown and
root is indispensable in making the treatment plan. However, acquiring tooth
root information to obtain the full tooth model from CBCT images is sometimes
restricted due to the massive radiation of CBCT scanning. Thus, reconstructing
the full tooth shape from the ready-to-use input, e.g., the partial intra-oral
scan and the 2D panoramic image, is an applicable and valuable solution. In
this paper, we propose a neural network, called ToothInpaintor, that takes as
input a partial 3D dental model and a 2D panoramic image and reconstructs the
full tooth model with high-quality root(s). Technically, we utilize the
implicit representation for both the 3D and 2D inputs, and learn a latent space
of the full tooth shapes. At test time, given an input, we successfully project
it to the learned latent space via neural optimization to obtain the full tooth
model conditioned on the input. To help find the robust projection, a novel
adversarial learning module is exploited in our pipeline. We extensively
evaluate our method on a dataset collected from real-world clinics. The
evaluation, comparison, and comprehensive ablation studies demonstrate that our
approach produces accurate complete tooth models robustly and outperforms the
state-of-the-art methods.
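The projection step described in the abstract can be pictured as auto-decoder-style test-time optimization: a latent code is optimized so that a frozen, pretrained implicit decoder explains the observed partial crown. The sketch below is an illustration only, not the authors' implementation; the decoder architecture, loss weights, and zero-level-set data term are assumptions, and the 2D panoramic conditioning and the adversarial projection module from the paper are omitted.

```python
# Minimal sketch (not the authors' code): projecting a partial observation into a
# learned latent space of full tooth shapes via neural optimization, in the spirit
# of DeepSDF-style auto-decoders. All names, shapes, and hyperparameters are assumed.
import torch
import torch.nn as nn

class ImplicitToothDecoder(nn.Module):
    """Hypothetical pretrained decoder: (latent code z, query point x) -> SDF value."""
    def __init__(self, latent_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, z, pts):
        # z: (latent_dim,), pts: (N, 3) -> (N,) signed distances
        z_tiled = z.unsqueeze(0).expand(pts.shape[0], -1)
        return self.net(torch.cat([z_tiled, pts], dim=-1)).squeeze(-1)

def project_to_latent_space(decoder, crown_pts, steps=500, lr=5e-3, prior_w=1e-4):
    """Optimize a latent code so the frozen decoder explains the partial crown scan.

    crown_pts: (N, 3) points sampled on the observed crown surface, where the
    target signed distance is zero.
    """
    decoder.eval()
    for p in decoder.parameters():
        p.requires_grad_(False)

    z = torch.zeros(128, requires_grad=True)      # latent code to optimize
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        sdf_pred = decoder(z, crown_pts)          # predicted SDF at observed surface points
        data_loss = sdf_pred.abs().mean()         # surface points should have SDF ~ 0
        prior_loss = prior_w * z.pow(2).sum()     # keep z close to the learned prior
        (data_loss + prior_loss).backward()
        opt.step()
    return z.detach()

if __name__ == "__main__":
    dec = ImplicitToothDecoder()                  # stands in for a trained model
    partial_scan = torch.rand(2048, 3) - 0.5      # dummy crown point cloud
    z_star = project_to_latent_space(dec, partial_scan)
    print("optimized latent code shape:", z_star.shape)
```

After such an optimization, the completed tooth surface would be extracted from the decoder's zero level set (e.g., with marching cubes).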
Related papers
- TeethDreamer: 3D Teeth Reconstruction from Five Intra-oral Photographs [45.0864129371874]
We propose a 3D teeth reconstruction framework, named TeethDreamer, to restore the shape and position of the upper and lower teeth.
Given five intra-oral photographs, our approach first leverages a large diffusion model's prior knowledge to generate novel multi-view images.
To ensure the 3D consistency across generated views, we integrate a 3D-aware feature attention mechanism in the reverse diffusion process.
arXiv Detail & Related papers (2024-07-16T06:24:32Z)
- 3D Structure-guided Network for Tooth Alignment in 2D Photograph [47.51314162367702]
A 2D photograph showing how the teeth will look once aligned, produced before orthodontic treatment begins, is crucial for effective dentist-patient communication.
We propose a 3D structure-guided tooth alignment network that takes 2D photographs as input and aligns the teeth within the 2D image space.
We evaluate our network on various facial photographs, demonstrating its exceptional performance and strong applicability within the orthodontic industry.
arXiv Detail & Related papers (2023-10-17T09:44:30Z)
- An Implicit Parametric Morphable Dental Model [79.29420177904022]
We present the first parametric 3D morphable dental model for both teeth and gum.
It is based on a component-wise representation for each tooth and the gum, together with a learnable latent code for each of these components.
Our reconstruction quality is on par with the most advanced global implicit representations while enabling novel applications.
arXiv Detail & Related papers (2022-11-21T12:23:54Z)
- Two-Stage Mesh Deep Learning for Automated Tooth Segmentation and Landmark Localization on 3D Intraoral Scans [56.55092443401416]
iMeshSegNet, used in the first stage of TS-MDL, reached an average Dice similarity coefficient (DSC) of 0.953 ± 0.076, significantly outperforming the original MeshSegNet.
PointNet-Reg achieved a mean absolute error (MAE) of 0.623 ± 0.718 mm between predicted and ground-truth positions of 44 landmarks, outperforming other networks for landmark detection (a minimal DSC/MAE computation sketch follows this list).
arXiv Detail & Related papers (2021-09-24T13:00:26Z)
- A fully automated method for 3D individual tooth identification and segmentation in dental CBCT [1.567576360103422]
This paper proposes a fully automated method of identifying and segmenting 3D individual teeth from dental CBCT images.
The proposed method addresses the difficulty of separating individual teeth in CBCT images by developing a deep learning-based hierarchical multi-step model.
Experimental results showed that the proposed method achieved an F1-score of 93.35% for tooth identification and a Dice similarity coefficient of 94.79% for individual 3D tooth segmentation.
arXiv Detail & Related papers (2021-02-11T15:07:23Z)
- TSGCNet: Discriminative Geometric Feature Learning with Two-Stream Graph Convolutional Network for 3D Dental Model Segmentation [141.2690520327948]
We propose a two-stream graph convolutional network (TSGCNet) to learn multi-view information from different geometric attributes.
We evaluate our proposed TSGCNet on a real-patient dataset of dental models acquired by 3D intraoral scanners.
arXiv Detail & Related papers (2020-12-26T08:02:56Z)
- Oral-3D: Reconstructing the 3D Bone Structure of Oral Cavity from 2D Panoramic X-ray [17.34835093235681]
We propose a framework, named Oral-3D, to reconstruct the 3D oral cavity from a single PX image and prior information of the dental arch.
We show that Oral-3D can efficiently and effectively reconstruct the 3D oral structure and show critical information in clinical applications.
arXiv Detail & Related papers (2020-03-18T18:02:57Z)
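The TS-MDL entry above reports a Dice similarity coefficient for segmentation and a landmark MAE in millimetres. The snippet below is a minimal sketch of how these two standard metrics are computed; the label arrays and landmark coordinates are toy data, not results from any of the papers.

```python
# Minimal sketch of the two metrics quoted for TS-MDL: Dice similarity coefficient
# (DSC) for binary segmentation labels and mean absolute error (MAE) over paired
# landmark positions. All inputs here are randomly generated toy data.
import numpy as np

def dice_coefficient(pred, gt):
    """DSC = 2 * |P intersect G| / (|P| + |G|) for binary label arrays."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def landmark_mae(pred_pts, gt_pts):
    """Mean Euclidean distance over paired landmarks (in mm if inputs are in mm)."""
    return np.linalg.norm(pred_pts - gt_pts, axis=1).mean()

if __name__ == "__main__":
    pred_labels = np.random.rand(10000) > 0.5          # toy per-cell tooth labels
    gt_labels = np.random.rand(10000) > 0.5
    print("DSC:", dice_coefficient(pred_labels, gt_labels))

    gt_landmarks = np.random.rand(44, 3) * 10.0        # 44 landmarks, mm coordinates
    pred_landmarks = gt_landmarks + np.random.normal(scale=0.5, size=(44, 3))
    print("MAE (mm):", landmark_mae(pred_landmarks, gt_landmarks))
```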