3D Structure-guided Network for Tooth Alignment in 2D Photograph
- URL: http://arxiv.org/abs/2310.11106v2
- Date: Thu, 8 Aug 2024 04:03:27 GMT
- Title: 3D Structure-guided Network for Tooth Alignment in 2D Photograph
- Authors: Yulong Dou, Lanzhuju Mei, Dinggang Shen, Zhiming Cui
- Abstract summary: A 2D photograph depicting aligned teeth prior to orthodontic treatment is crucial for effective dentist-patient communication.
We propose a 3D structure-guided tooth alignment network that takes 2D photographs as input and aligns the teeth within the 2D image space.
We evaluate our network on various facial photographs, demonstrating its exceptional performance and strong applicability within the orthodontic industry.
- Score: 47.51314162367702
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Orthodontics focuses on rectifying misaligned teeth (i.e., malocclusions), which affect both masticatory function and aesthetics. However, orthodontic treatment often involves complex, lengthy procedures. As such, generating a 2D photograph depicting aligned teeth prior to orthodontic treatment is crucial for effective dentist-patient communication and, more importantly, for encouraging patients to accept orthodontic intervention. In this paper, we propose a 3D structure-guided tooth alignment network that takes 2D photographs as input (e.g., photos captured by smartphones) and aligns the teeth within the 2D image space to generate an orthodontic comparison photograph featuring aesthetically pleasing, aligned teeth. Notably, while the process operates within a 2D image space, our method employs 3D intra-oral scanning models collected in clinics to learn about orthodontic treatment, i.e., projecting the pre- and post-orthodontic 3D tooth structures onto 2D tooth contours, followed by a diffusion model to learn the mapping relationship. Ultimately, the aligned tooth contours are leveraged to guide the generation of a 2D photograph with aesthetically pleasing, aligned teeth and realistic textures. We evaluate our network on various facial photographs, demonstrating its exceptional performance and strong applicability within the orthodontic industry.
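The 3D-to-2D guidance the abstract describes (projecting pre- and post-orthodontic tooth structures onto 2D tooth contours) can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it assumes a hypothetical pinhole camera with made-up intrinsics and extrinsics and simply projects 3D tooth-boundary points into a binary contour mask of the kind that could condition a contour-to-photo generator.

```python
# Minimal sketch (not the paper's code): project 3D tooth-boundary points into a
# 2D contour mask with an assumed pinhole camera model.
import numpy as np

def project_points(points_3d, K, R, t):
    """Project Nx3 world-space points to Nx2 pixel coordinates."""
    cam = (R @ points_3d.T + t.reshape(3, 1)).T   # world -> camera frame
    uv = (K @ cam.T).T                            # camera -> image plane
    return uv[:, :2] / uv[:, 2:3]                 # perspective divide

def rasterize_contour(points_2d, height, width):
    """Splat projected contour points into a binary 2D contour mask."""
    mask = np.zeros((height, width), dtype=np.uint8)
    px = np.round(points_2d).astype(int)
    ok = (px[:, 0] >= 0) & (px[:, 0] < width) & (px[:, 1] >= 0) & (px[:, 1] < height)
    mask[px[ok, 1], px[ok, 0]] = 1
    return mask

# Toy usage: a ring of boundary points stands in for one tooth's 3D contour.
theta = np.linspace(0, 2 * np.pi, 200)
tooth_contour = np.stack([np.cos(theta), np.sin(theta), np.full_like(theta, 5.0)], axis=1)
K = np.array([[500.0, 0, 128], [0, 500.0, 128], [0, 0, 1]])  # assumed intrinsics
R, t = np.eye(3), np.zeros(3)                                # assumed extrinsics
mask = rasterize_contour(project_points(tooth_contour, K, R, t), 256, 256)
print(mask.sum(), "contour pixels")
```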
Related papers
- TeethDreamer: 3D Teeth Reconstruction from Five Intra-oral Photographs [45.0864129371874]
We propose a 3D teeth reconstruction framework, named TeethDreamer, to restore the shape and position of the upper and lower teeth.
Given five intra-oral photographs, our approach first leverages a large diffusion model's prior knowledge to generate novel multi-view images.
To ensure the 3D consistency across generated views, we integrate a 3D-aware feature attention mechanism in the reverse diffusion process.
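The summary does not spell out the attention design; one generic way to couple features across generated views is plain cross-view attention, sketched below in PyTorch. The module name, shapes, and dimensions are illustrative assumptions, not TeethDreamer's actual architecture.

```python
# Hypothetical sketch of cross-view feature attention: every view's feature tokens
# attend to all views' tokens, encouraging consistency. Not TeethDreamer's code.
import torch
import torch.nn as nn

class CrossViewAttention(nn.Module):
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, views, H, W, C); flatten all views' pixels into one token sequence
        b, v, h, w, c = feats.shape
        tokens = feats.reshape(b, v * h * w, c)
        attended, _ = self.attn(self.norm(tokens), self.norm(tokens), self.norm(tokens))
        return (tokens + attended).reshape(b, v, h, w, c)  # residual connection

# Toy usage: 5 views (matching the five intra-oral photographs), tiny 8x8 feature maps.
feats = torch.randn(1, 5, 8, 8, 64)
print(CrossViewAttention(64)(feats).shape)  # torch.Size([1, 5, 8, 8, 64])
```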
arXiv Detail & Related papers (2024-07-16T06:24:32Z)
- SSR-2D: Semantic 3D Scene Reconstruction from 2D Images [54.46126685716471]
In this work, we explore a central 3D scene modeling task, namely, semantic scene reconstruction without using any 3D annotations.
The key idea of our approach is to design a trainable model that employs both incomplete 3D reconstructions and their corresponding source RGB-D images.
Our method achieves state-of-the-art performance in semantic scene completion on two large-scale benchmark datasets, MatterPort3D and ScanNet.
arXiv Detail & Related papers (2023-02-07T17:47:52Z)
- OrthoGAN: High-Precision Image Generation for Teeth Orthodontic Visualization [17.647161676763478]
We build an efficient system for simulating virtual teeth alignment effects in a frontal facial image.
We design a multi-modal encoder-decoder based generative model to synthesize identity-preserving frontal facial images with aligned teeth.
arXiv Detail & Related papers (2022-12-29T03:12:47Z)
- ToothInpaintor: Tooth Inpainting from Partial 3D Dental Model and 2D Panoramic Image [35.72913439096702]
In orthodontic treatment, a full tooth model consisting of both the crown and root is indispensable.
In this paper, we propose a neural network, called ToothInpaintor, that takes as input a partial 3D dental model and a 2D panoramic image.
We successfully project an input to the learned latent space via neural optimization to obtain the full tooth model conditioned on the input.
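The latent-space projection step reads as an auto-decoder-style inversion; a minimal, hypothetical PyTorch sketch of that generic idea follows (the decoder, loss, and shapes are placeholders rather than the ToothInpaintor model).

```python
# Hedged sketch of latent-space projection via neural optimization (not ToothInpaintor's code):
# optimize a latent code so that a frozen, pretrained decoder matches the partial observation.
import torch

def project_to_latent(decoder, partial_obs, mask, latent_dim=256, steps=200, lr=1e-2):
    """Find a latent code whose decoded output matches partial_obs where mask == 1."""
    z = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        pred = decoder(z)                                 # full reconstruction from the latent
        loss = ((pred - partial_obs) ** 2 * mask).mean()  # supervise only observed regions
        loss.backward()
        opt.step()
    return z.detach(), decoder(z).detach()                # latent code and completed output

# Toy usage with a stand-in linear "decoder" and a half-observed target.
decoder = torch.nn.Sequential(torch.nn.Linear(256, 1024), torch.nn.Tanh())
target = torch.randn(1, 1024)
mask = torch.zeros(1, 1024)
mask[:, :512] = 1.0                                       # only the first half is observed
z, completed = project_to_latent(decoder, target, mask)
print(z.shape, completed.shape)
```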
arXiv Detail & Related papers (2022-11-25T18:15:22Z)
- An Implicit Parametric Morphable Dental Model [79.29420177904022]
We present the first parametric 3D morphable dental model for both teeth and gum.
It is based on a component-wise representation for each tooth and the gum, together with a learnable latent code for each of such components.
Our reconstruction quality is on par with the most advanced global implicit representations while enabling novel applications.
arXiv Detail & Related papers (2022-11-21T12:23:54Z)
- Teeth3DS: a benchmark for teeth segmentation and labeling from intra-oral 3D scans [10.404680576890488]
This article introduces the first public benchmark, named Teeth3DS, created as part of the 3DTeethSeg 2022 MICCAI challenge.
Teeth3DS is made of 1800 intra-oral scans collected from 900 patients covering the upper and lower jaws separately, acquired and validated by orthodontists/dental surgeons with more than 5 years of professional experience.
arXiv Detail & Related papers (2022-10-12T11:18:35Z)
- RiCS: A 2D Self-Occlusion Map for Harmonizing Volumetric Objects [68.85305626324694]
Ray-marching in Camera Space (RiCS) is a new method that represents the self-occlusions of 3D foreground objects as a 2D self-occlusion map.
We show that our representation map not only allows us to enhance the image quality but also to model temporally coherent complex shadow effects.
arXiv Detail & Related papers (2022-05-14T05:35:35Z)
- Two-Stage Mesh Deep Learning for Automated Tooth Segmentation and Landmark Localization on 3D Intraoral Scans [56.55092443401416]
iMeshSegNet in the first stage of TS-MDL reached an average Dice similarity coefficient (DSC) of 0.953 ± 0.076, significantly outperforming the original MeshSegNet.
PointNet-Reg achieved a mean absolute error (MAE) of 0.623 ± 0.718 mm in distances between the prediction and ground truth for 44 landmarks, which is superior to other networks for landmark detection.
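For reference, both reported metrics are straightforward to compute; below are generic implementations, not the paper's evaluation code.

```python
# Generic implementations of the two reported metrics (not the paper's evaluation code).
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def landmark_mae(pred_pts: np.ndarray, gt_pts: np.ndarray) -> float:
    """Mean Euclidean distance between predicted and ground-truth landmarks."""
    return float(np.linalg.norm(pred_pts - gt_pts, axis=1).mean())

# Toy check.
print(dice_coefficient(np.array([1, 1, 0, 0]), np.array([1, 0, 0, 0])))  # ~0.667
print(landmark_mae(np.random.rand(44, 3), np.random.rand(44, 3)))
```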
arXiv Detail & Related papers (2021-09-24T13:00:26Z)
- Oral-3D: Reconstructing the 3D Bone Structure of Oral Cavity from 2D Panoramic X-ray [17.34835093235681]
We propose a framework, named Oral-3D, to reconstruct the 3D oral cavity from a single panoramic X-ray (PX) image and prior information of the dental arch.
We show that Oral-3D can efficiently and effectively reconstruct the 3D oral structure and reveal critical information for clinical applications.
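As a rough illustration of how a dental-arch prior relates a flat panoramic image to 3D space, the sketch below bends a 2D image onto an assumed parabolic arch curve; this is illustrative geometry only, not the Oral-3D network.

```python
# Illustrative geometry only (not Oral-3D's network): bend a flat panoramic image back
# onto a parabolic dental-arch curve, the kind of arch prior the paper builds on.
import numpy as np

def arch_curve(num_samples: int, width: float = 60.0, curvature: float = 0.02):
    """Sample a simple parabolic dental arch y = curvature * x^2 in the axial plane."""
    x = np.linspace(-width / 2, width / 2, num_samples)
    y = curvature * x ** 2
    return np.stack([x, y], axis=1)                       # (num_samples, 2)

def bend_panoramic(panoramic: np.ndarray, pixel_height: float = 0.5):
    """Map each panoramic column onto the arch, giving (x, y, z, intensity) samples."""
    rows, cols = panoramic.shape
    arch = arch_curve(cols)
    pts = []
    for j in range(cols):                                 # one arch position per image column
        for i in range(rows):                             # unroll the column vertically
            pts.append([arch[j, 0], arch[j, 1], (rows - i) * pixel_height, panoramic[i, j]])
    return np.array(pts)

# Toy usage: a random 64x128 "panoramic" image becomes a curved sheet of 3D samples.
cloud = bend_panoramic(np.random.rand(64, 128))
print(cloud.shape)  # (8192, 4)
```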
arXiv Detail & Related papers (2020-03-18T18:02:57Z)