OrthoGAN: High-Precision Image Generation for Teeth Orthodontic Visualization
- URL: http://arxiv.org/abs/2212.14162v1
- Date: Thu, 29 Dec 2022 03:12:47 GMT
- Title: OrthoGAN: High-Precision Image Generation for Teeth Orthodontic Visualization
- Authors: Feihong Shen, Jingjing Liu, Haizhen Li, Bing Fang, Chenglong Ma, Jin Hao, Yang Feng, Youyi Zheng
- Abstract summary: We build an efficient system for simulating virtual teeth alignment effects in a frontal facial image.
We design a multi-modal encoder-decoder based generative model to synthesize identity-preserving frontal facial images with aligned teeth.
- Score: 17.647161676763478
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Patients care about what their teeth will look like after orthodontic treatment. Orthodontists usually describe the expected tooth movement based on the patient's original smile images, which is often unconvincing. The growth of deep-learning generative models changes this situation: such models can visualize the outcome of orthodontic treatment and help patients foresee their future teeth and facial appearance. While previous studies mainly focus on 2D or 3D virtual treatment outcome (VTO) at a profile level, the problem of simulating the treatment outcome in a frontal facial image is poorly explored. In this paper, we build an efficient and accurate system for simulating virtual teeth alignment effects in a frontal facial image. Our system takes a frontal face image of a patient with visibly malpositioned teeth and the patient's 3D scanned teeth model as input, and progressively generates visual results of the patient's teeth given the specific orthodontic planning steps from the doctor (i.e., the specified translations and rotations of each individual tooth). We design a multi-modal encoder-decoder based generative model to synthesize identity-preserving frontal facial images with aligned teeth. In addition, the original image's color information is used to optimize the orthodontic outcomes, making the results more natural. We conduct extensive qualitative and clinical experiments, as well as a pilot study, to validate our method.
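The planning input described in the abstract (per-tooth translations and rotations specified by the doctor) can be sketched as a simple data structure plus a rigid transform. This is a hypothetical illustration only, not the paper's actual interface: the class name `ToothMove`, its fields, and the single-axis rotation are simplifying assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class ToothMove:
    """One hypothetical planning step for a single tooth:
    a translation in millimetres and a rotation about the z-axis
    in degrees. Real systems would specify a full 3D rotation."""
    tooth_id: int  # e.g. FDI tooth number
    tx: float
    ty: float
    tz: float
    rz_deg: float

def apply_move(point, move):
    """Rotate a 3D vertex of the tooth mesh about the z-axis,
    then translate it, returning the new position."""
    x, y, z = point
    a = math.radians(move.rz_deg)
    xr = x * math.cos(a) - y * math.sin(a)
    yr = x * math.sin(a) + y * math.cos(a)
    return (xr + move.tx, yr + move.ty, z + move.tz)

# Example: rotate an incisor vertex 90 degrees about z,
# then shift it 1 mm along x.
step = ToothMove(tooth_id=11, tx=1.0, ty=0.0, tz=0.0, rz_deg=90.0)
print(apply_move((1.0, 0.0, 0.0), step))
```

Applying a sequence of such steps to each tooth's mesh yields the intermediate geometries that a generative model could render progressively.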
Related papers
- TeethDreamer: 3D Teeth Reconstruction from Five Intra-oral Photographs [45.0864129371874]
We propose a 3D teeth reconstruction framework, named TeethDreamer, to restore the shape and position of the upper and lower teeth.
Given five intra-oral photographs, our approach first leverages a large diffusion model's prior knowledge to generate novel multi-view images.
To ensure the 3D consistency across generated views, we integrate a 3D-aware feature attention mechanism in the reverse diffusion process.
arXiv Detail & Related papers (2024-07-16T06:24:32Z)
- FitDiff: Robust monocular 3D facial shape and reflectance estimation using Diffusion Models [79.65289816077629]
We present FitDiff, a diffusion-based 3D facial avatar generative model.
Our model accurately generates relightable facial avatars, utilizing an identity embedding extracted from an "in-the-wild" 2D facial image.
Being the first 3D LDM conditioned on face recognition embeddings, FitDiff reconstructs relightable human avatars that can be used as-is in common rendering engines.
arXiv Detail & Related papers (2023-12-07T17:35:49Z)
- 3D Structure-guided Network for Tooth Alignment in 2D Photograph [47.51314162367702]
A 2D photograph previewing the patient's aligned teeth prior to orthodontic treatment is crucial for effective dentist-patient communication.
We propose a 3D structure-guided tooth alignment network that takes 2D photographs as input and aligns the teeth within the 2D image space.
We evaluate our network on various facial photographs, demonstrating its exceptional performance and strong applicability within the orthodontic industry.
arXiv Detail & Related papers (2023-10-17T09:44:30Z)
- Construction of unbiased dental template and parametric dental model for precision digital dentistry
We develop an unbiased dental template by constructing an accurate dental atlas from CBCT images with guidance of teeth segmentation.
A total of 159 CBCT images of real subjects are collected to perform the constructions.
arXiv Detail & Related papers (2023-04-07T09:39:03Z)
- ToothInpaintor: Tooth Inpainting from Partial 3D Dental Model and 2D Panoramic Image [35.72913439096702]
In orthodontic treatment, a full tooth model consisting of both the crown and root is indispensable.
In this paper, we propose a neural network, called ToothInpaintor, that takes as input a partial 3D dental model and a 2D panoramic image.
We successfully project an input to the learned latent space via neural optimization to obtain the full tooth model conditioned on the input.
arXiv Detail & Related papers (2022-11-25T18:15:22Z)
- An Implicit Parametric Morphable Dental Model [79.29420177904022]
We present the first parametric 3D morphable dental model for both teeth and gum.
It is based on a component-wise representation for each tooth and the gum, together with a learnable latent code for each such component.
Our reconstruction quality is on par with the most advanced global implicit representations while enabling novel applications.
arXiv Detail & Related papers (2022-11-21T12:23:54Z)
- A fully automated method for 3D individual tooth identification and segmentation in dental CBCT [1.567576360103422]
This paper proposes a fully automated method of identifying and segmenting 3D individual teeth from dental CBCT images.
The proposed method addresses the aforementioned difficulty by developing a deep learning-based hierarchical multi-step model.
Experimental results showed that the proposed method achieved an F1-score of 93.35% for tooth identification and a Dice similarity coefficient of 94.79% for individual 3D tooth segmentation.
arXiv Detail & Related papers (2021-02-11T15:07:23Z)
- OSTeC: One-Shot Texture Completion [86.23018402732748]
We propose an unsupervised approach for one-shot 3D facial texture completion.
The proposed approach rotates an input image in 3D and fills in the unseen regions by reconstructing the rotated image in a 2D face generator.
We frontalize the target image by projecting the completed texture into the generator.
arXiv Detail & Related papers (2020-12-30T23:53:26Z)
- Oral-3D: Reconstructing the 3D Bone Structure of Oral Cavity from 2D Panoramic X-ray [17.34835093235681]
We propose a framework, named Oral-3D, to reconstruct the 3D oral cavity from a single panoramic X-ray (PX) image and prior information about the dental arch.
We show that Oral-3D can efficiently and effectively reconstruct the 3D oral structure and show critical information in clinical applications.
arXiv Detail & Related papers (2020-03-18T18:02:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.