3D Teeth Reconstruction from Panoramic Radiographs using Neural Implicit Functions
- URL: http://arxiv.org/abs/2311.16524v1
- Date: Tue, 28 Nov 2023 05:06:22 GMT
- Title: 3D Teeth Reconstruction from Panoramic Radiographs using Neural Implicit Functions
- Authors: Sihwa Park, Seongjun Kim, In-Seok Song, Seung Jun Baek
- Abstract summary: Occudent is a framework for 3D teeth reconstruction from panoramic radiographs using neural implicit functions.
It is trained and validated with actual panoramic radiographs as input, distinct from recent works which used synthesized images.
- Score: 6.169259577480194
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Panoramic radiography is a widely used imaging modality in dental practice
and research. However, it only provides flattened 2D images, which limits the
detailed assessment of dental structures. In this paper, we propose Occudent, a
framework for 3D teeth reconstruction from panoramic radiographs using neural
implicit functions, which, to the best of our knowledge, is the first work to
do so. For a given point in 3D space, the implicit function estimates whether
the point is occupied by a tooth, and thus implicitly determines the boundaries
of 3D tooth shapes. Firstly, Occudent applies multi-label segmentation to the
input panoramic radiograph. Next, tooth shape embeddings as well as tooth class
embeddings are generated from the segmentation outputs, which are fed to the
reconstruction network. A novel module called Conditional eXcitation (CX) is
proposed in order to effectively incorporate the combined shape and class
embeddings into the implicit function. The performance of Occudent is evaluated
using both quantitative and qualitative measures. Importantly, Occudent is
trained and validated with actual panoramic radiographs as input, distinct from
recent works which used synthesized images. Experiments demonstrate the
superiority of Occudent over state-of-the-art methods.
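As a rough illustration of how such a conditioned occupancy function can be realized, the sketch below queries 3D points through a small MLP whose hidden features are scaled by an excitation vector computed from concatenated tooth-shape and tooth-class embeddings. The layer sizes, embedding dimensions, and the exact excitation formula are illustrative assumptions for this sketch, not the published Occudent architecture.
```python
# Minimal sketch of an occupancy-style implicit decoder with a conditional
# excitation step. Sizes, the sigmoid-based scaling, and the shape/class
# embedding split are assumptions, not the published Occudent design.
import torch
import torch.nn as nn


class ConditionalExcitation(nn.Module):
    """Scales hidden features channel-wise from a conditioning embedding."""

    def __init__(self, cond_dim: int, hidden_dim: int):
        super().__init__()
        self.to_scale = nn.Linear(cond_dim, hidden_dim)

    def forward(self, h: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # cond: (B, cond_dim) -> per-channel excitation in (0, 2)
        scale = 2.0 * torch.sigmoid(self.to_scale(cond))   # (B, hidden_dim)
        return h * scale.unsqueeze(1)                       # broadcast over points


class OccupancyDecoder(nn.Module):
    """Maps 3D query points plus a conditioning embedding to occupancy logits."""

    def __init__(self, cond_dim: int = 128, hidden_dim: int = 256):
        super().__init__()
        self.fc_in = nn.Linear(3, hidden_dim)
        self.cx1 = ConditionalExcitation(cond_dim, hidden_dim)
        self.fc_mid = nn.Linear(hidden_dim, hidden_dim)
        self.cx2 = ConditionalExcitation(cond_dim, hidden_dim)
        self.fc_out = nn.Linear(hidden_dim, 1)
        self.act = nn.ReLU()

    def forward(self, points: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # points: (B, N, 3) query coordinates; cond: (B, cond_dim)
        h = self.act(self.cx1(self.fc_in(points), cond))
        h = self.act(self.cx2(self.fc_mid(h), cond))
        return self.fc_out(h).squeeze(-1)                    # (B, N) occupancy logits


if __name__ == "__main__":
    B, N = 2, 1024
    # Hypothetical conditioning: concatenated tooth-shape and tooth-class embeddings.
    shape_emb, class_emb = torch.randn(B, 96), torch.randn(B, 32)
    cond = torch.cat([shape_emb, class_emb], dim=-1)         # (B, 128)
    points = torch.rand(B, N, 3) * 2.0 - 1.0                 # queries in [-1, 1]^3
    logits = OccupancyDecoder(cond_dim=128)(points, cond)
    occupied = torch.sigmoid(logits) > 0.5                   # inside/outside decision
    print(logits.shape, occupied.float().mean().item())
```
Thresholding the predicted occupancies on a dense grid of query points and running marching cubes would then recover an explicit tooth surface from the implicit field.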
Related papers
- PX2Tooth: Reconstructing the 3D Point Cloud Teeth from a Single Panoramic X-ray [20.913080797758816]
We propose PX2Tooth, a novel two-stage framework that reconstructs 3D teeth from a single PX image.
First, we design the PXSegNet to segment the permanent teeth from the PX images, providing clear positional, morphological, and categorical information for each tooth.
Subsequently, we design a novel tooth generation network (TGNet) that learns to transform random point clouds into 3D teeth.
arXiv Detail & Related papers (2024-11-06T07:44:04Z)
- Intraoperative Registration by Cross-Modal Inverse Neural Rendering [61.687068931599846]
We present a novel approach for 3D/2D intraoperative registration during neurosurgery via cross-modal inverse neural rendering.
Our approach separates implicit neural representation into two components, handling anatomical structure preoperatively and appearance intraoperatively.
We tested our method retrospectively on patient data from clinical cases, showing that it outperforms the state of the art while meeting current clinical standards for registration.
arXiv Detail & Related papers (2024-09-18T13:40:59Z)
- TSegFormer: 3D Tooth Segmentation in Intraoral Scans with Geometry Guided Transformer [47.18526074157094]
Optical Intraoral Scanners (IOSs) are widely used in digital dentistry to provide detailed 3D information of dental crowns and the gingiva.
Previous methods are error-prone at complicated boundaries and exhibit unsatisfactory results across patients.
We propose TSegFormer which captures both local and global dependencies among different teeth and the gingiva in the IOS point clouds with a multi-task 3D transformer architecture.
arXiv Detail & Related papers (2023-11-22T08:45:01Z)
- ToothInpaintor: Tooth Inpainting from Partial 3D Dental Model and 2D Panoramic Image [35.72913439096702]
In orthodontic treatment, a full tooth model consisting of both the crown and root is indispensable.
In this paper, we propose a neural network, called ToothInpaintor, that takes as input a partial 3D dental model and a 2D panoramic image.
We successfully project an input to the learned latent space via neural optimization to obtain the full tooth model conditioned on the input.
arXiv Detail & Related papers (2022-11-25T18:15:22Z)
- An Implicit Parametric Morphable Dental Model [79.29420177904022]
We present the first parametric 3D morphable dental model for both teeth and gum.
It is based on a component-wise representation for each tooth and the gum, together with a learnable latent code for each of such components.
Our reconstruction quality is on par with the most advanced global implicit representations while enabling novel applications.
arXiv Detail & Related papers (2022-11-21T12:23:54Z)
- TFormer: 3D Tooth Segmentation in Mesh Scans with Geometry Guided Transformer [37.47317212620463]
Optical Intra-oral Scanners (IOS) are widely used in digital dentistry, providing 3-Dimensional (3D) and high-resolution geometrical information of dental crowns and the gingiva.
Previous methods are error-prone in complicated tooth-tooth or tooth-gingiva boundaries, and usually exhibit unsatisfactory results across various patients.
We propose a novel method based on 3D transformer architectures that is evaluated with large-scale and high-resolution 3D IOS datasets.
arXiv Detail & Related papers (2022-10-29T15:20:54Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- TSGCNet: Discriminative Geometric Feature Learning with Two-Stream Graph Convolutional Network for 3D Dental Model Segmentation [141.2690520327948]
We propose a two-stream graph convolutional network (TSGCNet) to learn multi-view information from different geometric attributes.
We evaluate our proposed TSGCNet on a real-patient dataset of dental models acquired by 3D intraoral scanners.
arXiv Detail & Related papers (2020-12-26T08:02:56Z)
- Weakly-supervised Learning For Catheter Segmentation in 3D Frustum Ultrasound [74.22397862400177]
We propose a novel frustum ultrasound-based catheter segmentation method.
The proposed method achieved state-of-the-art performance with an efficiency of 0.25 seconds per volume.
arXiv Detail & Related papers (2020-10-19T13:56:22Z)
- Oral-3D: Reconstructing the 3D Bone Structure of Oral Cavity from 2D Panoramic X-ray [17.34835093235681]
We propose a framework, named Oral-3D, to reconstruct the 3D oral cavity from a single PX image and prior information of the dental arch.
We show that Oral-3D can efficiently and effectively reconstruct the 3D oral structure and reveal information critical to clinical applications.
arXiv Detail & Related papers (2020-03-18T18:02:57Z)
- Pose-Aware Instance Segmentation Framework from Cone Beam CT Images for Tooth Segmentation [9.880428545498662]
Individual tooth segmentation from cone beam computed tomography (CBCT) images is essential for an anatomical understanding of orthodontic structures.
The presence of severe metal artifacts in CBCT images hinders the accurate segmentation of each individual tooth.
We propose a neural network for pixel-wise labeling to exploit an instance segmentation framework that is robust to metal artifacts.
arXiv Detail & Related papers (2020-02-06T07:57:34Z)