High-Fidelity 3D Tooth Reconstruction by Fusing Intraoral Scans and CBCT Data via a Deep Implicit Representation
- URL: http://arxiv.org/abs/2601.15358v1
- Date: Wed, 21 Jan 2026 09:40:56 GMT
- Title: High-Fidelity 3D Tooth Reconstruction by Fusing Intraoral Scans and CBCT Data via a Deep Implicit Representation
- Authors: Yi Zhu, Razmig Kechichian, Raphaël Richert, Satoshi Ikehata, Sébastien Valette
- Abstract summary: We propose a novel, fully-automated pipeline that fuses CBCT and IOS data using a deep implicit representation. Our method first segments and robustly registers the tooth instances, then creates a hybrid proxy mesh combining the IOS crown and the CBCT root. This optimization process projects the input onto a learned manifold of ideal tooth shapes, generating a seamless, watertight, and anatomically coherent model.
- Score: 16.263041213151613
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: High-fidelity 3D tooth models are essential for digital dentistry, but must capture both the detailed crown and the complete root. Clinical imaging modalities are limited: Cone-Beam Computed Tomography (CBCT) captures the root but has a noisy, low-resolution crown, while Intraoral Scanners (IOS) provide a high-fidelity crown but no root information. A naive fusion of these sources results in unnatural seams and artifacts. We propose a novel, fully-automated pipeline that fuses CBCT and IOS data using a deep implicit representation. Our method first segments and robustly registers the tooth instances, then creates a hybrid proxy mesh combining the IOS crown and the CBCT root. The core of our approach is to use this noisy proxy to guide a class-specific DeepSDF network. This optimization process projects the input onto a learned manifold of ideal tooth shapes, generating a seamless, watertight, and anatomically coherent model. Qualitative and quantitative evaluations show our method uniquely preserves both the high-fidelity crown from IOS and the patient-specific root morphology from CBCT, overcoming the limitations of each modality and naive stitching.
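The core step described above, projecting a noisy proxy onto a learned shape manifold by optimizing a latent code against a fixed decoder, can be sketched with a toy stand-in for the DeepSDF network. In this sketch the "manifold of ideal shapes" is just the family of spheres parameterized by a one-dimensional latent (the radius), whereas the paper optimizes the latent code of a learned, class-specific MLP; all function names and numbers below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Toy "decoder": latent z -> SDF of a sphere of radius z, evaluated at points.
# Stand-in for a learned class-specific DeepSDF MLP (assumption).
def decoder(z, points):
    return np.linalg.norm(points, axis=1) - z

def project_to_manifold(points, sdf_obs, z0=0.5, lr=0.1, steps=200):
    """Optimize the latent z so decoder(z, points) matches noisy proxy SDF
    samples, i.e. project the noisy input onto the shape manifold."""
    z = z0
    for _ in range(steps):
        resid = decoder(z, points) - sdf_obs
        grad = -np.mean(resid)        # d/dz of 0.5 * mean(resid**2)
        z -= lr * grad                # plain gradient descent on the latent
    return z

rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 3))
noisy_sdf = decoder(1.0, pts) + 0.05 * rng.normal(size=500)  # noisy proxy
z_hat = project_to_manifold(pts, noisy_sdf)
print(z_hat)  # recovers the true radius (~1.0) despite the noise
```

Because the output is constrained to the decoder's manifold, the recovered shape is smooth and watertight by construction, which is the property the paper exploits to avoid seams between the IOS crown and CBCT root.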
Related papers
- Adapting Foundation Model for Dental Caries Detection with Dual-View Co-Training [53.77904429789069]
We present Attention-TNet, a novel Dual-View Co-Training network for accurate dental caries detection. Our Attention-TNet starts by employing automated tooth detection to establish two complementary views: a global view from panoramic X-ray images and a local view from cropped tooth images. To effectively integrate information from both views, we introduce a Gated Cross-View module.
arXiv Detail & Related papers (2025-08-28T14:13:26Z) - Tooth-Diffusion: Guided 3D CBCT Synthesis with Fine-Grained Tooth Conditioning [0.0]
We propose a conditional diffusion framework for 3D dental volume generation guided by tooth-level binary attributes. Our approach integrates wavelet-based denoising diffusion, FiLM conditioning, and masked loss functions to focus learning on relevant anatomical structures. Results show strong fidelity and generalization, with low FID scores, robust inpainting performance, and SSIM values above 0.91 even on unseen scans.
arXiv Detail & Related papers (2025-08-19T21:21:35Z) - GEPAR3D: Geometry Prior-Assisted Learning for 3D Tooth Segmentation [0.15487122608774898]
Tooth segmentation in Cone-Beam Computed Tomography (CBCT) remains challenging. We introduce GEPAR3D, a novel approach that unifies instance detection and multi-class segmentation into a single step to improve root segmentation. We leverage a deep watershed method, modeling each tooth as a continuous 3D energy basin encoding voxel distances to boundaries.
arXiv Detail & Related papers (2025-07-31T20:46:58Z) - Boundary feature fusion network for tooth image segmentation [7.554733074482215]
This paper introduces an innovative tooth segmentation network that integrates boundary information to address the issue of indistinct boundaries between teeth and adjacent tissues.
In the most recent STS Data Challenge, our methodology was rigorously tested and received a commendable overall score of 0.91.
arXiv Detail & Related papers (2024-09-06T02:12:21Z) - TSegFormer: 3D Tooth Segmentation in Intraoral Scans with Geometry Guided Transformer [47.18526074157094]
Optical Intraoral Scanners (IOSs) are widely used in digital dentistry to provide detailed 3D information of dental crowns and the gingiva.
Previous methods are error-prone at complicated boundaries and exhibit unsatisfactory results across patients.
We propose TSegFormer which captures both local and global dependencies among different teeth and the gingiva in the IOS point clouds with a multi-task 3D transformer architecture.
arXiv Detail & Related papers (2023-11-22T08:45:01Z) - Geometry-Aware Attenuation Learning for Sparse-View CBCT Reconstruction [53.93674177236367]
Cone Beam Computed Tomography (CBCT) plays a vital role in clinical imaging.
Traditional methods typically require hundreds of 2D X-ray projections to reconstruct a high-quality 3D CBCT image.
This has led to a growing interest in sparse-view CBCT reconstruction to reduce radiation doses.
We introduce a novel geometry-aware encoder-decoder framework to solve this problem.
arXiv Detail & Related papers (2023-03-26T14:38:42Z) - An Implicit Parametric Morphable Dental Model [79.29420177904022]
We present the first parametric 3D morphable dental model for both teeth and gum.
It is based on a component-wise representation for each tooth and the gum, together with a learnable latent code for each of such components.
Our reconstruction quality is on par with the most advanced global implicit representations while enabling novel applications.
arXiv Detail & Related papers (2022-11-21T12:23:54Z) - TFormer: 3D Tooth Segmentation in Mesh Scans with Geometry Guided Transformer [37.47317212620463]
Optical Intra-oral Scanners (IOS) are widely used in digital dentistry, providing 3-Dimensional (3D) and high-resolution geometrical information of dental crowns and the gingiva.
Previous methods are error-prone in complicated tooth-tooth or tooth-gingiva boundaries, and usually exhibit unsatisfactory results across various patients.
We propose a novel method based on 3D transformer architectures that is evaluated with large-scale and high-resolution 3D IOS datasets.
arXiv Detail & Related papers (2022-10-29T15:20:54Z) - Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z) - AI-enabled Automatic Multimodal Fusion of Cone-Beam CT and Intraoral Scans for Intelligent 3D Tooth-Bone Reconstruction and Clinical Applications [29.065668174732014]
A critical step in virtual dental treatment planning is to accurately delineate all tooth-bone structures from CBCT.
Previous studies have established several methods for CBCT segmentation using deep learning.
Here, we present a Deep Dental Multimodal Analysis framework consisting of a CBCT segmentation model, an intraoral scan (IOS) segmentation model, and a fusion model to generate 3D fused crown-root-bone structures.
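For contrast with such learned fusion models, the "naive stitching" baseline criticized in the main abstract amounts to clipping each modality at a cut plane and taking the union, which is exactly where seams appear. A minimal sketch, assuming the tooth axis is aligned with z and using synthetic point sets in place of real IOS/CBCT meshes (all names here are illustrative, not from any of the cited papers):

```python
import numpy as np

def naive_fuse(ios_pts, cbct_pts, cut_z=0.0):
    """Naive crown-root fusion: keep IOS points above the cut plane (crown)
    and CBCT points below it (root), then stack them with no blending."""
    crown = ios_pts[ios_pts[:, 2] >= cut_z]
    root = cbct_pts[cbct_pts[:, 2] < cut_z]
    return np.vstack([crown, root])

rng = np.random.default_rng(1)
ios = rng.uniform([-1, -1, -0.2], [1, 1, 1.0], size=(100, 3))   # crown scan
cbct = rng.uniform([-1, -1, -1.0], [1, 1, 0.2], size=(100, 3))  # root volume
fused = naive_fuse(ios, cbct)
print(fused.shape)
```

Nothing enforces continuity across the cut plane, so resolution and noise differences between the two sources produce visible steps there; a fusion or implicit-shape model smooths across that boundary instead.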
arXiv Detail & Related papers (2022-03-11T07:50:15Z) - Two-Stage Mesh Deep Learning for Automated Tooth Segmentation and Landmark Localization on 3D Intraoral Scans [56.55092443401416]
iMeshSegNet in the first stage of TS-MDL reached an average Dice similarity coefficient (DSC) of $0.953\pm0.076$, significantly outperforming the original MeshSegNet.
PointNet-Reg achieved a mean absolute error (MAE) of $0.623\pm0.718\,mm$ between predictions and ground truth for $44$ landmarks, superior to other networks for landmark detection.
arXiv Detail & Related papers (2021-09-24T13:00:26Z) - TSGCNet: Discriminative Geometric Feature Learning with Two-Stream Graph Convolutional Network for 3D Dental Model Segmentation [141.2690520327948]
We propose a two-stream graph convolutional network (TSGCNet) to learn multi-view information from different geometric attributes.
We evaluate our proposed TSGCNet on a real-patient dataset of dental models acquired by 3D intraoral scanners.
arXiv Detail & Related papers (2020-12-26T08:02:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.