BFSM: 3D Bidirectional Face-Skull Morphable Model
- URL: http://arxiv.org/abs/2509.24577v1
- Date: Mon, 29 Sep 2025 10:34:13 GMT
- Title: BFSM: 3D Bidirectional Face-Skull Morphable Model
- Authors: Zidu Wang, Meng Xu, Miao Xu, Hengyuan Ma, Jiankuo Zhao, Xutao Li, Xiangyu Zhu, Zhen Lei
- Abstract summary: Building a joint face-skull morphable model holds great potential for applications such as remote diagnostics, surgical planning, medical education, and physically based facial simulation. However, realizing this vision is constrained by the scarcity of paired face-skull data, insufficient registration accuracy, and limited exploration of reconstruction and clinical applications. We introduce the 3D Bidirectional Face-Skull Morphable Model (BFSM), which enables shape inference between the face and skull through a shared space.
- Score: 30.3163131796241
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Building a joint face-skull morphable model holds great potential for applications such as remote diagnostics, surgical planning, medical education, and physically based facial simulation. However, realizing this vision is constrained by the scarcity of paired face-skull data, insufficient registration accuracy, and limited exploration of reconstruction and clinical applications. Moreover, individuals with craniofacial deformities are often overlooked, resulting in underrepresentation and limited inclusivity. To address these challenges, we first construct a dataset comprising over 200 samples, including both normal cases and rare craniofacial conditions. Each case contains a CT-based skull, a CT-based face, and a high-fidelity textured face scan. Second, we propose a novel dense ray matching registration method that ensures topological consistency across the face, the skull, and their tissue correspondences. Based on this, we introduce the 3D Bidirectional Face-Skull Morphable Model (BFSM), which enables shape inference between the face and skull through a shared coefficient space, while also modeling tissue thickness variation to support one-to-many facial reconstructions from the same skull, reflecting individual changes such as fat over time. Finally, we demonstrate the potential of BFSM in medical applications, including 3D face-skull reconstruction from a single image and surgical planning prediction. Extensive experiments confirm the robustness and accuracy of our method. BFSM is available at https://github.com/wang-zidu/BFSM
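The shared-coefficient idea in the abstract can be illustrated with a toy pair of linear morphable models: face and skull shapes are decoded from the same coefficient vector, so recovering the coefficients from one modality lets you predict the other, and a separate tissue-thickness offset yields one-to-many faces for a single skull. This is a minimal sketch under assumed linear-PCA form; all names, dimensions, and bases here are illustrative stand-ins, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (real models use thousands of vertex coordinates).
n_face, n_skull, k = 30, 24, 5  # flattened face/skull coords, shared coeff dim

# Hypothetical linear bases and mean shapes, standing in for PCA bases
# that would be learned from topologically registered face-skull pairs.
B_face = rng.standard_normal((n_face, k))
B_skull = rng.standard_normal((n_skull, k))
mu_face = rng.standard_normal(n_face)
mu_skull = rng.standard_normal(n_skull)

def face_from_coeffs(c, thickness=None):
    """Decode a face shape from shared coefficients c.
    An optional tissue-thickness offset makes the mapping one-to-many:
    the same skull can yield different faces (e.g. fat changes over time)."""
    face = mu_face + B_face @ c
    return face if thickness is None else face + thickness

def skull_from_coeffs(c):
    """Decode a skull shape from the same shared coefficients."""
    return mu_skull + B_skull @ c

def coeffs_from_skull(skull):
    """Invert the skull model by least squares to recover the shared coefficients."""
    c, *_ = np.linalg.lstsq(B_skull, skull - mu_skull, rcond=None)
    return c

# Bidirectional inference: skull -> shared coefficients -> face.
c_true = rng.standard_normal(k)
skull = skull_from_coeffs(c_true)
c_est = coeffs_from_skull(skull)
face_pred = face_from_coeffs(c_est)

print(np.allclose(c_est, c_true))  # True: the noise-free toy skull is recovered exactly
```

The face-to-skull direction works symmetrically by solving the least-squares problem against `B_face` instead; in a real model the shared space would be fit jointly rather than inverted per modality.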
Related papers
- FCR: Investigating Generative AI models for Forensic Craniofacial Reconstruction [2.9936254916060503]
We propose a generic framework for craniofacial reconstruction from 2D X-ray images. This is the first work to use 2D X-rays as a skull representation for generative craniofacial reconstruction. Experimental results show that the framework can serve as an effective tool for forensic science.
arXiv Detail & Related papers (2025-08-25T13:52:59Z) - OSDFace: One-Step Diffusion Model for Face Restoration [72.5045389847792]
Diffusion models have demonstrated impressive performance in face restoration. We propose OSDFace, a novel one-step diffusion model for face restoration. Results demonstrate that OSDFace surpasses current state-of-the-art (SOTA) methods in both visual quality and quantitative metrics.
arXiv Detail & Related papers (2024-11-26T07:07:48Z) - Skull-to-Face: Anatomy-Guided 3D Facial Reconstruction and Editing [34.39385635485985]
Deducing the 3D face from a skull is a challenging task in forensic science and archaeology. This paper proposes an end-to-end 3D face reconstruction pipeline and an exploration method. Experiments conducted on a real skull-face dataset demonstrated the effectiveness of our proposed pipeline.
arXiv Detail & Related papers (2024-03-24T16:03:27Z) - FitDiff: Robust monocular 3D facial shape and reflectance estimation using Diffusion Models [79.65289816077629]
We present FitDiff, a diffusion-based 3D facial avatar generative model. Our model accurately generates relightable facial avatars, utilizing an identity embedding extracted from an "in-the-wild" 2D facial image. As the first 3D LDM conditioned on face recognition embeddings, FitDiff reconstructs relightable human avatars that can be used as-is in common rendering engines.
arXiv Detail & Related papers (2023-12-07T17:35:49Z) - On the Localization of Ultrasound Image Slices within Point Distribution
Models [84.27083443424408]
Thyroid disorders are most commonly diagnosed using high-resolution Ultrasound (US).
Longitudinal tracking is a pivotal diagnostic protocol for monitoring changes in pathological thyroid morphology.
We present a framework for automated US image slice localization within a 3D shape representation.
arXiv Detail & Related papers (2023-09-01T10:10:46Z) - Generating 2D and 3D Master Faces for Dictionary Attacks with a
Network-Assisted Latent Space Evolution [68.8204255655161]
A master face is a face image that passes face-based identity authentication for a high percentage of the population.
We optimize these faces for 2D and 3D face verification models.
In 3D, we generate faces using the 2D StyleGAN2 generator and predict a 3D structure using a deep 3D face reconstruction network.
arXiv Detail & Related papers (2022-11-25T09:15:38Z) - SCULPTOR: Skeleton-Consistent Face Creation Using a Learned Parametric
Generator [42.25745590793068]
We present SCULPTOR, a learned parametric facial generator for skeleton-consistent 3D face creation.
At the core of SCULPTOR is LUCY, the first large-scale shape-skeleton face dataset in collaboration with plastic surgeons.
arXiv Detail & Related papers (2022-09-14T05:21:20Z) - Segmentation-Reconstruction-Guided Facial Image De-occlusion [48.952656891182826]
Occlusions are very common in face images in the wild, leading to the degraded performance of face-related tasks.
This paper proposes a novel face de-occlusion model based on face segmentation and 3D face reconstruction.
arXiv Detail & Related papers (2021-12-15T10:40:08Z) - Sphere Face Model:A 3D Morphable Model with Hypersphere Manifold Latent
Space [14.597212159819403]
We propose a novel 3DMM for monocular face reconstruction, which can preserve both shape fidelity and identity consistency.
The core of our SFM is the basis matrix which can be used to reconstruct 3D face shapes.
It produces high-fidelity face shapes that remain consistent under challenging conditions in monocular face reconstruction.
arXiv Detail & Related papers (2021-12-04T04:28:53Z) - FaceTuneGAN: Face Autoencoder for Convolutional Expression Transfer
Using Neural Generative Adversarial Networks [0.7043489166804575]
We present FaceTuneGAN, a new 3D face model representation decomposing and encoding separately facial identity and facial expression.
We propose a first adaptation of image-to-image translation networks, which have been used successfully in the 2D domain, to 3D face geometry.
arXiv Detail & Related papers (2021-12-01T14:42:03Z) - Learning to Aggregate and Personalize 3D Face from In-the-Wild Photo
Collection [65.92058628082322]
Non-parametric face modeling aims to reconstruct 3D face only from images without shape assumptions.
This paper presents a novel Learning to Aggregate and Personalize framework for unsupervised robust 3D face modeling.
arXiv Detail & Related papers (2021-06-15T03:10:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.