Deep Learning-based Facial Appearance Simulation Driven by Surgically
Planned Craniomaxillofacial Bony Movement
- URL: http://arxiv.org/abs/2210.01685v1
- Date: Tue, 4 Oct 2022 15:33:01 GMT
- Title: Deep Learning-based Facial Appearance Simulation Driven by Surgically
Planned Craniomaxillofacial Bony Movement
- Authors: Xi Fang, Daeseung Kim, Xuanang Xu, Tianshu Kuang, Hannah H. Deng,
Joshua C. Barber, Nathan Lampen, Jaime Gateno, Michael A.K. Liebschner, James
J. Xia, Pingkun Yan
- Abstract summary: We propose an Attentive Correspondence assisted Movement Transformation network (ACMT-Net) to estimate the facial appearance by transforming planned bony movement to facial soft tissue.
We show that our proposed method achieves facial change prediction accuracy comparable to the state-of-the-art FEM-based approach.
- Score: 13.663130604042278
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Simulating facial appearance change following bony movement is a critical
step in orthognathic surgical planning for patients with jaw deformities.
Conventional biomechanics-based methods such as the finite-element method (FEM)
are labor intensive and computationally inefficient. Deep learning-based
approaches can be promising alternatives due to their high computational
efficiency and strong modeling capability. However, the existing deep
learning-based method ignores the physical correspondence between facial soft
tissue and bony segments and is thus significantly less accurate than FEM. In
this work, we propose an Attentive Correspondence assisted Movement
Transformation network (ACMT-Net) to estimate the facial appearance by
transforming the bony movement to facial soft tissue through a point-to-point
attentive correspondence matrix. Experimental results on patients with jaw
deformities show that our proposed method achieves facial change prediction
accuracy comparable to that of the state-of-the-art FEM-based approach, with
significantly improved computational efficiency.
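To make the abstract's central mechanism concrete, the sketch below shows how a point-to-point attentive correspondence matrix can transfer planned bony displacements onto facial soft-tissue points. It is a minimal PyTorch illustration under assumed encoders, feature dimensions, and tensor shapes, not the authors' ACMT-Net implementation.

```python
import torch
import torch.nn as nn

class AttentiveMovementTransfer(nn.Module):
    """Transfer per-point bony movements to facial soft-tissue points via a
    learned point-to-point correspondence (attention) matrix.
    Illustrative sketch only, not the ACMT-Net reference code."""

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        # Simple per-point feature encoders (assumed; the paper uses learned
        # point-cloud feature extractors).
        self.face_encoder = nn.Sequential(nn.Linear(3, feat_dim), nn.ReLU(),
                                          nn.Linear(feat_dim, feat_dim))
        self.bone_encoder = nn.Sequential(nn.Linear(3, feat_dim), nn.ReLU(),
                                          nn.Linear(feat_dim, feat_dim))

    def forward(self, face_pts, bone_pts, bone_disp):
        # face_pts: (B, Nf, 3), bone_pts: (B, Nb, 3), bone_disp: (B, Nb, 3)
        f = self.face_encoder(face_pts)            # (B, Nf, C)
        b = self.bone_encoder(bone_pts)            # (B, Nb, C)
        # Point-to-point correspondence via scaled dot-product attention.
        attn = torch.softmax(f @ b.transpose(1, 2) / f.shape[-1] ** 0.5, dim=-1)
        # Each facial point's movement is an attention-weighted combination
        # of the planned bony movements.
        face_disp = attn @ bone_disp               # (B, Nf, 3)
        return face_pts + face_disp                # simulated post-operative face

# Toy usage with random point clouds.
model = AttentiveMovementTransfer()
face = torch.randn(1, 4096, 3)
bone = torch.randn(1, 2048, 3)
disp = torch.randn(1, 2048, 3) * 0.01
print(model(face, bone, disp).shape)  # torch.Size([1, 4096, 3])
```

The attention weights act as a soft physical correspondence: each facial point inherits a weighted mixture of the movements of the bony points it most closely corresponds to in feature space.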
Related papers
- CFCPalsy: Facial Image Synthesis with Cross-Fusion Cycle Diffusion Model for Facial Paralysis Individuals [3.2688425993442696]
This study aims to synthesize a high-quality facial paralysis dataset to address the scarcity of such data.
A novel Cross-Fusion Cycle Palsy Expression Generative Model (PalsyCFC), based on a diffusion model, is proposed.
We have qualitatively and quantitatively evaluated the proposed method on the commonly used public clinical datasets of facial paralysis.
arXiv Detail & Related papers (2024-09-11T13:46:35Z)
- Enhanced Knee Kinematics: Leveraging Deep Learning and Morphing Algorithms for 3D Implant Modeling [2.752817022620644]
This study proposes a novel approach using machine learning algorithms and morphing techniques for precise 3D reconstruction of implanted knee models.
A convolutional neural network is trained to automatically segment the femur contour of the implanted components.
A morphing algorithm generates a personalized 3D model of the implanted knee joint.
arXiv Detail & Related papers (2024-08-02T20:11:04Z)
- MS-MANO: Enabling Hand Pose Tracking with Biomechanical Constraints [50.61346764110482]
We integrate a musculoskeletal system with a learnable parametric hand model, MANO, to create MS-MANO.
This model emulates the dynamics of muscles and tendons to drive the skeletal system, imposing physiologically realistic constraints on the resulting torque trajectories.
We also propose a simulation-in-the-loop pose refinement framework, BioPR, that refines the initial estimated pose through a multi-layer perceptron network.
arXiv Detail & Related papers (2024-04-16T02:18:18Z)
- Soft-tissue Driven Craniomaxillofacial Surgical Planning [13.663130604042278]
In CMF surgery, the planning of bony movement to achieve a desired facial outcome is a challenging task.
We propose a soft-tissue driven framework that can automatically create and verify surgical plans.
Our framework consists of a bony planner network that estimates the bony movements required to achieve the desired facial outcome, and a facial simulator network that simulates the possible facial changes resulting from the estimated bony movement plans.
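As a rough, hypothetical sketch of how such a planner/simulator pair could be wired together to create and then verify a plan, the PyTorch code below defines placeholder networks and a verification step; the interfaces, architectures, and tolerance-based check are assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn

class BonyPlanner(nn.Module):
    """Estimate per-segment bony movements from the current and desired faces (assumed interface)."""
    def __init__(self, n_segments: int = 4):
        super().__init__()
        self.n_segments = n_segments
        self.net = nn.Sequential(nn.Linear(6, 128), nn.ReLU(),
                                 nn.Linear(128, n_segments * 6))

    def forward(self, face_now, face_target):
        # Pool each point cloud to a crude global descriptor (placeholder).
        feat = torch.cat([face_now.mean(dim=1), face_target.mean(dim=1)], dim=-1)
        return self.net(feat).view(-1, self.n_segments, 6)  # translation + rotation per segment

class FacialSimulator(nn.Module):
    """Predict the post-operative face from the pre-operative face and a bony plan (assumed interface)."""
    def __init__(self, n_segments: int = 4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3 + n_segments * 6, 64), nn.ReLU(),
                                 nn.Linear(64, 3))

    def forward(self, face_now, plan):
        plan_feat = plan.flatten(1).unsqueeze(1).expand(-1, face_now.shape[1], -1)
        return face_now + self.net(torch.cat([face_now, plan_feat], dim=-1))

def plan_and_verify(planner, simulator, face_now, face_target, tol=1.0):
    """Create a plan, simulate its outcome, and check it against the desired face."""
    plan = planner(face_now, face_target)
    predicted_face = simulator(face_now, plan)
    error = (predicted_face - face_target).norm(dim=-1).mean()  # mean surface error
    return plan, predicted_face, error.item() < tol

# Toy usage with random point clouds.
planner, simulator = BonyPlanner(), FacialSimulator()
face_pre, face_goal = torch.randn(1, 4096, 3), torch.randn(1, 4096, 3)
plan, face_pred, accepted = plan_and_verify(planner, simulator, face_pre, face_goal)
```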
arXiv Detail & Related papers (2023-07-20T15:26:01Z)
- Real-time landmark detection for precise endoscopic submucosal dissection via shape-aware relation network [51.44506007844284]
We propose a shape-aware relation network for accurate and real-time landmark detection in endoscopic submucosal dissection surgery.
We first devise an algorithm to automatically generate relation keypoint heatmaps, which intuitively represent the prior knowledge of spatial relations among landmarks.
We then develop two complementary regularization schemes to progressively incorporate the prior knowledge into the training process.
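One plausible way to encode such spatial relations as heatmaps, shown here for illustration only, is to place Gaussian peaks at points interpolated along the segment joining each pair of landmarks; the NumPy construction below is an assumption, not the paper's algorithm.

```python
import numpy as np

def relation_keypoint_heatmap(landmarks, size=(256, 256), points_per_pair=5, sigma=4.0):
    """Build a heatmap highlighting where related landmarks lie relative to each other.

    Illustrative construction: for every landmark pair, place Gaussian peaks at
    points interpolated along the segment joining them."""
    h, w = size
    ys, xs = np.mgrid[0:h, 0:w]
    heatmap = np.zeros(size, dtype=np.float32)
    n = len(landmarks)
    for i in range(n):
        for j in range(i + 1, n):
            for t in np.linspace(0.0, 1.0, points_per_pair):
                cx, cy = (1 - t) * np.asarray(landmarks[i]) + t * np.asarray(landmarks[j])
                heatmap = np.maximum(
                    heatmap,
                    np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2)))
    return heatmap

# Example: three landmarks in a 256x256 image.
hm = relation_keypoint_heatmap([(64, 64), (192, 80), (128, 200)])
print(hm.shape, hm.max())  # (256, 256) 1.0
```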
arXiv Detail & Related papers (2021-11-08T07:57:30Z)
- A Self-Supervised Deep Framework for Reference Bony Shape Estimation in Orthognathic Surgical Planning [55.30223654196882]
Virtual orthognathic surgical planning involves simulating surgical corrections of jaw deformities on 3D facial bony shape models.
A reference facial bony shape model representing normal anatomies can provide an objective guidance to improve planning accuracy.
We propose a self-supervised deep framework to automatically estimate reference facial bony shape models.
arXiv Detail & Related papers (2021-09-11T05:24:40Z)
- PhysGNN: A Physics-Driven Graph Neural Network Based Model for Predicting Soft Tissue Deformation in Image-Guided Neurosurgery [0.15229257192293202]
We propose a data-driven model that approximates the solution of finite element analysis (FEA) by leveraging graph neural networks (GNNs).
We demonstrate that the proposed architecture, PhysGNN, promises accurate and fast soft tissue deformation approximations while remaining computationally feasible, making it suitable for neurosurgical settings.
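The general recipe of replacing an FEA solve with a graph neural network can be sketched as message passing over the mesh graph followed by per-node displacement regression. The plain-PyTorch example below is a hedged illustration with assumed inputs (node positions plus applied forces); it is not the PhysGNN architecture.

```python
import torch
import torch.nn as nn

class MeshMessagePassing(nn.Module):
    """One round of mean-aggregation message passing over mesh edges."""
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
        self.upd = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())

    def forward(self, x, edges):
        # x: (N, dim) node features; edges: (E, 2) long tensor of (src, dst) pairs.
        src, dst = edges[:, 0], edges[:, 1]
        m = self.msg(torch.cat([x[src], x[dst]], dim=-1))              # (E, dim)
        agg = torch.zeros_like(x).index_add_(0, dst, m)                # sum messages per node
        deg = torch.zeros(x.shape[0], 1).index_add_(0, dst, torch.ones(len(dst), 1)).clamp(min=1)
        return self.upd(torch.cat([x, agg / deg], dim=-1))

class DeformationGNN(nn.Module):
    """Regress nodal displacements from per-node inputs (e.g., position + applied force)."""
    def __init__(self, in_dim=6, dim=64, layers=3):
        super().__init__()
        self.encode = nn.Linear(in_dim, dim)
        self.blocks = nn.ModuleList([MeshMessagePassing(dim) for _ in range(layers)])
        self.decode = nn.Linear(dim, 3)

    def forward(self, node_inputs, edges):
        x = self.encode(node_inputs)
        for block in self.blocks:
            x = block(x, edges)
        return self.decode(x)  # (N, 3) predicted displacement per mesh node

# Toy usage with random connectivity.
nodes = torch.randn(100, 6)               # [x, y, z, fx, fy, fz] per node (assumed features)
edges = torch.randint(0, 100, (300, 2))
disp = DeformationGNN()(nodes, edges)
```

Such a network would typically be trained to regress FEA-computed displacements, so inference reduces to a single forward pass instead of an iterative solve.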
arXiv Detail & Related papers (2021-09-09T15:43:59Z)
- Data-driven generation of plausible tissue geometries for realistic photoacoustic image synthesis [53.65837038435433]
Photoacoustic tomography (PAT) has the potential to recover morphological and functional tissue properties.
We propose a novel approach to PAT data simulation, which we refer to as "learning to simulate".
We leverage the concept of Generative Adversarial Networks (GANs) trained on semantically annotated medical imaging data to generate plausible tissue geometries.
arXiv Detail & Related papers (2021-03-29T11:30:18Z)
- Unsupervised Learning Facial Parameter Regressor for Action Unit Intensity Estimation via Differentiable Renderer [51.926868759681014]
We present a framework to predict the facial parameters based on a bone-driven face model (BDFM) under different views.
The proposed framework consists of a feature extractor, a generator, and a facial parameter regressor.
arXiv Detail & Related papers (2020-08-20T09:49:13Z)
- Leveraging Vision and Kinematics Data to Improve Realism of Biomechanic Soft-tissue Simulation for Robotic Surgery [13.657060682152409]
We investigate how live data acquired during any robotic endoscopic surgical procedure may be used to correct for inaccurate FEM simulation results.
We use an open-source da Vinci Surgical System to probe a soft-tissue phantom and replay the interaction in simulation.
We train the network to correct for the difference between the predicted mesh position and the measured point cloud.
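A hedged sketch of that correction idea: learn a per-vertex residual that pulls FEM-predicted vertices toward the measured point cloud, supervised by a one-sided Chamfer distance. The network, toy data, and loss below are illustrative assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

def chamfer_to_observation(verts, obs_pts):
    """One-sided Chamfer distance: mean distance from each observed point
    to its nearest corrected mesh vertex."""
    d = torch.cdist(obs_pts, verts)            # (M, N) pairwise distances
    return d.min(dim=1).values.mean()

# A small per-vertex correction network (hypothetical) that nudges
# FEM-predicted vertices toward the measured point cloud.
corrector = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 3))
opt = torch.optim.Adam(corrector.parameters(), lr=1e-3)

fem_verts = torch.randn(500, 3)                            # FEM-predicted vertices (toy data)
observed = fem_verts[:200] + 0.05 * torch.randn(200, 3)    # simulated partial measurement

for step in range(200):
    corrected = fem_verts + corrector(fem_verts)   # learned residual correction
    loss = chamfer_to_observation(corrected, observed)
    opt.zero_grad()
    loss.backward()
    opt.step()
```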
arXiv Detail & Related papers (2020-03-14T00:16:08Z)
- Joint Deep Learning of Facial Expression Synthesis and Recognition [97.19528464266824]
We propose a novel joint deep learning of facial expression synthesis and recognition method for effective FER.
The proposed method involves a two-stage learning procedure. Firstly, a facial expression synthesis generative adversarial network (FESGAN) is pre-trained to generate facial images with different facial expressions.
In order to alleviate the problem of data bias between the real images and the synthetic images, we propose an intra-class loss with a novel real data-guided back-propagation (RDBP) algorithm.
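One common instantiation of an intra-class loss, shown below as a hedged sketch, pulls each synthetic image's feature toward the mean feature of real images of the same expression class; the function and the detached class centers are assumptions for illustration and do not reproduce the paper's RDBP algorithm.

```python
import torch

def intra_class_loss(feat_syn, labels_syn, feat_real, labels_real):
    """Pull each synthetic feature toward the mean real feature of its class.

    Generic intra-class loss for illustration; the paper's RDBP algorithm
    additionally controls how gradients flow from the real data."""
    loss = feat_syn.new_zeros(())
    for c in labels_syn.unique():
        real_c = feat_real[labels_real == c]
        syn_c = feat_syn[labels_syn == c]
        if len(real_c) and len(syn_c):
            center = real_c.mean(dim=0).detach()            # real-data class center
            loss = loss + ((syn_c - center) ** 2).sum(dim=1).mean()
    return loss

# Toy usage: 7 expression classes, 64-D features.
f_syn, y_syn = torch.randn(32, 64), torch.randint(0, 7, (32,))
f_real, y_real = torch.randn(32, 64), torch.randint(0, 7, (32,))
print(intra_class_loss(f_syn, y_syn, f_real, y_real))
```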
arXiv Detail & Related papers (2020-02-06T10:56:00Z)