Cephalometric Landmark Regression with Convolutional Neural Networks on 3D Computed Tomography Data
- URL: http://arxiv.org/abs/2007.10052v1
- Date: Mon, 20 Jul 2020 12:45:38 GMT
- Title: Cephalometric Landmark Regression with Convolutional Neural Networks on 3D Computed Tomography Data
- Authors: Dmitry Lachinov, Alexandra Getmanskaya and Vadim Turlapov
- Abstract summary: Cephalometric analysis performed on lateral radiographs doesn't fully exploit the structure of 3D objects due to projection onto the lateral plane.
We present a series of experiments with state-of-the-art 3D convolutional neural network (CNN) based methods for keypoint regression.
For the first time, we extensively evaluate the described methods and demonstrate their effectiveness in the estimation of the Frankfort Horizontal and cephalometric point locations.
- Score: 68.8204255655161
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we address the problem of automatic three-dimensional
cephalometric analysis. Cephalometric analysis performed on lateral radiographs
doesn't fully exploit the structure of 3D objects due to projection onto the
lateral plane. With the development of three-dimensional imaging techniques
such as CT, several analysis methods have been proposed that extend to the 3D
case. The analysis based on these methods is invariant to rotations and
translations and can describe complex skull deformations for which 2D
cephalometry is of no use. In this paper, we provide a broad overview of existing
approaches for cephalometric landmark regression. Moreover, we perform a series
of experiments with state-of-the-art 3D convolutional neural network (CNN)
based methods for keypoint regression: direct regression with a CNN, heatmap
regression, and Softargmax regression. For the first time, we extensively
evaluate the described methods and demonstrate their effectiveness in the
estimation of the Frankfort Horizontal and cephalometric point locations for
patients with severe skull deformations. We demonstrate that the heatmap and
Softargmax regression models achieve a regression error low enough for medical
applications (less than 4 mm). Moreover, the Softargmax model achieves a 1.15°
inclination error for the Frankfort Horizontal. For a fair comparison with
prior art, we also report results projected onto the lateral plane.
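As a concrete illustration of the Softargmax regression approach compared in the abstract, the sketch below shows a minimal 3D soft-argmax layer in PyTorch that turns per-landmark volumetric heatmaps into coordinates by taking a probability-weighted average over voxel positions. The module name, tensor shapes, and temperature parameter are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of 3D soft-argmax keypoint regression (assumed PyTorch
# implementation; names, shapes, and the temperature beta are illustrative).
import torch
import torch.nn as nn


class SoftArgmax3D(nn.Module):
    """Converts per-landmark 3D heatmaps into (z, y, x) voxel coordinates."""

    def __init__(self, beta: float = 1.0):
        super().__init__()
        self.beta = beta  # softmax temperature; larger values sharpen the peak

    def forward(self, heatmaps: torch.Tensor) -> torch.Tensor:
        # heatmaps: (batch, num_landmarks, D, H, W) raw network outputs
        b, k, d, h, w = heatmaps.shape
        probs = torch.softmax(self.beta * heatmaps.view(b, k, -1), dim=-1)
        probs = probs.view(b, k, d, h, w)

        # Coordinate grids along each axis, in voxel units.
        zs = torch.arange(d, dtype=probs.dtype, device=probs.device)
        ys = torch.arange(h, dtype=probs.dtype, device=probs.device)
        xs = torch.arange(w, dtype=probs.dtype, device=probs.device)

        # Expected coordinate = sum over the volume of probability * position.
        z = (probs.sum(dim=(3, 4)) * zs).sum(dim=-1)
        y = (probs.sum(dim=(2, 4)) * ys).sum(dim=-1)
        x = (probs.sum(dim=(2, 3)) * xs).sum(dim=-1)
        return torch.stack([z, y, x], dim=-1)  # (batch, num_landmarks, 3)


if __name__ == "__main__":
    layer = SoftArgmax3D(beta=10.0)
    fake_heatmaps = torch.randn(2, 5, 32, 32, 32)  # 5 landmarks on a 32^3 grid
    coords = layer(fake_heatmaps)
    print(coords.shape)  # torch.Size([2, 5, 3])
```

Unlike a hard argmax over the heatmap, this expectation is differentiable, so a coordinate loss (e.g. L1 or L2 against annotated landmark positions) can be backpropagated through it; heatmap regression instead supervises the volumes directly, and direct regression predicts coordinates from a fully connected head.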
Related papers
- Intraoperative 2D/3D Image Registration via Differentiable X-ray Rendering [5.617649111108429]
We present DiffPose, a self-supervised approach that leverages patient-specific simulation and differentiable physics-based rendering to achieve accurate 2D/3D registration without relying on manually labeled data.
DiffPose achieves sub-millimeter accuracy across surgical datasets at intraoperative speeds, improving upon existing unsupervised methods by an order of magnitude and even outperforming supervised baselines.
arXiv Detail & Related papers (2023-12-11T13:05:54Z)
- Geometry-Aware Attenuation Learning for Sparse-View CBCT Reconstruction [53.93674177236367]
Cone Beam Computed Tomography (CBCT) plays a vital role in clinical imaging.
Traditional methods typically require hundreds of 2D X-ray projections to reconstruct a high-quality 3D CBCT image.
This has led to a growing interest in sparse-view CBCT reconstruction to reduce radiation doses.
We introduce a novel geometry-aware encoder-decoder framework to solve this problem.
arXiv Detail & Related papers (2023-03-26T14:38:42Z)
- φ-SfT: Shape-from-Template with a Physics-Based Deformation Model [69.27632025495512]
Shape-from-Template (SfT) methods estimate 3D surface deformations from a single monocular RGB camera.
This paper proposes a new SfT approach explaining 2D observations through physical simulations.
arXiv Detail & Related papers (2022-03-22T17:59:57Z)
- Mesh convolutional neural networks for wall shear stress estimation in 3D artery models [7.7393800633675465]
We propose to use mesh convolutional neural networks that directly operate on the same finite-element surface mesh as used in CFD.
We show that our flexible deep learning model can accurately predict 3D wall shear stress vectors on this surface mesh.
arXiv Detail & Related papers (2021-09-10T11:32:05Z)
- Automated 3D cephalometric landmark identification using computerized tomography [1.4349468613117398]
Identification of 3D cephalometric landmarks that serve as a proxy for the shape of the human skull is a fundamental step in cephalometric analysis.
Recently, automatic landmarking of 2D cephalograms using deep learning (DL) has achieved great success, but 3D landmarking for more than 80 landmarks has not yet reached a satisfactory level.
This paper presents a semi-supervised DL method for 3D landmarking that takes advantage of an anonymized landmark dataset from which the paired CT data has been removed.
arXiv Detail & Related papers (2020-12-16T07:29:32Z)
- Revisiting 3D Context Modeling with Supervised Pre-training for Universal Lesion Detection in CT Slices [48.85784310158493]
We propose a Modified Pseudo-3D Feature Pyramid Network (MP3D FPN) to efficiently extract 3D context enhanced 2D features for universal lesion detection in CT slices.
With the novel pre-training method, the proposed MP3D FPN achieves state-of-the-art detection performance on the DeepLesion dataset.
The proposed 3D pre-trained weights can potentially be used to boost the performance of other 3D medical image analysis tasks.
arXiv Detail & Related papers (2020-12-16T07:11:16Z)
- Probabilistic 3D surface reconstruction from sparse MRI information [58.14653650521129]
We present a novel probabilistic deep learning approach for concurrent 3D surface reconstruction from sparse 2D MR image data and aleatoric uncertainty prediction.
Our method is capable of reconstructing large surface meshes from three quasi-orthogonal MR imaging slices from limited training sets.
arXiv Detail & Related papers (2020-10-05T14:18:52Z)
- Neural Descent for Visual 3D Human Pose and Shape [67.01050349629053]
We present a deep neural network methodology to reconstruct the 3D pose and shape of people, given an input RGB image.
We rely on a recently introduced, expressive full-body statistical 3D human model, GHUM, trained end-to-end.
Central to our methodology is a learning-to-learn-and-optimize approach, referred to as HUman Neural Descent (HUND), which avoids second-order differentiation.
arXiv Detail & Related papers (2020-08-16T13:38:41Z)
- Height estimation from single aerial images using a deep ordinal regression network [12.991266182762597]
We deal with the ambiguous and unsolved problem of height estimation from a single aerial image.
Driven by the success of deep learning, especially deep convolutional neural networks (CNNs), several studies have proposed to estimate height information from a single aerial image.
In this paper, we propose to divide height values into spacing-increasing intervals and transform the regression problem into an ordinal regression problem.
arXiv Detail & Related papers (2020-06-04T12:03:51Z)
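To make the spacing-increasing interval idea from the last entry concrete, here is a minimal sketch of one common way to build such intervals: bin edges spaced uniformly in log space, so bin widths grow with height. The height range, bin count, and function names are assumptions for illustration, not details taken from that paper.

```python
# Minimal sketch of spacing-increasing discretization for ordinal regression;
# the height range and number of bins are illustrative assumptions.
import numpy as np


def sid_edges(h_min: float, h_max: float, num_bins: int) -> np.ndarray:
    """Bin edges uniform in log space, so bin widths increase with height."""
    # Shift by 1 so that h_min = 0 is valid under the logarithm.
    return np.exp(np.linspace(np.log(h_min + 1.0),
                              np.log(h_max + 1.0),
                              num_bins + 1)) - 1.0


def height_to_ordinal(heights: np.ndarray, edges: np.ndarray) -> np.ndarray:
    """Map continuous heights to the index of their bin (0 .. num_bins - 1)."""
    return np.clip(np.searchsorted(edges, heights, side="right") - 1,
                   0, len(edges) - 2)


if __name__ == "__main__":
    edges = sid_edges(h_min=0.0, h_max=100.0, num_bins=10)
    heights = np.array([0.5, 5.0, 30.0, 95.0])
    print(height_to_ordinal(heights, edges))  # small heights land in narrow bins
```

A typical ordinal-regression head then predicts, for each pixel, whether its height exceeds each threshold, and the per-threshold predictions are decoded back to a continuous height; the sketch above only covers the discretization step.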
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.