Cephalogram Synthesis and Landmark Detection in Dental Cone-Beam CT Systems
- URL: http://arxiv.org/abs/2009.04420v2
- Date: Sat, 20 Mar 2021 18:08:38 GMT
- Title: Cephalogram Synthesis and Landmark Detection in Dental Cone-Beam CT Systems
- Authors: Yixing Huang, Fuxin Fan, Christopher Syben, Philipp Roser, Leonid Mill, Andreas Maier
- Abstract summary: We propose a sigmoid-based intensity transform that exploits the nonlinear optical response of X-ray films to increase the image contrast of synthetic cephalograms.
For low-dose imaging, the pixel-to-pixel generative adversarial network (pix2pixGAN) is proposed for 2D cephalogram synthesis directly from two CBCT projections.
For landmark detection in the synthetic cephalograms, an efficient automatic landmark detection method combining LeNet-5 and ResNet50 is proposed.
- Score: 11.242436948609715
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to the lack of a standardized 3D cephalometric analysis methodology, 2D cephalograms synthesized from 3D cone-beam computed tomography (CBCT) volumes are widely used for cephalometric analysis in dental CBCT systems. However, compared with conventional X-ray film-based cephalograms, such synthetic cephalograms lack image contrast and resolution. In addition, the radiation dose delivered during the scan for 3D reconstruction poses potential health risks. In this work, we propose a sigmoid-based intensity transform that exploits the nonlinear optical response of X-ray films to increase the image contrast of synthetic cephalograms. To improve image resolution, super-resolution deep learning techniques are investigated. For low-dose imaging, the pixel-to-pixel generative adversarial network (pix2pixGAN) is proposed for 2D cephalogram synthesis directly from two CBCT projections. For landmark detection in the synthetic cephalograms, an efficient automatic landmark detection method combining LeNet-5 and ResNet50 is proposed. Our experiments demonstrate the efficacy of pix2pixGAN in 2D cephalogram synthesis, achieving an average peak signal-to-noise ratio (PSNR) of 33.8 with reference to the cephalograms synthesized from 3D CBCT volumes. Pix2pixGAN also achieves the best performance in super resolution, reaching an average PSNR of 32.5 without introducing checkerboard or jagging artifacts. Our proposed automatic landmark detection method achieves an 86.7% successful detection rate within the clinically acceptable 2 mm range on the ISBI Test1 data, which is comparable to state-of-the-art methods. The method trained on conventional cephalograms can be applied directly to landmark detection in the synthetic cephalograms, achieving successful detection rates of 93.0% and 80.7% in the 4 mm precision range for cephalograms synthesized from 3D volumes and from 2D projections, respectively.
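The abstract's contrast enhancement applies a sigmoid to pixel intensities, mimicking the nonlinear optical response of X-ray film, and scores the synthetic images by PSNR against a reference. A minimal sketch of both operations is below; the slope and midpoint parameters (`alpha`, `beta`) are illustrative defaults, not the paper's fitted values.

```python
import numpy as np

def sigmoid_intensity_transform(image, alpha=10.0, beta=0.5):
    """Sigmoid-style contrast enhancement; alpha (slope) and beta
    (midpoint) are assumed values for illustration."""
    img = image.astype(np.float64)
    img = (img - img.min()) / (img.max() - img.min())  # normalize to [0, 1]
    out = 1.0 / (1.0 + np.exp(-alpha * (img - beta)))
    # rescale so the sigmoid output again spans exactly [0, 1]
    lo = 1.0 / (1.0 + np.exp(alpha * beta))
    hi = 1.0 / (1.0 + np.exp(-alpha * (1.0 - beta)))
    return (out - lo) / (hi - lo)

def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB, the metric used to compare
    synthetic cephalograms against the CBCT-derived reference."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)
```

The steep central slope of the sigmoid stretches mid-range intensities (where anatomy sits) while compressing the extremes, which is what raises the apparent contrast.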
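The successful detection rate quoted above counts the fraction of landmarks whose predicted position falls within a distance threshold (2 mm or 4 mm) of the ground truth. A sketch of that metric, assuming an isotropic pixel spacing (the spacing value here is hypothetical, not from the paper):

```python
import numpy as np

def successful_detection_rate(pred, gt, pixel_spacing_mm=0.1, threshold_mm=2.0):
    """Fraction of landmarks detected within threshold_mm of ground truth.
    pred, gt: (N, 2) arrays of landmark positions in pixel coordinates;
    pixel_spacing_mm is an assumed isotropic spacing for illustration."""
    dist_mm = np.linalg.norm((pred - gt) * pixel_spacing_mm, axis=1)
    return float(np.mean(dist_mm <= threshold_mm))
```

For example, at 0.1 mm spacing a 15-pixel error is 1.5 mm and counts as a success under the 2 mm criterion, while a 25-pixel error (2.5 mm) does not.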
Related papers
- 3D Nephrographic Image Synthesis in CT Urography with the Diffusion Model and Swin Transformer [3.8557197729550485]
The proposed approach effectively synthesizes high-quality 3D nephrographic phase images.
It can be used to reduce radiation dose in CTU by 33.3% without compromising image quality.
arXiv Detail & Related papers (2025-02-26T23:22:31Z)
- A 3D Facial Reconstruction Evaluation Methodology: Comparing Smartphone Scans with Deep Learning Based Methods Using Geometry and Morphometry Criteria [60.865754842465684]
Three-dimensional (3D) facial shape analysis has gained interest due to its potential clinical applications.
High cost of advanced 3D facial acquisition systems limits their widespread use, driving the development of low-cost acquisition and reconstruction methods.
This study introduces a novel evaluation methodology that goes beyond traditional geometry-based benchmarks by integrating morphometric shape analysis techniques.
arXiv Detail & Related papers (2025-02-13T15:47:45Z)
- High-Fidelity 3D Lung CT Synthesis in ARDS Swine Models Using Score-Based 3D Residual Diffusion Models [13.79974752491887]
Acute respiratory distress syndrome (ARDS) is a severe condition characterized by lung inflammation and respiratory failure, with a high mortality rate of approximately 40%.
Traditional imaging methods, such as chest X-rays, provide only two-dimensional views, limiting their effectiveness in fully assessing lung pathology.
This study synthesizes high-fidelity 3D lung CT from 2D generated X-ray images with associated physiological parameters using a score-based 3D residual diffusion model.
arXiv Detail & Related papers (2024-09-26T18:22:34Z)
- Enhancing Angular Resolution via Directionality Encoding and Geometric Constraints in Brain Diffusion Tensor Imaging [70.66500060987312]
Diffusion-weighted imaging (DWI) is a type of Magnetic Resonance Imaging (MRI) technique sensitised to the diffusivity of water molecules.
This work proposes DirGeo-DTI, a deep learning-based method to estimate reliable DTI metrics even from a set of DWIs acquired with the minimum theoretical number (6) of gradient directions.
arXiv Detail & Related papers (2024-09-11T11:12:26Z)
- OCTCube: A 3D foundation model for optical coherence tomography that improves cross-dataset, cross-disease, cross-device and cross-modality analysis [11.346324975034051]
OCTCube is a 3D foundation model pre-trained on 26,605 3D OCT volumes encompassing 1.62 million 2D OCT images.
It outperforms 2D models when predicting 8 retinal diseases in both inductive and cross-dataset settings.
It also shows superior performance on cross-device prediction and when predicting systemic diseases, such as diabetes and hypertension.
arXiv Detail & Related papers (2024-08-20T22:55:19Z)
- Super-resolution of biomedical volumes with 2D supervision [84.5255884646906]
Masked slice diffusion for super-resolution exploits the inherent equivalence in the data-generating distribution across all spatial dimensions of biological specimens.
We focus on the application of SliceR to stimulated Raman histology (SRH), characterized by its rapid acquisition of high-resolution 2D images but slow and costly optical z-sectioning.
arXiv Detail & Related papers (2024-04-15T02:41:55Z)
- Deep learning network to correct axial and coronal eye motion in 3D OCT retinal imaging [65.47834983591957]
We propose deep learning based neural networks to correct axial and coronal motion artifacts in OCT based on a single scan.
The experimental result shows that the proposed method can effectively correct motion artifacts and achieve smaller error than other methods.
arXiv Detail & Related papers (2023-05-27T03:55:19Z)
- Acute ischemic stroke lesion segmentation in non-contrast CT images using 3D convolutional neural networks [0.0]
We propose an automatic algorithm aimed at volumetric segmentation of acute ischemic stroke lesion in non-contrast computed tomography brain 3D images.
Our deep-learning approach is based on the popular 3D U-Net convolutional neural network architecture.
arXiv Detail & Related papers (2023-01-17T10:39:39Z)
- Automatic Diagnosis of Carotid Atherosclerosis Using a Portable Freehand 3D Ultrasound Imaging System [18.73291257371106]
A total of 127 3D carotid artery scans were acquired using a portable 3D US system.
A U-Net segmentation network was applied to extract the carotid artery on 2D transverse frame.
A novel 3D reconstruction algorithm using fast dot projection (FDP) method with position regularization was proposed to reconstruct the carotid artery volume.
arXiv Detail & Related papers (2023-01-08T17:35:36Z)
- Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images into medical image enhancement and utilize the enhanced results to cope with the low contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z)
- Automated Model Design and Benchmarking of 3D Deep Learning Models for COVID-19 Detection with Chest CT Scans [72.04652116817238]
We propose a differentiable neural architecture search (DNAS) framework to automatically search for the 3D DL models for 3D chest CT scans classification.
We also exploit the Class Activation Mapping (CAM) technique on our models to provide the interpretability of the results.
arXiv Detail & Related papers (2021-01-14T03:45:01Z)
- Revisiting 3D Context Modeling with Supervised Pre-training for Universal Lesion Detection in CT Slices [48.85784310158493]
We propose a Modified Pseudo-3D Feature Pyramid Network (MP3D FPN) to efficiently extract 3D context enhanced 2D features for universal lesion detection in CT slices.
With the novel pre-training method, the proposed MP3D FPN achieves state-of-the-art detection performance on the DeepLesion dataset.
The proposed 3D pre-trained weights can potentially be used to boost the performance of other 3D medical image analysis tasks.
arXiv Detail & Related papers (2020-12-16T07:11:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.