PanoDiff-SR: Synthesizing Dental Panoramic Radiographs using Diffusion and Super-resolution
- URL: http://arxiv.org/abs/2507.09227v1
- Date: Sat, 12 Jul 2025 09:52:10 GMT
- Title: PanoDiff-SR: Synthesizing Dental Panoramic Radiographs using Diffusion and Super-resolution
- Authors: Sanyam Jain, Bruna Neves de Freitas, Andreas Basse-O'Connor, Alexandros Iosifidis, Ruben Pauwels
- Abstract summary: We propose a combination of diffusion-based generation (PanoDiff) and Super-Resolution (SR) for generating synthetic dental panoramic radiographs (PRs). The former generates a low-resolution (LR) seed of a PR, which is then processed by the SR model to yield a high-resolution (HR) PR of size 1024 × 512. For SR, we propose a state-of-the-art transformer that learns local-global relationships, resulting in sharper edges and textures.
- Score: 60.970656010712275
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There has been increasing interest in the generation of high-quality, realistic synthetic medical images in recent years. Such synthetic datasets can mitigate the scarcity of public datasets for artificial intelligence research, and can also be used for educational purposes. In this paper, we propose a combination of diffusion-based generation (PanoDiff) and Super-Resolution (SR) for generating synthetic dental panoramic radiographs (PRs). The former generates a low-resolution (LR) seed of a PR (256 × 128) which is then processed by the SR model to yield a high-resolution (HR) PR of size 1024 × 512. For SR, we propose a state-of-the-art transformer that learns local-global relationships, resulting in sharper edges and textures. Experimental results demonstrate a Fréchet inception distance score of 40.69 between 7243 real and synthetic images (in HR). Inception scores were 2.55, 2.30, 2.90 and 2.98 for real HR, synthetic HR, real LR and synthetic LR images, respectively. Among a diverse group of six clinical experts, all evaluating a mixture of 100 synthetic and 100 real PRs in a time-limited observation, the average accuracy in distinguishing real from synthetic images was 68.5% (with 50% corresponding to random guessing).
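The two-stage pipeline described in the abstract (a diffusion model producing a 256 × 128 LR seed, followed by a 4× SR stage) can be sketched at the shape level as follows. Both functions here are placeholder assumptions standing in for the paper's actual models, not its implementation:

```python
import numpy as np

def diffusion_sample(rng, h=128, w=256):
    """Placeholder for the PanoDiff sampler: returns a low-resolution
    'seed' panoramic radiograph as a float array in [0, 1]."""
    return rng.random((h, w))

def super_resolve(lr, scale=4):
    """Placeholder for the transformer-based SR stage: a simple
    nearest-neighbour upsample stands in for the learned 4x model."""
    return np.repeat(np.repeat(lr, scale, axis=0), scale, axis=1)

rng = np.random.default_rng(0)
lr_seed = diffusion_sample(rng)       # (128, 256) LR seed
hr_image = super_resolve(lr_seed)     # (512, 1024) HR output
print(lr_seed.shape, hr_image.shape)
```

The point of the sketch is only the data flow: a cheap LR sample is generated first, and all the expensive high-frequency detail is delegated to the SR model.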
Related papers
- Proportional Sensitivity in Generative Adversarial Network (GAN)-Augmented Brain Tumor Classification Using Convolutional Neural Network [2.1669753476462015]
Generative Adversarial Networks (GANs) have shown potential in expanding limited medical imaging datasets. This study explores how different ratios of GAN-generated and real brain tumor MRI images impact the performance of a CNN in classifying healthy vs. tumorous scans.
arXiv Detail & Related papers (2025-06-20T17:12:03Z) - Improving Heart Rejection Detection in XPCI Images Using Synthetic Data Augmentation [0.0]
StyleGAN was trained on available 3R biopsy patches and subsequently used to generate 10,000 realistic synthetic images. These were combined with real 0R samples (that is, samples without rejection) in various configurations to train ResNet-18 classifiers for binary rejection classification. Results demonstrate that synthetic data improves classification performance, particularly when used in combination with real samples.
arXiv Detail & Related papers (2025-05-26T09:26:36Z) - CO-SPY: Combining Semantic and Pixel Features to Detect Synthetic Images by AI [58.35348718345307]
Current efforts to distinguish between real and AI-generated images may lack generalization. We propose a novel framework, Co-Spy, that first enhances existing semantic features. We also create Co-Spy-Bench, a comprehensive dataset comprising 5 real image datasets and 22 state-of-the-art generative models.
arXiv Detail & Related papers (2025-03-24T01:59:29Z) - Improving text-conditioned latent diffusion for cancer pathology [0.5919433278490629]
Generative models have allowed for hyperrealistic data synthesis. One algorithm for synthesising a realistic image is diffusion; it iteratively converts an image to noise and learns the recovery process from this noise. VAEs have allowed us to learn the representation of complex high-resolution images in a latent space. The marriage of diffusion and VAEs allows us to carry out diffusion in the latent space of an autoencoder, enabling us to leverage the realistic generative capabilities of diffusion.
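The iterative noising process mentioned in this summary is, in standard DDPM notation, x_t = √ᾱ_t · x_0 + √(1 − ᾱ_t) · ε. A minimal numpy sketch of applying it to a hypothetical VAE latent vector (the 32-dim code and the linear beta schedule are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical VAE latent: a 32-dim code standing in for an encoded image.
z0 = rng.standard_normal(32)

# Linear beta schedule, as commonly used for DDPM-style training.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)  # cumulative signal-retention factor

def noise_latent(z0, t, eps):
    """Forward diffusion step: z_t = sqrt(abar_t)*z0 + sqrt(1-abar_t)*eps."""
    return np.sqrt(alpha_bar[t]) * z0 + np.sqrt(1.0 - alpha_bar[t]) * eps

eps = rng.standard_normal(32)
z_mid = noise_latent(z0, 500, eps)    # partially noised latent
z_end = noise_latent(z0, T - 1, eps)  # nearly pure noise (abar is tiny)
```

The denoising network then learns to invert this process step by step; operating on z rather than on pixels is what makes latent diffusion tractable at high resolutions.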
arXiv Detail & Related papers (2024-12-09T13:38:19Z) - Merging synthetic and real embryo data for advanced AI predictions [69.07284335967019]
We train two generative models using two datasets-one we created and made publicly available, and one existing public dataset-to generate synthetic embryo images at various cell stages. These were combined with real images to train classification models for embryo cell stage prediction. Our results demonstrate that incorporating synthetic images alongside real data improved classification performance, with the model achieving 97% accuracy compared to 94.5% when trained solely on real data.
arXiv Detail & Related papers (2024-12-02T08:24:49Z) - Bi-parametric prostate MR image synthesis using pathology and sequence-conditioned stable diffusion [3.290987481767681]
We propose an image synthesis mechanism for multi-sequence prostate MR images conditioned on text.
We generate paired bi-parametric images conditioned on pathology and sequence information.
We validate our method using 2D image slices from real suspected prostate cancer patients.
arXiv Detail & Related papers (2023-03-03T17:24:39Z) - Robust deep learning for eye fundus images: Bridging real and synthetic data for enhancing generalization [0.8599177028761124]
This work compares ten different GAN architectures to generate synthetic eye-fundus images with and without AMD.
StyleGAN2 reached the lowest Fréchet Inception Distance (166.17), and clinicians could not accurately differentiate between real and synthetic images.
The accuracy rates were 82.8% for the test set and 81.3% for the STARE dataset, demonstrating the model's generalizability.
arXiv Detail & Related papers (2022-03-25T18:42:20Z) - Towards Ultrafast MRI via Extreme k-Space Undersampling and Superresolution [65.25508348574974]
We go below the MRI acceleration factors reported by all published papers that reference the original fastMRI challenge.
We consider powerful deep learning based image enhancement methods to compensate for the underresolved images.
The quality of the reconstructed images surpasses that of the other methods, yielding an MSE of 0.00114, a PSNR of 29.6 dB, and an SSIM of 0.956 at x16 acceleration factor.
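For reference, the reported MSE and PSNR are directly related by PSNR = 10 · log10(MAX² / MSE). With a signal maximum of 1, an MSE of 0.00114 corresponds to roughly 29.4 dB, consistent (up to rounding of the MSE) with the quoted 29.6 dB:

```python
import math

def psnr_from_mse(mse, max_val=1.0):
    """Peak signal-to-noise ratio in dB for a given mean squared error."""
    return 10.0 * math.log10(max_val ** 2 / mse)

print(round(psnr_from_mse(0.00114), 2))  # 29.43
```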
arXiv Detail & Related papers (2021-03-04T10:45:01Z) - Perception Consistency Ultrasound Image Super-resolution via Self-supervised CycleGAN [63.49373689654419]
We propose a new perception consistency ultrasound image super-resolution (SR) method based on self-supervision and a cycle generative adversarial network (CycleGAN).
We first generate the HR fathers and the LR sons of the test ultrasound LR image through image enhancement.
We then make full use of the cycle loss of LR-SR-LR and HR-LR-SR and the adversarial characteristics of the discriminator to promote the generator to produce better perceptually consistent SR results.
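The LR-SR-LR cycle loss referred to here is the standard CycleGAN cycle-consistency term, ||G_BA(G_AB(x)) − x||₁. A toy numpy sketch with placeholder up/down mappings (not the paper's networks):

```python
import numpy as np

def upsample(lr):
    """Stand-in for the LR -> SR generator: 2x nearest-neighbour."""
    return np.repeat(np.repeat(lr, 2, axis=0), 2, axis=1)

def downsample(hr):
    """Stand-in for the SR -> LR mapping: 2x2 mean pooling."""
    h, w = hr.shape
    return hr.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def cycle_loss(x):
    """L1 cycle-consistency: LR -> SR -> LR should recover the input."""
    return np.abs(downsample(upsample(x)) - x).mean()

rng = np.random.default_rng(0)
lr = rng.random((8, 8))
print(cycle_loss(lr))  # ~0 for these (near-)lossless toy mappings
```

In the real method the two mappings are learned networks, so the cycle loss is non-zero and acts as a training signal penalising information lost in either direction.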
arXiv Detail & Related papers (2020-12-28T08:24:04Z) - Unlimited Resolution Image Generation with R2D2-GANs [69.90258455164513]
We present a novel simulation technique for generating high quality images of any predefined resolution.
This method can be used to synthesize sonar scans of size equivalent to those collected during a full-length mission.
The data produced is continuous, realistic-looking, and can be generated at least two times faster than real-time acquisition.
arXiv Detail & Related papers (2020-03-02T17:49:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.