Improved $\alpha$-GAN architecture for generating 3D connected volumes
with an application to radiosurgery treatment planning
- URL: http://arxiv.org/abs/2207.11223v1
- Date: Wed, 13 Jul 2022 16:39:47 GMT
- Title: Improved $\alpha$-GAN architecture for generating 3D connected volumes
with an application to radiosurgery treatment planning
- Authors: Sanaz Mohammadjafari, Mucahit Cevik, Ayse Basar
- Abstract summary: We propose an improved version of 3D $\alpha$-GAN for generating connected 3D volumes.
Our model can successfully generate high-quality 3D tumor volumes and associated treatment specifications.
The capability of improved 3D $\alpha$-GAN makes it a valuable source for generating synthetic medical image data.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative Adversarial Networks (GANs) have gained significant attention in
several computer vision tasks for generating high-quality synthetic data.
Various medical applications including diagnostic imaging and radiation therapy
can benefit greatly from synthetic data generation due to data scarcity in the
domain. However, medical image data is typically kept in 3D space, and
generative models suffer from the curse of dimensionality when generating
such synthetic data. In this paper, we investigate the potential of GANs for
generating connected 3D volumes. We propose an improved version of 3D
$\alpha$-GAN by incorporating various architectural enhancements. On a
synthetic dataset of connected 3D spheres and ellipsoids, our model can
generate fully connected 3D shapes with geometrical characteristics similar to
those of the training data. We also show that our 3D GAN model can successfully
generate high-quality 3D tumor volumes and associated treatment specifications
(e.g., isocenter locations). Similar moment invariants to the training data as
well as fully connected 3D shapes confirm that improved 3D $\alpha$-GAN
implicitly learns the training data distribution, and generates
realistic-looking samples. The capability of improved 3D $\alpha$-GAN makes it
a valuable source for generating synthetic medical image data that can help
future research in this domain.
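The abstract evaluates generated shapes by full 3D connectivity and by moment invariants matched to the training data. A minimal sketch of such a connectivity check is shown below; the use of `scipy.ndimage.label` with 26-connectivity is an illustrative assumption, not the paper's stated implementation:

```python
import numpy as np
from scipy import ndimage

def is_fully_connected(volume, threshold=0.5):
    """Return True if the thresholded 3D volume is a single connected component."""
    binary = volume > threshold
    # 26-connectivity in 3D: every neighboring voxel (face, edge, corner) counts.
    structure = np.ones((3, 3, 3), dtype=int)
    _, num_components = ndimage.label(binary, structure=structure)
    return num_components == 1

# Example: a solid sphere inside a 32^3 grid forms one connected component.
grid = np.indices((32, 32, 32)) - 15.5
sphere = (grid ** 2).sum(axis=0) <= 8.0 ** 2
print(is_fully_connected(sphere.astype(float)))  # True
```

A generated sample failing this check would contain disconnected fragments, which is the failure mode the improved architecture is reported to suppress.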
Related papers
- E3D-GPT: Enhanced 3D Visual Foundation for Medical Vision-Language Model [23.56751925900571]
The development of 3D medical vision-language models holds significant potential for disease diagnosis and patient treatment.
We utilize self-supervised learning to construct a 3D visual foundation model for extracting 3D visual features.
We apply 3D spatial convolutions to aggregate and project high-level image features, reducing computational complexity.
Our model demonstrates superior performance compared to existing methods in report generation, visual question answering, and disease diagnosis.
arXiv Detail & Related papers (2024-10-18T06:31:40Z) - DIRECT-3D: Learning Direct Text-to-3D Generation on Massive Noisy 3D Data [50.164670363633704]
We present DIRECT-3D, a diffusion-based 3D generative model for creating high-quality 3D assets from text prompts.
Our model is directly trained on extensive noisy and unaligned 'in-the-wild' 3D assets.
We achieve state-of-the-art performance in both single-class generation and text-to-3D generation.
arXiv Detail & Related papers (2024-06-06T17:58:15Z) - Generative Enhancement for 3D Medical Images [74.17066529847546]
We propose GEM-3D, a novel generative approach to the synthesis of 3D medical images.
Our method begins with a 2D slice, referred to as the informed slice, which serves as the patient prior, and propagates the generation process using a 3D segmentation mask.
By decomposing the 3D medical images into masks and patient prior information, GEM-3D offers a flexible yet effective solution for generating versatile 3D images.
arXiv Detail & Related papers (2024-03-19T15:57:04Z) - Pushing Auto-regressive Models for 3D Shape Generation at Capacity and Scalability [118.26563926533517]
Auto-regressive models have achieved impressive results in 2D image generation by modeling joint distributions in grid space.
We extend auto-regressive models to 3D domains, and seek a stronger ability of 3D shape generation by improving auto-regressive models at capacity and scalability simultaneously.
arXiv Detail & Related papers (2024-02-19T15:33:09Z) - 3D GANs and Latent Space: A comprehensive survey [0.0]
3D GANs are a new type of generative model used for 3D reconstruction, point cloud reconstruction, and 3D semantic scene completion.
The choice of distribution for noise is critical as it represents the latent space.
In this work, we explore the latent space and 3D GANs, examine several GAN variants and training methods to gain insights into improving 3D GAN training, and suggest potential future directions for further research.
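The survey's remark that the noise distribution defines the latent space can be made concrete with a minimal sketch of two common GAN latent priors (the distributions, batch size, and dimensionality here are illustrative assumptions, not drawn from the survey):

```python
import numpy as np

rng = np.random.default_rng(0)
batch, latent_dim = 4, 128

# Two common latent priors for a GAN generator's input z.
# The prior shapes the latent space the generator learns to map from.
z_gaussian = rng.standard_normal((batch, latent_dim))      # N(0, 1) per component
z_uniform = rng.uniform(-1.0, 1.0, size=(batch, latent_dim))  # U(-1, 1) per component

print(z_gaussian.shape, z_uniform.shape)  # (4, 128) (4, 128)
```

Swapping the prior changes the geometry of the latent space (e.g., Gaussian mass concentrates near a hypersphere in high dimensions), which is one reason the choice matters for interpolation and sampling quality.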
arXiv Detail & Related papers (2023-04-08T06:36:07Z) - NeRF-GAN Distillation for Efficient 3D-Aware Generation with
Convolutions [97.27105725738016]
The integration of Neural Radiance Fields (NeRFs) and generative models, such as Generative Adversarial Networks (GANs), has transformed 3D-aware generation from single-view images.
We propose a simple and effective method, based on re-using the well-disentangled latent space of a pre-trained NeRF-GAN in a pose-conditioned convolutional network to directly generate 3D-consistent images corresponding to the underlying 3D representations.
arXiv Detail & Related papers (2023-03-22T18:59:48Z) - Get3DHuman: Lifting StyleGAN-Human into a 3D Generative Model using
Pixel-aligned Reconstruction Priors [56.192682114114724]
Get3DHuman is a novel 3D human framework that can significantly boost the realism and diversity of the generated outcomes.
Our key observation is that the 3D generator can profit from human-related priors learned through 2D human generators and 3D reconstructors.
arXiv Detail & Related papers (2023-02-02T15:37:46Z) - Deep Generative Models on 3D Representations: A Survey [81.73385191402419]
Generative models aim to learn the distribution of observed data by generating new instances.
Recently, researchers have started to shift focus from 2D to 3D space.
However, representing 3D data poses significantly greater challenges.
arXiv Detail & Related papers (2022-10-27T17:59:50Z) - Generating 3D structures from a 2D slice with GAN-based dimensionality
expansion [0.0]
Generative adversarial networks (GANs) can be trained to generate 3D image data, which is useful for design optimisation.
We introduce a generative adversarial network architecture, SliceGAN, which is able to synthesise high fidelity 3D datasets using a single representative 2D image.
arXiv Detail & Related papers (2021-02-10T18:46:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.