Procedural 3D Terrain Generation using Generative Adversarial Networks
- URL: http://arxiv.org/abs/2010.06411v1
- Date: Tue, 13 Oct 2020 14:15:10 GMT
- Title: Procedural 3D Terrain Generation using Generative Adversarial Networks
- Authors: Emmanouil Panagiotou and Eleni Charou
- Abstract summary: We use Generative Adversarial Networks (GAN) to yield realistic 3D environments based on the distribution of remotely sensed images of landscapes, captured by satellites or drones.
We are able to construct 3D scenery consisting of a plausible height distribution and colorization, in relation to the remotely sensed landscapes provided during training.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Procedural 3D Terrain generation has become a necessity in open world games,
as it can provide unlimited content, through a functionally infinite number of
different areas, for players to explore. In our approach, we use Generative
Adversarial Networks (GAN) to yield realistic 3D environments based on the
distribution of remotely sensed images of landscapes, captured by satellites or
drones. Our task consists of synthesizing a random but plausible RGB satellite
image and generating a corresponding Height Map in the form of a 3D point cloud
that will serve as an appropriate mesh of the landscape. For the first step, we
utilize a GAN trained with satellite images that manages to learn the
distribution of the dataset, creating novel satellite images. For the second
part, we need a one-to-one mapping from RGB images to Digital Elevation Models
(DEM). We deploy a Conditional Generative Adversarial Network (CGAN), which is
the state-of-the-art approach to image-to-image translation, to generate a
plausible height map for every randomly generated image of the first model.
Combining the generated DEM and RGB image, we are able to construct 3D scenery
consisting of a plausible height distribution and colorization, in relation to
the remotely sensed landscapes provided during training.
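The two-stage pipeline described in the abstract can be summarized in a short sketch. Below is a minimal, illustrative PyTorch version of the inference path: sample a fake RGB satellite image from an unconditional GAN generator, translate it into a height map with a pix2pix-style conditional generator, and lift the DEM plus RGB colors into a colored 3D point cloud. The class names (RGBGenerator, HeightMapTranslator), the layer sizes, the 64x64 resolution, and the height_scale factor are assumptions made for the sketch, not the architectures or parameters used in the paper.

```python
# Minimal sketch of the two-stage terrain pipeline (illustrative stand-ins,
# not the authors' networks).
import torch
import torch.nn as nn
import numpy as np

class RGBGenerator(nn.Module):
    """Stand-in for the unconditional GAN generator that synthesizes
    a fake RGB satellite image from a latent vector."""
    def __init__(self, latent_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 128, 4, 1, 0), nn.ReLU(),  # 4x4
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),          # 8x8
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),           # 16x16
            nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.ReLU(),           # 32x32
            nn.ConvTranspose2d(16, 3, 4, 2, 1), nn.Tanh(),            # 64x64 RGB
        )
    def forward(self, z):
        return self.net(z)

class HeightMapTranslator(nn.Module):
    """Stand-in for the conditional (pix2pix-style) generator that maps
    an RGB satellite image to a single-channel height map (DEM)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),  # heights in [0, 1]
        )
    def forward(self, rgb):
        return self.net(rgb)

def dem_to_point_cloud(dem, rgb, height_scale=50.0):
    """Lift an (H, W) height map and matching RGB image into an (H*W, 6)
    colored point cloud: x, y, z, r, g, b."""
    h, w = dem.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), dem.ravel() * height_scale], axis=1)
    colors = rgb.reshape(-1, 3)
    return np.concatenate([pts, colors], axis=1)

# Inference: latent noise -> fake RGB image -> DEM -> colored point cloud.
rgb_gen = RGBGenerator().eval()          # trained weights would be loaded here
dem_gen = HeightMapTranslator().eval()   # trained weights would be loaded here
with torch.no_grad():
    z = torch.randn(1, 100, 1, 1)        # random latent code
    fake_rgb = rgb_gen(z)                # (1, 3, 64, 64), values in [-1, 1]
    dem = dem_gen(fake_rgb)              # (1, 1, 64, 64), values in [0, 1]

rgb_np = (fake_rgb[0].permute(1, 2, 0).numpy() + 1) / 2  # rescale to [0, 1]
dem_np = dem[0, 0].numpy()
cloud = dem_to_point_cloud(dem_np, rgb_np)               # (4096, 6) points
print(cloud.shape)
```

In practice the two stand-in networks would be replaced by the trained GAN and CGAN generators, and the resulting point cloud would be triangulated into a mesh and textured with the generated RGB image for rendering.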
Related papers
- GenRC: Generative 3D Room Completion from Sparse Image Collections [17.222652213723485]
GenRC is an automated training-free pipeline to complete a room-scale 3D mesh with high-fidelity textures.
E-Diffusion generates a view-consistent panoramic RGBD image which ensures global geometry and appearance consistency.
GenRC outperforms state-of-the-art methods under most appearance and geometric metrics on ScanNet and ARKitScenes datasets.
arXiv Detail & Related papers (2024-07-17T18:10:40Z)
- Sat2Scene: 3D Urban Scene Generation from Satellite Images with Diffusion [77.34078223594686]
We propose a novel architecture for direct 3D scene generation by introducing diffusion models into 3D sparse representations and combining them with neural rendering techniques.
Specifically, our approach generates texture colors at the point level for a given geometry using a 3D diffusion model first, which is then transformed into a scene representation in a feed-forward manner.
Experiments in two city-scale datasets show that our model demonstrates proficiency in generating photo-realistic street-view image sequences and cross-view urban scenes from satellite imagery.
arXiv Detail & Related papers (2024-01-19T16:15:37Z)
- Inpaint4DNeRF: Promptable Spatio-Temporal NeRF Inpainting with Generative Diffusion Models [59.96172701917538]
Current Neural Radiance Fields (NeRF) can generate photorealistic novel views.
This paper proposes Inpaint4DNeRF to capitalize on state-of-the-art stable diffusion models.
arXiv Detail & Related papers (2023-12-30T11:26:55Z)
- GETAvatar: Generative Textured Meshes for Animatable Human Avatars [69.56959932421057]
We study the problem of 3D-aware full-body human generation, aiming at creating animatable human avatars with high-quality geometries and textures.
We propose GETAvatar, a Generative model that directly generates Explicit Textured 3D meshes for animatable human Avatars.
arXiv Detail & Related papers (2023-10-04T10:30:24Z)
- LDM3D: Latent Diffusion Model for 3D [5.185393944663932]
This research paper proposes a Latent Diffusion Model for 3D (LDM3D) that generates both image and depth map data from a given text prompt.
We also develop an application called DepthFusion, which uses the generated RGB images and depth maps to create immersive and interactive 360-degree-view experiences.
arXiv Detail & Related papers (2023-05-18T10:15:06Z)
- Shape, Pose, and Appearance from a Single Image via Bootstrapped Radiance Field Inversion [54.151979979158085]
We introduce a principled end-to-end reconstruction framework for natural images, where accurate ground-truth poses are not available.
We leverage an unconditional 3D-aware generator, to which we apply a hybrid inversion scheme where a model produces a first guess of the solution.
Our framework can de-render an image in as few as 10 steps, enabling its use in practical scenarios.
arXiv Detail & Related papers (2022-11-21T17:42:42Z)
- Simple and Effective Synthesis of Indoor 3D Scenes [78.95697556834536]
We study the problem of synthesizing immersive 3D indoor scenes from one or more images.
Our aim is to generate high-resolution images and videos from novel viewpoints.
We propose an image-to-image GAN that maps directly from reprojections of incomplete point clouds to full high-resolution RGB-D images.
arXiv Detail & Related papers (2022-04-06T17:54:46Z)
- 3D-aware Image Synthesis via Learning Structural and Textural Representations [39.681030539374994]
We propose VolumeGAN, for high-fidelity 3D-aware image synthesis, through explicitly learning a structural representation and a textural representation.
Our approach achieves substantially higher image quality and better 3D control than previous methods.
arXiv Detail & Related papers (2021-12-20T18:59:40Z)
- OSTeC: One-Shot Texture Completion [86.23018402732748]
We propose an unsupervised approach for one-shot 3D facial texture completion.
The proposed approach rotates the input image in 3D and fills in the unseen regions by reconstructing the rotated image in a 2D face generator.
We frontalize the target image by projecting the completed texture into the generator.
arXiv Detail & Related papers (2020-12-30T23:53:26Z)
This list is automatically generated from the titles and abstracts of the papers listed on this site.