Advances in 3D Generation: A Survey
- URL: http://arxiv.org/abs/2401.17807v1
- Date: Wed, 31 Jan 2024 13:06:48 GMT
- Title: Advances in 3D Generation: A Survey
- Authors: Xiaoyu Li, Qi Zhang, Di Kang, Weihao Cheng, Yiming Gao, Jingbo Zhang,
Zhihao Liang, Jing Liao, Yan-Pei Cao, Ying Shan
- Abstract summary: The field of 3D content generation is developing rapidly, enabling the creation of increasingly high-quality and diverse 3D models.
Specifically, we introduce the 3D representations that serve as the backbone for 3D generation.
We provide a comprehensive overview of the rapidly growing literature on generation methods, categorized by the type of algorithmic paradigms.
- Score: 54.95024616672868
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generating 3D models lies at the core of computer graphics and has been the
focus of decades of research. With the emergence of advanced neural
representations and generative models, the field of 3D content generation is
developing rapidly, enabling the creation of increasingly high-quality and
diverse 3D models. The rapid growth of this field makes it difficult to stay
abreast of all recent developments. In this survey, we aim to introduce the
fundamental methodologies of 3D generation methods and establish a structured
roadmap, encompassing 3D representation, generation methods, datasets, and
corresponding applications. Specifically, we introduce the 3D representations
that serve as the backbone for 3D generation. Furthermore, we provide a
comprehensive overview of the rapidly growing literature on generation methods,
categorized by the type of algorithmic paradigms, including feedforward
generation, optimization-based generation, procedural generation, and
generative novel view synthesis. Lastly, we discuss available datasets,
applications, and open challenges. We hope this survey will help readers
explore this exciting topic and foster further advancements in the field of 3D
content generation.
Related papers
- A Survey On Text-to-3D Contents Generation In The Wild [5.875257756382124]
3D content creation plays a vital role in various applications, such as gaming, robotics simulation, and virtual reality.
To address this challenge, text-to-3D generation technologies have emerged as a promising solution for automating 3D creation.
arXiv Detail & Related papers (2024-05-15T15:23:22Z)
- Text-to-3D Shape Generation [18.76771062964711]
Computational systems that can perform text-to-3D shape generation have captivated the popular imagination.
We provide a survey of the underlying technology and methods enabling text-to-3D shape generation to summarize the background literature.
We then derive a systematic categorization of recent work on text-to-3D shape generation based on the type of supervision data required.
arXiv Detail & Related papers (2024-03-20T04:03:44Z)
- A Comprehensive Survey on 3D Content Generation [148.434661725242]
3D content generation has both academic and practical value.
A new taxonomy is proposed that categorizes existing approaches into three types: 3D native generative methods, 2D prior-based 3D generative methods, and hybrid 3D generative methods.
arXiv Detail & Related papers (2024-02-02T06:20:44Z)
- Progress and Prospects in 3D Generative AI: A Technical Overview including 3D human [51.58094069317723]
This paper aims to provide a comprehensive overview and summary of the relevant papers published mostly during the latter half of 2023.
It begins by discussing AI-generated 3D object models, followed by generated 3D human models, and finally generated 3D human motions, culminating in a conclusive summary and a vision for the future.
arXiv Detail & Related papers (2024-01-05T03:41:38Z)
- 3DGEN: A GAN-based approach for generating novel 3D models from image data [5.767281919406463]
We present 3DGEN, a model that leverages the recent work on both Neural Radiance Fields for object reconstruction and GAN-based image generation.
We show that the proposed architecture can generate plausible meshes for objects of the same category as the training images and compare the resulting meshes with the state-of-the-art baselines.
arXiv Detail & Related papers (2023-12-13T12:24:34Z) - Pushing the Limits of 3D Shape Generation at Scale [65.24420181727615]
We present a significant advance in 3D shape generation by scaling it to unprecedented size.
We have developed Argus-3D, a model with 3.6 billion trainable parameters, the largest 3D shape generation model to date.
arXiv Detail & Related papers (2023-06-20T13:01:19Z) - T2TD: Text-3D Generation Model based on Prior Knowledge Guidance [74.32278935880018]
We propose a novel text-3D generation model (T2TD) that introduces related shapes or textual information as prior knowledge to improve the performance of 3D generation.
Our approach significantly improves 3D model generation quality and outperforms the SOTA methods on the text2shape datasets.
arXiv Detail & Related papers (2023-05-25T06:05:52Z) - Generative AI meets 3D: A Survey on Text-to-3D in AIGC Era [36.66506237523448]
Generative AI has made significant progress in recent years, with text-guided content generation being the most practical.
Thanks to advancements in text-to-image and 3D modeling technologies, like neural radiance field (NeRF), text-to-3D has emerged as a nascent yet highly active research field.
arXiv Detail & Related papers (2023-05-10T13:26:08Z) - Deep Generative Models on 3D Representations: A Survey [81.73385191402419]
Generative models aim to learn the distribution of observed data by generating new instances.
Recently, researchers have started to shift focus from 2D to 3D space; however, representing 3D data poses significantly greater challenges.
arXiv Detail & Related papers (2022-10-27T17:59:50Z)