One-2-3-45: Any Single Image to 3D Mesh in 45 Seconds without Per-Shape
Optimization
- URL: http://arxiv.org/abs/2306.16928v1
- Date: Thu, 29 Jun 2023 13:28:16 GMT
- Title: One-2-3-45: Any Single Image to 3D Mesh in 45 Seconds without Per-Shape
Optimization
- Authors: Minghua Liu, Chao Xu, Haian Jin, Linghao Chen, Mukund Varma T, Zexiang
Xu, Hao Su
- Abstract summary: Single image 3D reconstruction is an important but challenging task that requires extensive knowledge of our natural world.
We propose a novel method that takes a single image of any object as input and generates a full 360-degree 3D textured mesh in a single feed-forward pass.
- Score: 30.951405623906258
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Single image 3D reconstruction is an important but challenging task that
requires extensive knowledge of our natural world. Many existing methods solve
this problem by optimizing a neural radiance field under the guidance of 2D
diffusion models but suffer from lengthy optimization time, 3D-inconsistent
results, and poor geometry. In this work, we propose a novel method that takes
a single image of any object as input and generates a full 360-degree 3D
textured mesh in a single feed-forward pass. Given a single image, we first use
a view-conditioned 2D diffusion model, Zero123, to generate multi-view images
of the input object, and then lift them to 3D space. Since traditional
reconstruction methods struggle with inconsistent multi-view predictions, we
build our 3D reconstruction module upon an SDF-based generalizable neural
surface reconstruction method and propose several critical training strategies
to enable the reconstruction of 360-degree meshes. Without costly
optimizations, our method reconstructs 3D shapes in significantly less time
than existing methods. Moreover, our method produces better geometry, generates
more 3D-consistent results, and adheres more closely to the input image. We
evaluate our approach on both synthetic data and in-the-wild images and
demonstrate its superiority in terms of both mesh quality and runtime. In
addition, our approach can seamlessly support the text-to-3D task by
integrating with off-the-shelf text-to-image diffusion models.
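As a rough illustration of the feed-forward pipeline described in the abstract (multi-view synthesis with a view-conditioned diffusion model, generalizable SDF-based reconstruction, mesh extraction), here is a minimal Python sketch. The function names, camera poses, and placeholder bodies are assumptions for illustration only; they are not the authors' code or the actual Zero123 / reconstruction-module APIs.

```python
# Minimal sketch of the feed-forward image-to-3D pipeline described above,
# assuming a simple image -> multi-view -> SDF -> mesh data flow. Function
# names and placeholder bodies are illustrative assumptions, not the real APIs.
import numpy as np


def generate_multiview(image, n_views=8):
    """Stand-in for the view-conditioned 2D diffusion model (Zero123):
    return (novel_view_image, camera_pose) pairs for the input object."""
    poses = [np.eye(4) for _ in range(n_views)]      # placeholder camera poses
    views = [image.copy() for _ in range(n_views)]   # placeholder synthesized views
    return list(zip(views, poses))


def reconstruct_sdf(view_pose_pairs, resolution=64):
    """Stand-in for the generalizable SDF-based surface reconstruction module:
    fuse (possibly inconsistent) multi-view predictions into a signed distance volume."""
    grid = np.linspace(-1.0, 1.0, resolution)
    x, y, z = np.meshgrid(grid, grid, grid, indexing="ij")
    return np.sqrt(x**2 + y**2 + z**2) - 0.5         # placeholder: sphere-like SDF


def extract_mesh(sdf, iso_level=0.0, band=0.05):
    """Stand-in for mesh extraction (e.g. marching cubes) from the SDF volume;
    here we only report how many voxels lie near the zero level set."""
    near_surface = np.count_nonzero(np.abs(sdf - iso_level) < band)
    return {"near_surface_voxels": int(near_surface)}


if __name__ == "__main__":
    input_image = np.zeros((256, 256, 3), dtype=np.float32)  # the single input image
    views = generate_multiview(input_image)   # 1) multi-view synthesis
    sdf = reconstruct_sdf(views)              # 2) feed-forward SDF prediction (no per-shape optimization)
    mesh = extract_mesh(sdf)                  # 3) 360-degree mesh extraction
    print(mesh)
```

In the actual system each stage would be a pretrained network; the sketch only mirrors the data flow between the stages named in the abstract.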
Related papers
- GTR: Improving Large 3D Reconstruction Models through Geometry and Texture Refinement [51.97726804507328]
We propose a novel approach for 3D mesh reconstruction from multi-view images.
Our method takes inspiration from large reconstruction models that use a transformer-based triplane generator and a Neural Radiance Field (NeRF) model trained on multi-view images.
arXiv Detail & Related papers (2024-06-09T05:19:24Z)
- Unique3D: High-Quality and Efficient 3D Mesh Generation from a Single Image [28.759158325097093]
Unique3D is a novel image-to-3D framework for efficiently generating high-quality 3D meshes from single-view images.
Our framework features state-of-the-art generation fidelity and strong generalizability.
arXiv Detail & Related papers (2024-05-30T17:59:54Z)
- LAM3D: Large Image-Point-Cloud Alignment Model for 3D Reconstruction from Single Image [64.94932577552458]
Large Reconstruction Models have made significant strides in the realm of automated 3D content generation from single or multiple input images.
Despite their success, these models often produce 3D meshes with geometric inaccuracies, stemming from the inherent challenges of deducing 3D shapes solely from image data.
We introduce a novel framework, the Large Image and Point Cloud Alignment Model (LAM3D), which utilizes 3D point cloud data to enhance the fidelity of generated 3D meshes.
arXiv Detail & Related papers (2024-05-24T15:09:12Z)
- GeoGS3D: Single-view 3D Reconstruction via Geometric-aware Diffusion Model and Gaussian Splatting [81.03553265684184]
We introduce GeoGS3D, a framework for reconstructing detailed 3D objects from single-view images.
We propose a novel metric, Gaussian Divergence Significance (GDS), to prune unnecessary operations during optimization.
Experiments demonstrate that GeoGS3D generates images with high consistency across views and reconstructs high-quality 3D objects.
arXiv Detail & Related papers (2024-03-15T12:24:36Z)
- ViewDiff: 3D-Consistent Image Generation with Text-to-Image Models [65.22994156658918]
We present a method that learns to generate multi-view images in a single denoising process from real-world data.
We design an autoregressive generation scheme that renders more 3D-consistent images at any viewpoint.
arXiv Detail & Related papers (2024-03-04T07:57:05Z)
- 2L3: Lifting Imperfect Generated 2D Images into Accurate 3D [16.66666619143761]
Multi-view (MV) 3D reconstruction is a promising solution to fuse generated MV images into consistent 3D objects.
However, the generated images usually suffer from inconsistent lighting, misaligned geometry, and sparse views, leading to poor reconstruction quality.
We present a novel 3D reconstruction framework that leverages intrinsic decomposition guidance, transient-mono prior guidance, and view augmentation to cope with the three issues.
arXiv Detail & Related papers (2024-01-29T02:30:31Z)
- Instant3D: Fast Text-to-3D with Sparse-View Generation and Large Reconstruction Model [68.98311213582949]
We propose Instant3D, a novel method that generates high-quality and diverse 3D assets from text prompts in a feed-forward manner.
Our method can generate diverse 3D assets of high visual quality within 20 seconds, two orders of magnitude faster than previous optimization-based methods.
arXiv Detail & Related papers (2023-11-10T18:03:44Z)
- Wonder3D: Single Image to 3D using Cross-Domain Diffusion [105.16622018766236]
Wonder3D is a novel method for efficiently generating high-fidelity textured meshes from single-view images.
To holistically improve the quality, consistency, and efficiency of image-to-3D tasks, we propose a cross-domain diffusion model.
arXiv Detail & Related papers (2023-10-23T15:02:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.