3D-FUTURE: 3D Furniture shape with TextURE
- URL: http://arxiv.org/abs/2009.09633v1
- Date: Mon, 21 Sep 2020 06:26:39 GMT
- Title: 3D-FUTURE: 3D Furniture shape with TextURE
- Authors: Huan Fu, Rongfei Jia, Lin Gao, Mingming Gong, Binqiang Zhao, Steve
Maybank, Dacheng Tao
- Abstract summary: 3D Furniture shape with TextURE (3D-FUTURE): a richly-annotated and large-scale repository of 3D furniture shapes in the household scenario.
At the time of this technical report, 3D-FUTURE contains 20,240 clean and realistic synthetic images of 5,000 different rooms.
There are 9,992 unique detailed 3D instances of furniture with high-resolution textures.
- Score: 100.62519619022679
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The 3D CAD shapes in current 3D benchmarks are mostly collected from online
model repositories. They therefore tend to lack geometric detail and informative
textures, making them less suitable for comprehensive, fine-grained research in
areas such as high-quality 3D mesh and texture recovery. This paper presents
3D Furniture shape with TextURE (3D-FUTURE): a richly annotated, large-scale
repository of 3D furniture shapes in household scenarios. At the time of this
technical report, 3D-FUTURE contains 20,240 clean and realistic synthetic images
of 5,000 different rooms, along with 9,992 unique, detailed 3D furniture
instances with high-resolution textures. Experienced designers developed the
room scenes, and the 3D CAD shapes in the scenes are used for industrial
production. Given the well-organized 3D-FUTURE, we provide baseline experiments
on several widely studied tasks, such as joint 2D instance segmentation and 3D
object pose estimation, image-based 3D shape retrieval, 3D object reconstruction
from a single image, and texture recovery for 3D shapes, to facilitate future
research on our database.
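One of the baseline tasks listed above, image-based 3D shape retrieval, can be sketched at a high level as nearest-neighbor search in a shared embedding space. The sketch below is purely illustrative and is not the paper's actual pipeline or API: it assumes image and shape embeddings have already been computed by some model, and simply ranks shapes by cosine similarity to the query image.

```python
# Illustrative sketch of image-based 3D shape retrieval (hypothetical setup,
# not the 3D-FUTURE baseline code): rank gallery shape embeddings by cosine
# similarity to a query image embedding in a shared feature space.
import numpy as np

def cosine_similarity(query, gallery):
    # query: (d,) embedding; gallery: (n, d) embeddings
    query = query / np.linalg.norm(query)
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return gallery @ query

def retrieve_shapes(image_embedding, shape_embeddings, top_k=5):
    """Return indices of the top_k shapes most similar to the query image."""
    scores = cosine_similarity(image_embedding, shape_embeddings)
    return np.argsort(-scores)[:top_k]

# Toy example: 4 shapes in a 3-D feature space; the query is closest to shape 2.
gallery = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.7, 0.7, 0.1],
                    [0.0, 0.0, 1.0]])
query = np.array([0.6, 0.6, 0.0])
ranked = retrieve_shapes(query, gallery, top_k=2)  # ranked[0] == 2
```

In practice the embeddings would come from a learned encoder (e.g. rendered views of each shape and the query image passed through the same network), and approximate nearest-neighbor indices replace the brute-force sort at scale.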
Related papers
- Uni3D: Exploring Unified 3D Representation at Scale [66.26710717073372]
We present Uni3D, a 3D foundation model to explore the unified 3D representation at scale.
Uni3D uses a 2D ViT end-to-end pretrained to align the 3D point cloud features with the image-text aligned features.
We show that the strong Uni3D representation also enables applications such as 3D painting and retrieval in the wild.
arXiv Detail & Related papers (2023-10-10T16:49:21Z)
- CC3D: Layout-Conditioned Generation of Compositional 3D Scenes [49.281006972028194]
We introduce CC3D, a conditional generative model that synthesizes complex 3D scenes conditioned on 2D semantic scene layouts.
Our evaluations on synthetic 3D-FRONT and real-world KITTI-360 datasets demonstrate that our model generates scenes of improved visual and geometric quality.
arXiv Detail & Related papers (2023-03-21T17:59:02Z)
- MobileBrick: Building LEGO for 3D Reconstruction on Mobile Devices [78.20154723650333]
High-quality 3D ground-truth shapes are critical for 3D object reconstruction evaluation.
We introduce a novel multi-view RGBD dataset captured using a mobile device.
We obtain precise 3D ground-truth shape without relying on high-end 3D scanners.
arXiv Detail & Related papers (2023-03-03T14:02:50Z)
- OmniObject3D: Large-Vocabulary 3D Object Dataset for Realistic Perception, Reconstruction and Generation [107.71752592196138]
We propose OmniObject3D, a large vocabulary 3D object dataset with massive high-quality real-scanned 3D objects.
It comprises 6,000 scanned objects in 190 daily categories, sharing common classes with popular 2D datasets.
Each 3D object is captured with both 2D and 3D sensors, providing textured meshes, point clouds, multiview rendered images, and multiple real-captured videos.
arXiv Detail & Related papers (2023-01-18T18:14:18Z)
- GET3D: A Generative Model of High Quality 3D Textured Shapes Learned from Images [72.15855070133425]
We introduce GET3D, a Generative model that directly generates Explicit Textured 3D meshes with complex topology, rich geometric details, and high-fidelity textures.
GET3D is able to generate high-quality 3D textured meshes of diverse categories, including cars, chairs, animals, motorbikes, human characters, and buildings.
arXiv Detail & Related papers (2022-09-22T17:16:19Z)
- Voxel-based 3D Detection and Reconstruction of Multiple Objects from a Single Image [22.037472446683765]
We learn a regular grid of 3D voxel features from the input image which is aligned with 3D scene space via a 3D feature lifting operator.
Based on the 3D voxel features, our novel CenterNet-3D detection head formulates the 3D detection as keypoint detection in the 3D space.
We devise an efficient coarse-to-fine reconstruction module, including coarse-level voxelization and a novel local PCA-SDF shape representation.
arXiv Detail & Related papers (2021-11-04T18:30:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.