NAVI: Category-Agnostic Image Collections with High-Quality 3D Shape and
Pose Annotations
- URL: http://arxiv.org/abs/2306.09109v2
- Date: Fri, 13 Oct 2023 16:12:32 GMT
- Title: NAVI: Category-Agnostic Image Collections with High-Quality 3D Shape and
Pose Annotations
- Authors: Varun Jampani, Kevis-Kokitsi Maninis, Andreas Engelhardt, Arjun
Karpur, Karen Truong, Kyle Sargent, Stefan Popov, André Araujo, Ricardo
Martin-Brualla, Kaushal Patel, Daniel Vlasic, Vittorio Ferrari, Ameesh
Makadia, Ce Liu, Yuanzhen Li, Howard Zhou
- Abstract summary: NAVI is a new dataset of category-agnostic image collections with high-quality 3D scans and per-image 2D-3D alignments.
These 2D-3D alignments allow us to extract accurate derivative annotations such as dense pixel correspondences, depth and segmentation maps.
- Score: 64.95582364215548
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in neural reconstruction enable high-quality 3D object
reconstruction from casually captured image collections. Current techniques
mostly analyze their progress on relatively simple image collections where
Structure-from-Motion (SfM) techniques can provide ground-truth (GT) camera
poses. We note that SfM techniques tend to fail on in-the-wild image
collections such as image search results with varying backgrounds and
illuminations. To enable systematic research progress on 3D reconstruction from
casual image captures, we propose NAVI: a new dataset of category-agnostic
image collections of objects with high-quality 3D scans along with per-image
2D-3D alignments providing near-perfect GT camera parameters. These 2D-3D
alignments allow us to extract accurate derivative annotations such as dense
pixel correspondences, depth and segmentation maps. We demonstrate the use of
NAVI image collections on different problem settings and show that NAVI enables
more thorough evaluations that were not possible with existing datasets. We
believe NAVI is beneficial for systematic research progress on 3D
reconstruction and correspondence estimation. Project page:
https://navidataset.github.io
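To make the claim about derivative annotations concrete, here is a minimal Python/NumPy sketch of the underlying geometry: with a near-perfect GT pinhole camera and an aligned 3D scan, per-vertex depth and cross-image correspondences follow directly from projection. The names (`verts`, `K`, `R`, `t`) are illustrative and not taken from the NAVI release, and a real pipeline would rasterize the full mesh with occlusion handling rather than projecting vertices.

```python
# Minimal sketch: derivative annotations from 2D-3D alignments.
# Assumes a pinhole camera; a real pipeline would use a mesh
# rasterizer with occlusion handling instead of vertex projection.
import numpy as np

def project(verts, K, R, t):
    """Project Nx3 world-space mesh vertices into an image.

    Returns Nx2 pixel coordinates and per-vertex depth (camera-space z).
    """
    cam = verts @ R.T + t            # world -> camera coordinates
    depth = cam[:, 2]                # z-depth in front of the camera
    pix = cam @ K.T                  # apply pinhole intrinsics
    pix = pix[:, :2] / pix[:, 2:3]   # perspective divide
    return pix, depth

# The same vertex projected with two GT cameras is a 2D-2D correspondence:
# pix_a, depth_a = project(verts, K_a, R_a, t_a)
# pix_b, depth_b = project(verts, K_b, R_b, t_b)
# matches = np.concatenate([pix_a, pix_b], axis=1)  # Nx4 (u_a, v_a, u_b, v_b)
```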
Related papers
- Implicit-Zoo: A Large-Scale Dataset of Neural Implicit Functions for 2D Images and 3D Scenes [65.22070581594426]
"Implicit-Zoo" is a large-scale dataset requiring thousands of GPU training days to facilitate research and development in this field.
We showcase two immediate benefits as it enables to: (1) learn token locations for transformer models; (2) directly regress 3D cameras poses of 2D images with respect to NeRF models.
This in turn leads to an improved performance in all three task of image classification, semantic segmentation, and 3D pose regression, thereby unlocking new avenues for research.
arXiv Detail & Related papers (2024-06-25T10:20:44Z)
- TP3M: Transformer-based Pseudo 3D Image Matching with Reference Image [0.9831489366502301]
We propose a Transformer-based pseudo 3D image matching method.
It upgrades the 2D features extracted from the source image to 3D features with the help of a reference image and matches them to the 2D features extracted from the destination image.
Experimental results on multiple datasets show that the proposed method achieves state-of-the-art performance on homography estimation, pose estimation, and visual localization.
arXiv Detail & Related papers (2024-05-14T08:56:09Z)
- DUSt3R: Geometric 3D Vision Made Easy [8.471330244002564]
We introduce DUSt3R, a novel paradigm for Dense and Unconstrained Stereo 3D Reconstruction of arbitrary image collections.
We show that this formulation smoothly unifies the monocular and binocular reconstruction cases.
Our formulation directly provides a 3D model of the scene as well as depth information; interestingly, we can also seamlessly recover from it pixel matches and relative and absolute camera poses (a generic two-view sketch follows this entry).
arXiv Detail & Related papers (2023-12-21T18:52:14Z)
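As a concrete version of the two-view sketch referenced above: given pixel matches like those DUSt3R recovers, classical epipolar geometry yields the relative camera pose. This is the standard OpenCV route under an assumed shared intrinsic matrix `K`, not DUSt3R's own code; `pts_a` and `pts_b` are hypothetical Nx2 arrays of matched pixels.

```python
# Generic two-view pose recovery from pixel matches (not DUSt3R code).
import cv2
import numpy as np

def relative_pose_from_matches(pts_a, pts_b, K):
    """Estimate relative rotation R and unit-scale translation t from
    Nx2 arrays of matched pixel coordinates in two views."""
    E, inlier_mask = cv2.findEssentialMat(
        pts_a, pts_b, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)
    # Decompose the essential matrix, keeping the pose with positive depth.
    _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, K, mask=inlier_mask)
    return R, t  # translation is recovered only up to scale
```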
- Visual Geometry Grounded Deep Structure From Motion [20.203320509695306]
We propose a new deep pipeline VGGSfM, where each component is fully differentiable and can be trained in an end-to-end manner.
First, we build on recent advances in deep 2D point tracking to extract reliable pixel-accurate tracks, which eliminates the need for chaining pairwise matches.
We attain state-of-the-art performance on three popular datasets, CO3D, IMC Phototourism, and ETH3D.
arXiv Detail & Related papers (2023-12-07T18:59:52Z)
- Generating Visual Spatial Description via Holistic 3D Scene Understanding [88.99773815159345]
Visual spatial description (VSD) aims to generate texts that describe the spatial relations of the given objects within images.
With an external 3D scene extractor, we obtain the 3D objects and scene features for input images.
We construct a target object-centered 3D spatial scene graph (Go3D-S2G) that models the spatial semantics of target objects within the holistic 3D scene.
arXiv Detail & Related papers (2023-05-19T15:53:56Z)
- CheckerPose: Progressive Dense Keypoint Localization for Object Pose Estimation with Graph Neural Network [66.24726878647543]
Estimating the 6-DoF pose of a rigid object from a single RGB image is a crucial yet challenging task.
Recent studies have shown the great potential of dense correspondence-based solutions.
We propose a novel pose estimation algorithm named CheckerPose, which improves on three main aspects; a generic correspondence-to-pose (PnP) sketch follows this entry.
arXiv Detail & Related papers (2023-03-29T17:30:53Z)
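The correspondence-to-pose sketch referenced in the CheckerPose entry above: once dense 2D-3D matches are available, the 6-DoF object pose is conventionally solved with PnP inside a RANSAC loop. This is the generic OpenCV recipe, not CheckerPose's algorithm; `obj_pts`, `img_pts`, and `K` are assumed inputs.

```python
# Generic 6-DoF pose from dense 2D-3D correspondences via PnP + RANSAC.
import cv2
import numpy as np

def pose_from_correspondences(obj_pts, img_pts, K):
    """Solve object rotation R (3x3) and translation tvec (3x1)
    from Nx3 model points matched to Nx2 image pixels."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        obj_pts.astype(np.float64), img_pts.astype(np.float64), K,
        None, flags=cv2.SOLVEPNP_EPNP, reprojectionError=3.0)
    if not ok:
        raise RuntimeError("PnP found no consistent pose")
    R, _ = cv2.Rodrigues(rvec)  # axis-angle -> rotation matrix
    return R, tvec
```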
- Graph-DETR3D: Rethinking Overlapping Regions for Multi-View 3D Object Detection [17.526914782562528]
We propose Graph-DETR3D to automatically aggregate multi-view imagery information through graph structure learning (GSL).
Our best model achieves 49.5 NDS on the nuScenes test leaderboard, setting a new state of the art among published image-view 3D object detectors.
arXiv Detail & Related papers (2022-04-25T12:10:34Z)
- Improved Modeling of 3D Shapes with Multi-view Depth Maps [48.8309897766904]
We present a general-purpose framework for modeling 3D shapes using CNNs.
Using just a single depth image of the object, we can output a dense multi-view depth map representation of 3D objects.
arXiv Detail & Related papers (2020-09-07T17:58:27Z)
- Lightweight Multi-View 3D Pose Estimation through Camera-Disentangled Representation [57.11299763566534]
We present a solution to recover 3D pose from multi-view images captured with spatially calibrated cameras.
We exploit 3D geometry to fuse input images into a unified latent representation of pose, which is disentangled from camera view-points.
Our architecture then conditions the learned representation on camera projection operators to produce accurate per-view 2D detections; a minimal triangulation sketch follows this entry.
arXiv Detail & Related papers (2020-04-05T12:52:29Z)
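The triangulation sketch referenced in the entry above: with spatially calibrated cameras, per-view 2D detections can be lifted to a 3D point with the classical direct linear transform (DLT). This illustrates the geometric step such multi-view methods build on, not the paper's learned architecture; `P_list` and `uv_list` are assumed inputs.

```python
# Classical DLT triangulation of one 3D point from calibrated views.
import numpy as np

def triangulate(P_list, uv_list):
    """Triangulate one 3D point from >=2 views.

    P_list:  list of 3x4 camera projection matrices K[R|t]
    uv_list: list of (u, v) pixel detections, one per view
    """
    rows = []
    for P, (u, v) in zip(P_list, uv_list):
        rows.append(u * P[2] - P[0])  # each view contributes two
        rows.append(v * P[2] - P[1])  # linear constraints on X
    A = np.stack(rows)
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                  # null vector = homogeneous solution
    return X[:3] / X[3]         # dehomogenize to 3D coordinates
```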
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.