SparsePose: Sparse-View Camera Pose Regression and Refinement
- URL: http://arxiv.org/abs/2211.16991v1
- Date: Tue, 29 Nov 2022 05:16:07 GMT
- Title: SparsePose: Sparse-View Camera Pose Regression and Refinement
- Authors: Samarth Sinha, Jason Y. Zhang, Andrea Tagliasacchi, Igor
Gilitschenski, David B. Lindell
- Abstract summary: We propose SparsePose for recovering accurate camera poses given a sparse set of wide-baseline images (fewer than 10).
The method learns to regress initial camera poses and then iteratively refine them after training on a large-scale dataset of objects.
We also demonstrate our pipeline for high-fidelity 3D reconstruction using only 5-9 images of an object.
- Score: 32.74890928398753
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Camera pose estimation is a key step in standard 3D reconstruction pipelines
that operate on a dense set of images of a single object or scene. However,
methods for pose estimation often fail when only a few images are available
because they rely on the ability to robustly identify and match visual features
between image pairs. While these methods can work robustly with dense camera
views, capturing a large set of images can be time-consuming or impractical. We
propose SparsePose for recovering accurate camera poses given a sparse set of
wide-baseline images (fewer than 10). The method learns to regress initial
camera poses and then iteratively refine them after training on a large-scale
dataset of objects (Co3D: Common Objects in 3D). SparsePose significantly
outperforms conventional and learning-based baselines in recovering accurate
camera rotations and translations. We also demonstrate our pipeline for
high-fidelity 3D reconstruction using only 5-9 images of an object.
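The two-stage idea in the abstract (regress an initial pose, then iteratively refine it with a learned update) can be sketched as follows. This is a minimal illustration, not the paper's implementation: `regress_initial_pose` and `predict_refinement` are hypothetical stand-ins for the learned networks, and the pose is kept as an axis-angle rotation vector plus a translation for simplicity.

```python
import numpy as np

def regress_initial_pose(features):
    # Hypothetical stand-in for the learned initial pose regressor.
    # A real model would map image features to this coarse estimate.
    return np.array([0.4, -0.3, 0.2]), np.array([1.0, 0.5, -0.2])

def predict_refinement(rotvec, t, features):
    # Stand-in for the learned refiner: here it simply shrinks the
    # residual; the real network predicts the update from the images.
    return -0.5 * rotvec, -0.5 * t

def estimate_pose(features, n_iters=8):
    """Regress a coarse pose, then apply iterative learned updates."""
    rotvec, t = regress_initial_pose(features)
    for _ in range(n_iters):
        d_rot, d_t = predict_refinement(rotvec, t, features)
        rotvec, t = rotvec + d_rot, t + d_t
    return rotvec, t

rotvec, t = estimate_pose(features=None)
# With this toy refiner, each iteration halves the pose error, so the
# final rotation and translation residuals are small.
```

The key design point mirrored here is that refinement predicts *corrections* to the current pose rather than regressing the pose from scratch at every step.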
Related papers
- FLARE: Feed-forward Geometry, Appearance and Camera Estimation from Uncalibrated Sparse Views [93.6881532277553]
We present FLARE, a feed-forward model designed to infer high-quality camera poses and 3D geometry from uncalibrated sparse-view images.
Our solution features a cascaded learning paradigm with camera pose serving as the critical bridge, recognizing its essential role in mapping 3D structures onto 2D image planes.
arXiv Detail & Related papers (2025-02-17T18:54:05Z)
- ZeroGS: Training 3D Gaussian Splatting from Unposed Images [62.34149221132978]
We propose ZeroGS to train 3DGS from hundreds of unposed and unordered images.
Our method leverages a pretrained foundation model as the neural scene representation.
Our method recovers more accurate camera poses than state-of-the-art pose-free NeRF/3DGS methods.
arXiv Detail & Related papers (2024-11-24T11:20:48Z)
- SpaRP: Fast 3D Object Reconstruction and Pose Estimation from Sparse Views [36.02533658048349]
We propose a novel method, SpaRP, to reconstruct a 3D textured mesh and estimate the relative camera poses for sparse-view images.
SpaRP distills knowledge from 2D diffusion models and finetunes them to implicitly deduce the 3D spatial relationships between the sparse views.
It requires only about 20 seconds to produce a textured mesh and camera poses for the input views.
arXiv Detail & Related papers (2024-08-19T17:53:10Z)
- FrozenRecon: Pose-free 3D Scene Reconstruction with Frozen Depth Models [67.96827539201071]
We propose a novel test-time optimization approach for 3D scene reconstruction.
Our method achieves state-of-the-art cross-dataset reconstruction on five zero-shot testing datasets.
arXiv Detail & Related papers (2023-08-10T17:55:02Z)
- Few-View Object Reconstruction with Unknown Categories and Camera Poses [80.0820650171476]
This work explores reconstructing general real-world objects from a few images without known camera poses or object categories.
The crux of our work is solving two fundamental 3D vision problems -- shape reconstruction and pose estimation.
Our method FORGE predicts 3D features from each view and leverages them in conjunction with the input images to establish cross-view correspondence.
arXiv Detail & Related papers (2022-12-08T18:59:02Z)
- Fast and Lightweight Scene Regressor for Camera Relocalization [1.6708069984516967]
Estimating the camera pose directly with respect to pre-built 3D models can be prohibitively expensive for several applications.
This study proposes a simple scene regression method that requires only a multi-layer perceptron network for mapping scene coordinates.
The proposed approach uses sparse descriptors to regress the scene coordinates, instead of a dense RGB image.
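The scene-regression idea above (a plain MLP mapping sparse keypoint descriptors to 3D scene coordinates) can be sketched as below. The network size, descriptor dimension, and weights are illustrative assumptions, not the paper's architecture; the weights here are untrained random values so only the data flow is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 128-D sparse descriptors mapped to 3D scene
# coordinates by a small two-layer MLP (random, untrained weights).
D_IN, D_HID = 128, 256
W1 = rng.standard_normal((D_IN, D_HID)) * 0.05
b1 = np.zeros(D_HID)
W2 = rng.standard_normal((D_HID, 3)) * 0.05
b2 = np.zeros(3)

def regress_scene_coords(descriptors):
    """Map N sparse keypoint descriptors to N 3D scene coordinates."""
    h = np.maximum(descriptors @ W1 + b1, 0.0)  # ReLU hidden layer
    return h @ W2 + b2                          # (N, 3) scene points

descriptors = rng.standard_normal((500, D_IN))  # e.g. 500 keypoints
coords = regress_scene_coords(descriptors)
print(coords.shape)  # (500, 3)
```

In a relocalization pipeline, the resulting 2D-3D correspondences would then feed a standard PnP-with-RANSAC solver to recover the 6-DoF camera pose.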
arXiv Detail & Related papers (2022-12-04T14:41:20Z)
- FvOR: Robust Joint Shape and Pose Optimization for Few-view Object Reconstruction [37.81077373162092]
Reconstructing an accurate 3D object model from a few image observations remains a challenging problem in computer vision.
We present FvOR, a learning-based object reconstruction method that predicts accurate 3D models given a few images with noisy input poses.
arXiv Detail & Related papers (2022-05-16T15:39:27Z)
- Multi-View Multi-Person 3D Pose Estimation with Plane Sweep Stereo [71.59494156155309]
Existing approaches for multi-view 3D pose estimation explicitly establish cross-view correspondences to group 2D pose detections from multiple camera views.
We present our multi-view 3D pose estimation approach based on plane sweep stereo to jointly address the cross-view fusion and 3D pose reconstruction in a single shot.
arXiv Detail & Related papers (2021-04-06T03:49:35Z)
- Back to the Feature: Learning Robust Camera Localization from Pixels to Pose [114.89389528198738]
We introduce PixLoc, a scene-agnostic neural network that estimates an accurate 6-DoF pose from an image and a 3D model.
The system can localize in large environments given coarse pose priors, and can also improve the accuracy of sparse feature matching.
arXiv Detail & Related papers (2021-03-16T17:40:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.