Extreme Rotation Estimation using Dense Correlation Volumes
- URL: http://arxiv.org/abs/2104.13530v1
- Date: Wed, 28 Apr 2021 02:00:04 GMT
- Title: Extreme Rotation Estimation using Dense Correlation Volumes
- Authors: Ruojin Cai, Bharath Hariharan, Noah Snavely and Hadar Averbuch-Elor
- Abstract summary: We present a technique for estimating the relative 3D rotation of an RGB image pair in an extreme setting.
We observe that, even when images do not overlap, there may be rich hidden cues as to their geometric relationship.
We propose a network design that can automatically learn such implicit cues by comparing all pairs of points between the two input images.
- Score: 73.35119461422153
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a technique for estimating the relative 3D rotation of an RGB
image pair in an extreme setting, where the images have little or no overlap.
We observe that, even when images do not overlap, there may be rich hidden cues
as to their geometric relationship, such as light source directions, vanishing
points, and symmetries present in the scene. We propose a network design that
can automatically learn such implicit cues by comparing all pairs of points
between the two input images. Our method therefore constructs dense feature
correlation volumes and processes these to predict relative 3D rotations. Our
predictions are formed over a fine-grained discretization of rotations,
bypassing difficulties associated with regressing 3D rotations. We demonstrate
our approach on a large variety of extreme RGB image pairs, including indoor
and outdoor images captured under different lighting conditions and geographic
locations. Our evaluation shows that our model can successfully estimate
relative rotations among non-overlapping images without compromising
performance over overlapping image pairs.
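As a rough illustration of the mechanism described in the abstract (all-pairs feature correlation followed by classification over discretized rotations), here is a minimal PyTorch sketch. The feature shapes, the linear prediction head, and the 10-degree Euler-angle binning are illustrative assumptions, not the paper's actual architecture.

```python
import torch

def correlation_volume(feat1, feat2):
    """All-pairs correlation between two feature maps.

    feat1, feat2: (C, H, W) features from any backbone (an assumption here).
    Returns an (H, W, H, W) volume whose entry [i, j, k, l] is the
    similarity between location (i, j) in image 1 and (k, l) in image 2.
    """
    C, H, W = feat1.shape
    f1 = feat1.reshape(C, H * W)
    f2 = feat2.reshape(C, H * W)
    corr = f1.t() @ f2 / C ** 0.5        # scaled dot products, (HW, HW)
    return corr.reshape(H, W, H, W)

# Rotation is predicted by classification over a fine-grained discretization
# (here: three Euler angles in 10-degree bins, an assumed binning), which
# sidesteps direct regression of 3D rotations.
n_bins = 36
feat1, feat2 = torch.randn(2, 64, 16, 16)
vol = correlation_volume(feat1, feat2)

head = torch.nn.Linear(vol.numel(), 3 * n_bins)        # placeholder head
logits = head(vol.flatten()).reshape(3, n_bins)
pred_euler_deg = logits.argmax(dim=1) * (360.0 / n_bins)
```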
Related papers
- PF3plat: Pose-Free Feed-Forward 3D Gaussian Splatting [54.7468067660037]
PF3plat sets a new state-of-the-art across all benchmarks, supported by comprehensive ablation studies validating our design choices.
Our framework capitalizes on the speed, scalability, and high-quality 3D reconstruction and view synthesis capabilities of 3DGS.
arXiv Detail & Related papers (2024-10-29T15:28:15Z)
- Occ$^2$Net: Robust Image Matching Based on 3D Occupancy Estimation for Occluded Regions [14.217367037250296]
Occ$^2$Net is an image matching method that models occlusion relations using 3D occupancy and infers matching points in occluded regions.
We evaluate our method on both real-world and simulated datasets and demonstrate its superior performance over state-of-the-art methods on several metrics.
arXiv Detail & Related papers (2023-08-14T13:09:41Z)
- RelPose++: Recovering 6D Poses from Sparse-view Observations [66.6922660401558]
We address the task of estimating 6D camera poses from sparse-view image sets (2-8 images).
We build on the recent RelPose framework, which learns a network that infers distributions over the relative rotations between image pairs.
Our final system results in large improvements in 6D pose prediction over prior art on both seen and unseen object categories.
arXiv Detail & Related papers (2023-05-08T17:59:58Z)
- Estimating Extreme 3D Image Rotation with Transformer Cross-Attention [13.82735766201496]
We propose a cross-attention-based approach that uses a Transformer encoder to compute cross-attention between the CNN activation maps of the two input images.
It is experimentally shown to outperform contemporary state-of-the-art schemes when applied to commonly used image rotation datasets and benchmarks.
arXiv Detail & Related papers (2023-03-05T09:07:26Z)
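A minimal sketch of the cross-attention mechanism described in the entry above, assuming flattened CNN activation maps as token sequences; the dimensions and the single attention layer are illustrative, not the paper's exact design.

```python
import torch
import torch.nn as nn

# Flatten each image's CNN activation map into a token sequence and let
# image 1's tokens attend to image 2's (shapes are assumptions).
d_model, H, W = 128, 16, 16
feat1 = torch.randn(1, H * W, d_model)   # (batch, tokens, channels), image 1
feat2 = torch.randn(1, H * W, d_model)   # image 2

attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
# Queries from image 1, keys/values from image 2: every location in one
# image is compared against every location in the other.
fused, weights = attn(feat1, feat2, feat2)
print(fused.shape)    # torch.Size([1, 256, 128])
print(weights.shape)  # torch.Size([1, 256, 256])
```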
- $PC^2$: Projection-Conditioned Point Cloud Diffusion for Single-Image 3D Reconstruction [97.06927852165464]
Reconstructing the 3D shape of an object from a single RGB image is a long-standing and highly challenging problem in computer vision.
We propose a novel method for single-image 3D reconstruction which generates a sparse point cloud via a conditional denoising diffusion process.
arXiv Detail & Related papers (2023-02-21T13:37:07Z)
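As a loose sketch of a conditional denoising diffusion step on point coordinates, in the spirit of the $PC^2$ entry above; the DDIM-style update, the dummy schedule, and the `image_cond` placeholder (standing in for the paper's projection-based conditioning) are all assumptions.

```python
import torch

def denoise_step(points_t, t, noise_model, image_cond, alphas_cumprod):
    """One deterministic (DDIM-style, sigma = 0) reverse-diffusion step
    on an (N, 3) point cloud, conditioned on image features."""
    a_t = alphas_cumprod[t]
    a_prev = alphas_cumprod[t - 1] if t > 0 else torch.tensor(1.0)
    eps = noise_model(points_t, t, image_cond)             # predicted noise
    x0 = (points_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()  # estimated clean points
    return a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * eps

# Dummy schedule and noise model, just to exercise the step.
alphas_cumprod = torch.linspace(0.999, 0.01, 50)
dummy_model = lambda x, t, cond: torch.zeros_like(x)
points = torch.randn(1024, 3)                 # start from pure noise
for t in reversed(range(50)):
    points = denoise_step(points, t, dummy_model, None, alphas_cumprod)
```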
- Multi-View Reconstruction using Signed Ray Distance Functions (SRDF) [22.75986869918975]
We investigate a new computational approach that builds on a novel volumetric shape representation.
The shape energy associated with this representation evaluates 3D geometry given color images and does not require appearance prediction.
In practice we propose an implicit shape representation, the SRDF, based on signed distances which we parameterize by depths along camera rays.
arXiv Detail & Related papers (2022-08-31T19:32:17Z)
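A tiny sketch of the depth-parameterized signed ray distance for samples along a single camera ray, as described in the entry above; the sign convention (positive in front of the surface) is an assumed one.

```python
import numpy as np

def srdf_along_ray(sample_depths, surface_depth):
    """Signed ray distance at each sample depth along one camera ray,
    parameterized by the depth at which the ray meets the surface:
    positive in front of the surface, zero on it, negative behind it."""
    return surface_depth - sample_depths

sample_depths = np.linspace(0.1, 5.0, 64)  # depths of 64 samples on the ray
surface_depth = 2.3                        # hypothetical surface hit depth
values = srdf_along_ray(sample_depths, surface_depth)
```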
- Differentiable Rendering with Perturbed Optimizers [85.66675707599782]
Reasoning about 3D scenes from their 2D image projections is one of the core problems in computer vision.
Our work highlights the link between some well-known differentiable formulations and randomly smoothed renderings.
We apply our method to 3D scene reconstruction and demonstrate its advantages on the tasks of 6D pose estimation and 3D mesh reconstruction.
arXiv Detail & Related papers (2021-10-18T08:56:23Z)
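To make the smoothing idea in the entry above concrete: a generic Monte Carlo (score-function) gradient estimator for a randomly smoothed function. The hard step function stands in for a non-differentiable renderer and is not the paper's formulation.

```python
import torch

def smoothed_grad(f, x, sigma=0.1, n_samples=1024):
    """Estimate the gradient of E[f(x + sigma * eps)], eps ~ N(0, I),
    via the score-function identity grad = E[f(x + sigma * eps) * eps] / sigma.
    Useful when f itself has zero or undefined gradients."""
    eps = torch.randn(n_samples, *x.shape)
    values = torch.stack([f(x + sigma * e) for e in eps])       # (n_samples,)
    return (values.view(-1, *[1] * x.dim()) * eps).mean(0) / sigma

# A hard step function has zero gradient almost everywhere, yet its
# smoothed version yields a usable descent signal.
f = lambda x: (x > 0).float().sum()
x = torch.tensor([-0.05, 0.2, -0.3])
print(smoothed_grad(f, x))
```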
- SymmetryNet: Learning to Predict Reflectional and Rotational Symmetries of 3D Shapes from Single-View RGB-D Images [26.38270361331076]
We propose an end-to-end deep neural network that predicts both reflectional and rotational symmetries of 3D objects.
We also contribute a benchmark of 3D symmetry detection based on single-view RGB-D images.
arXiv Detail & Related papers (2020-08-02T14:10:09Z)
- Geometric Correspondence Fields: Learned Differentiable Rendering for 3D Pose Refinement in the Wild [96.09941542587865]
We present a novel 3D pose refinement approach based on differentiable rendering for objects of arbitrary categories in the wild.
In this way, we precisely align 3D models to objects in RGB images, which results in significantly improved 3D pose estimates.
We evaluate our approach on the challenging Pix3D dataset and achieve up to 55% relative improvement over state-of-the-art refinement methods across multiple metrics.
arXiv Detail & Related papers (2020-07-17T12:34:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.