Relative Pose for Nonrigid Multi-Perspective Cameras: The Static Case
- URL: http://arxiv.org/abs/2401.09140v1
- Date: Wed, 17 Jan 2024 11:28:28 GMT
- Title: Relative Pose for Nonrigid Multi-Perspective Cameras: The Static Case
- Authors: Min Li, Jiaqi Yang and Laurent Kneip
- Abstract summary: Multi-perspective cameras with potentially non-overlapping fields of view have become an important exteroceptive sensing modality.
We present a concise analysis of the observability of all variables under noise, outliers, and varying rig rigidity for two different algorithms.
- Score: 23.48118135478092
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-perspective cameras with potentially non-overlapping fields of view
have become an important exteroceptive sensing modality in a number of
applications such as intelligent vehicles, drones, and mixed reality headsets.
In this work, we challenge one of the basic assumptions made in these
scenarios, which is that the multi-camera rig is rigid. More specifically, we
are considering the problem of estimating the relative pose between two views
of a static non-rigid rig in different spatial orientations while taking into
account the effect of gravity on the system. The deformable physical connections between
each camera and the body center are approximated by a simple cantilever model,
and inserted into the generalized epipolar constraint. Our results lead us to
the important insight that the latent parameters of the deformation model,
meaning the gravity vector in both views, become observable. We present a
concise analysis of the observability of all variables under noise, outliers,
and varying rig rigidity for two different algorithms. The first one is a
vision-only alternative, while the second one makes use of additional gravity
measurements. To conclude, we demonstrate the ability to sense gravity in a
real-world example, and discuss practical implications.
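For context, the generalized epipolar constraint that the abstract refers to can be written in its standard form (this sketch is not taken from the paper itself). With each ray expressed as a Plücker line $\boldsymbol{\ell}_i = (\mathbf{f}_i^\top, (\mathbf{p}_i \times \mathbf{f}_i)^\top)^\top$, where $\mathbf{f}_i$ is the ray direction and $\mathbf{p}_i$ the camera's offset from the body center, two corresponding rays in views related by rotation $\mathbf{R}$ and translation $\mathbf{t}$ satisfy

```latex
% Generalized epipolar constraint (standard Plücker-line form):
\boldsymbol{\ell}_2^\top
\begin{bmatrix}
  \lfloor \mathbf{t} \rfloor_\times \mathbf{R} & \mathbf{R} \\
  \mathbf{R} & \mathbf{0}
\end{bmatrix}
\boldsymbol{\ell}_1 = 0
```

In the paper's setting the offsets $\mathbf{p}_i$ are no longer constants: the cantilever model makes them functions of the gravity vector in each view, which is how those latent deformation parameters enter the constraint and, per the abstract, become observable.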
Related papers
- GVDepth: Zero-Shot Monocular Depth Estimation for Ground Vehicles based on Probabilistic Cue Fusion [7.588468985212172]
Generalizing metric monocular depth estimation presents a significant challenge due to its ill-posed nature.
We propose a novel canonical representation that maintains consistency across varied camera setups.
We also propose a novel architecture that adaptively and probabilistically fuses depths estimated via object size and vertical image position cues.
arXiv Detail & Related papers (2024-12-08T22:04:34Z)
- Extreme Two-View Geometry From Object Poses with Diffusion Models [21.16779160086591]
We harness the power of object priors to accurately determine two-view geometry in the face of extreme viewpoint changes.
In experiments, our method has demonstrated extraordinary robustness and resilience to large viewpoint changes.
arXiv Detail & Related papers (2024-02-05T08:18:47Z)
- Strongly incoherent gravity [0.0]
A non-entangling version of an arbitrary two-body potential $V(r)$ arises from local measurements and feedback forces.
This produces a non-relativistic model of gravity with fundamental loss of unitarity.
As an alternative to testing entanglement properties, we show that the entire remaining parameter space can be tested by looking for loss of quantum coherence in small systems.
arXiv Detail & Related papers (2023-01-20T01:09:12Z)
- Learning Physical Dynamics with Subequivariant Graph Neural Networks [99.41677381754678]
Graph Neural Networks (GNNs) have become a prevailing tool for learning physical dynamics.
Physical laws abide by symmetry, which is a vital inductive bias accounting for model generalization.
Our model achieves on average over 3% enhancement in contact prediction accuracy across 8 scenarios on Physion and 2X lower rollout MSE on RigidFall.
arXiv Detail & Related papers (2022-10-13T10:00:30Z)
- Monocular 3D Object Detection with Depth from Motion [74.29588921594853]
We take advantage of camera ego-motion for accurate object depth estimation and detection.
Our framework, named Depth from Motion (DfM), then uses the established geometry to lift 2D image features to the 3D space and detects 3D objects thereon.
Our framework outperforms state-of-the-art methods by a large margin on the KITTI benchmark.
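DfM's lifting step rests on standard two-view geometry: with the camera ego-motion known, matched pixels across frames can be triangulated into 3D points. A minimal sketch of that idea (linear DLT triangulation, not the authors' implementation):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.

    P1, P2 : 3x4 projection matrices K @ [R | t]
    x1, x2 : pixel coordinates (u, v) of the match in each view
    Returns the 3D point in world coordinates.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous point X; stack them and solve A X = 0 via SVD.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Toy example: identity intrinsics, second camera translated 1 unit along x
# (the ego-motion that would come from odometry in a DfM-style pipeline).
K = np.eye(3)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]

X_est = triangulate(P1, P2, x1, x2)
```

With noise-free projections the linear solve recovers the point exactly; in practice the depth estimate degrades as the baseline supplied by the ego-motion shrinks.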
arXiv Detail & Related papers (2022-07-26T15:48:46Z)
- Drivable Volumetric Avatars using Texel-Aligned Features [52.89305658071045]
Photorealistic telepresence requires both high-fidelity body modeling and faithful driving to enable dynamically synthesized appearance.
We propose an end-to-end framework that addresses two core challenges in modeling and driving full-body avatars of real people.
arXiv Detail & Related papers (2022-07-20T09:28:16Z)
- Suspected Object Matters: Rethinking Model's Prediction for One-stage Visual Grounding [93.82542533426766]
We propose a Suspected Object Transformation mechanism (SOT) to encourage the target object selection among the suspected ones.
SOT can be seamlessly integrated into existing CNN and Transformer-based one-stage visual grounders.
Extensive experiments demonstrate the effectiveness of our proposed method.
arXiv Detail & Related papers (2022-03-10T06:41:07Z)
- Efficient Globally-Optimal Correspondence-Less Visual Odometry for Planar Ground Vehicles [23.910735789004075]
We introduce the first globally-optimal, correspondence-less solution to plane-based Ackermann motion estimation.
We prove its property of global optimality and analyse the impact of assuming a locally constant centre of rotation.
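The "locally constant centre of rotation" assumption analysed in that paper means each short motion segment is treated as an arc about a fixed instantaneous centre of rotation (ICR). A minimal sketch of that planar Ackermann motion model (an illustration of the assumption, not the paper's solver):

```python
import math

def ackermann_step(x, y, theta, v, omega, dt):
    """Integrate planar Ackermann motion over one step of length dt,
    assuming a locally constant centre of rotation (constant forward
    speed v and yaw rate omega during the step)."""
    if abs(omega) < 1e-9:
        # Degenerate case: ICR at infinity, straight-line motion.
        return x + v * dt * math.cos(theta), y + v * dt * math.sin(theta), theta
    r = v / omega                  # turning radius to the ICR
    dtheta = omega * dt            # heading change over the arc
    x_new = x + r * (math.sin(theta + dtheta) - math.sin(theta))
    y_new = y - r * (math.cos(theta + dtheta) - math.cos(theta))
    return x_new, y_new, theta + dtheta

# Quarter circle of radius 1: start at the origin heading along +x.
pose = ackermann_step(0.0, 0.0, 0.0, v=1.0, omega=1.0, dt=math.pi / 2)
```

Under this model the vehicle's relative pose over a segment is fully described by two parameters (e.g. arc length and heading change), which is what makes the correspondence-less, globally optimal search tractable.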
arXiv Detail & Related papers (2022-03-01T08:49:21Z)
- Attentive and Contrastive Learning for Joint Depth and Motion Field Estimation [76.58256020932312]
Estimating the motion of the camera together with the 3D structure of the scene from a monocular vision system is a complex task.
We present a self-supervised learning framework for 3D object motion field estimation from monocular videos.
arXiv Detail & Related papers (2021-10-13T16:45:01Z)
- Wide-Depth-Range 6D Object Pose Estimation in Space [124.94794113264194]
6D pose estimation in space poses unique challenges that are not commonly encountered in the terrestrial setting.
One of the most striking differences is the lack of atmospheric scattering, allowing objects to be visible from a great distance.
We propose a single-stage hierarchical end-to-end trainable network that is more robust to scale variations.
arXiv Detail & Related papers (2021-04-01T08:39:26Z)
- Robust On-Manifold Optimization for Uncooperative Space Relative Navigation with a Single Camera [4.129225533930966]
An innovative model-based approach is demonstrated to estimate the six-dimensional pose of a target object relative to the chaser spacecraft using solely a monocular setup.
It is validated on realistic synthetic and laboratory datasets of a rendezvous trajectory with the complex spacecraft Envisat.
arXiv Detail & Related papers (2020-05-14T16:23:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.