Virtual Inverse Perspective Mapping for Simultaneous Pose and Motion Estimation
- URL: http://arxiv.org/abs/2303.05192v1
- Date: Thu, 9 Mar 2023 11:45:00 GMT
- Title: Virtual Inverse Perspective Mapping for Simultaneous Pose and Motion Estimation
- Authors: Masahiro Hirano, Taku Senoo, Norimasa Kishi, Masatoshi Ishikawa
- Abstract summary: We propose an automatic method for pose and motion estimation against a ground surface for a ground-moving robot-mounted monocular camera.
The framework adopts a semi-dense approach that benefits from both a feature-based method and an image-registration-based method.
- Score: 5.199765487172328
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose an automatic method for pose and motion estimation against a
ground surface for a ground-moving robot-mounted monocular camera. The
framework adopts a semi-dense approach that benefits from both a feature-based
method and an image-registration-based method by setting multiple patches in
the image for displacement computation through a highly accurate
image-registration technique. To improve accuracy, we introduce virtual inverse
perspective mapping (IPM) in the refinement step to eliminate the perspective
effect on image registration. The pose and motion are jointly and robustly
estimated by a formulation of geometric bundle adjustment via virtual IPM.
Unlike conventional visual odometry methods, the proposed method is free from
cumulative error because it directly estimates pose and motion against the
ground. It exploits a camera configuration mounted on a ground-moving robot in
which the camera's vertical motion is negligible compared to its height within
the frame interval and the nearby ground surface is approximately flat.
We conducted experiments in which the relative mean error of the pitch and roll
angles was approximately 1.0 degrees and the absolute mean error of the travel
distance was 0.3 mm, even under camera shaking within a short period.
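As a rough illustration of the idea behind inverse perspective mapping, the sketch below constructs the plane-induced homography between a flat ground plane and the image under a simple pinhole model; warping a patch by its inverse removes the perspective effect before registration. The intrinsics, angles, and flat-ground setup here are illustrative assumptions, not the paper's virtual-IPM formulation.

```python
import numpy as np

def ipm_homography(K, pitch, height):
    """Homography mapping ground-plane points (X, Y) on Z = 0 to pixels.

    Illustrative pinhole model: the camera sits `height` above a flat
    ground plane and is pitched down by `pitch` radians from horizontal.
    """
    c, s = np.cos(pitch), np.sin(pitch)
    # World-to-camera rotation: camera x-axis = world x-axis, camera
    # z-axis (forward) tilted down toward the ground by `pitch`.
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0,  -s,  -c],
                  [0.0,   c,  -s]])
    C = np.array([0.0, 0.0, height])  # camera centre in world coordinates
    t = -R @ C
    # For the plane Z = 0, projection reduces to the homography K [r1 r2 t].
    return K @ np.column_stack((R[:, 0], R[:, 1], t))

# Example intrinsics (assumed): focal length 800 px, principal point (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
H = ipm_homography(K, pitch=np.deg2rad(20.0), height=1.5)
H_inv = np.linalg.inv(H)  # pixel -> ground: the IPM direction
```

Warping an image patch by `H_inv` yields a top-down view of the local ground, so inter-frame ground motion appears as a near-rigid patch displacement that image registration can estimate accurately.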
Related papers
- Motion-adaptive Separable Collaborative Filters for Blind Motion Deblurring [71.60457491155451]
Eliminating image blur produced by various kinds of motion has been a challenging problem.
We propose a novel real-world deblurring filtering model called the Motion-adaptive Separable Collaborative Filter.
Our method provides an effective solution for real-world motion blur removal and achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-04-19T19:44:24Z)
- VICAN: Very Efficient Calibration Algorithm for Large Camera Networks [49.17165360280794]
We introduce a novel methodology that extends Pose Graph Optimization techniques.
We consider the bipartite graph encompassing cameras, object poses evolving dynamically, and camera-object relative transformations at each time step.
Our framework retains compatibility with traditional PGO solvers, but its efficacy benefits from a custom-tailored optimization scheme.
arXiv Detail & Related papers (2024-03-25T17:47:03Z)
- Single-image camera calibration with model-free distortion correction [0.0]
This paper proposes a method for estimating the complete set of calibration parameters from a single image of a planar speckle pattern covering the entire sensor.
The correspondence between image points and physical points on the calibration target is obtained using Digital Image Correlation.
At the end of the procedure, a dense and uniform model-free distortion map is obtained over the entire image.
arXiv Detail & Related papers (2024-03-02T16:51:35Z)
- View Consistent Purification for Accurate Cross-View Localization [59.48131378244399]
This paper proposes a fine-grained self-localization method for outdoor robotics.
The proposed method addresses limitations in existing cross-view localization methods.
It is the first sparse visual-only method that enhances perception in dynamic environments.
arXiv Detail & Related papers (2023-08-16T02:51:52Z)
- MELON: NeRF with Unposed Images in SO(3) [35.093700416540436]
Using a neural network to regularize pose estimation, we show that our method can reconstruct a neural radiance field from unposed images with state-of-the-art accuracy while requiring ten times fewer views than adversarial approaches.
arXiv Detail & Related papers (2023-03-14T17:33:39Z)
- ParticleSfM: Exploiting Dense Point Trajectories for Localizing Moving Cameras in the Wild [57.37891682117178]
We present a robust dense indirect structure-from-motion method for videos that is based on dense correspondence from pairwise optical flow.
A novel neural network architecture is proposed for processing irregular point trajectory data.
Experiments on MPI Sintel dataset show that our system produces significantly more accurate camera trajectories.
arXiv Detail & Related papers (2022-07-19T09:19:45Z)
- Motion-from-Blur: 3D Shape and Motion Estimation of Motion-blurred Objects in Videos [115.71874459429381]
We propose a method for jointly estimating the 3D motion, 3D shape, and appearance of highly motion-blurred objects from a video.
Experiments on benchmark datasets demonstrate that our method outperforms previous methods for fast moving object deblurring and 3D reconstruction.
arXiv Detail & Related papers (2021-11-29T11:25:14Z)
- MBA-VO: Motion Blur Aware Visual Odometry [99.56896875807635]
Motion blur is one of the major challenges remaining for visual odometry methods.
In low-light conditions where longer exposure times are necessary, motion blur can appear even for relatively slow camera motions.
We present a novel hybrid visual odometry pipeline with a direct approach that explicitly models and estimates the camera's local trajectory within the exposure time.
arXiv Detail & Related papers (2021-03-25T09:02:56Z)
- Single Image Non-uniform Blur Kernel Estimation via Adaptive Basis Decomposition [1.854931308524932]
We propose a general, non-parametric model for dense non-uniform motion blur estimation.
We show that our method overcomes the limitations of existing non-uniform motion blur estimation.
arXiv Detail & Related papers (2021-02-01T18:02:31Z)
- Dense Pixel-wise Micro-motion Estimation of Object Surface by using Low Dimensional Embedding of Laser Speckle Pattern [4.713575447740915]
This paper proposes a method for estimating, at each pixel, micro-motion of an object that is too small to detect under a common camera-and-illumination setup.
The approach is based on the speckle pattern produced by the mutual interference of laser light on the object's surface, whose appearance changes continuously with the out-of-plane motion of the surface.
To compensate for both such micro-motion and large motion, the method estimates the motion parameters up to scale at each pixel by nonlinear embedding of the speckle pattern into a low-dimensional space.
arXiv Detail & Related papers (2020-10-31T03:03:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.