On-line non-overlapping camera calibration net
- URL: http://arxiv.org/abs/2002.08005v1
- Date: Wed, 19 Feb 2020 04:59:11 GMT
- Title: On-line non-overlapping camera calibration net
- Authors: Zhao Fangda, Toru Tamaki, Takio Kurita, Bisser Raytchev, Kazufumi Kaneda
- Abstract summary: We propose an on-line method for inter-camera pose estimation.
Experiments with simulations and the KITTI dataset show the proposed method to be effective in simulation.
- Score: 2.4569090161971743
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose an easy-to-use non-overlapping camera calibration method. First,
successive images are fed to a PoseNet-based network to obtain ego-motion of
cameras between frames. Next, the pose between cameras is estimated. Instead
of using a batch method, we propose an on-line method for inter-camera pose
estimation. Furthermore, we implement the entire procedure on a computation
graph. Experiments with simulations and the KITTI dataset show the proposed
method to be effective in simulation.
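The constraint behind this setup is the classical A_i X = X B_i relation: A_i and B_i are the per-frame ego-motions of two rigidly mounted, non-overlapping cameras, and X is the fixed inter-camera pose. The sketch below is not the paper's PoseNet-based, computation-graph implementation; it is a minimal classical formulation (class and function names are illustrative) that assumes ego-motion rotations are already available as quaternions from some odometry or network front-end, and shows how the rotation part of X can be accumulated on-line rather than in a batch.

```python
# Minimal on-line sketch of the rotational part of the A_i X = X B_i constraint
# used in non-overlapping inter-camera calibration. This is a classical hand-eye
# style formulation, not the paper's network-based pipeline; the per-frame
# ego-motion quaternions q_a, q_b (w, x, y, z) are assumed to come from an
# external odometry or PoseNet-like front-end.
import numpy as np

def quat_mult_matrices(q):
    """4x4 left (L) and right (R) quaternion multiplication matrices of q."""
    w, x, y, z = q
    L = np.array([[w, -x, -y, -z],
                  [x,  w, -z,  y],
                  [y,  z,  w, -x],
                  [z, -y,  x,  w]])
    R = np.array([[w, -x, -y, -z],
                  [x,  w,  z, -y],
                  [y, -z,  w,  x],
                  [z,  y, -x,  w]])
    return L, R

class OnlineInterCameraRotation:
    """Accumulates (L(q_a) - R(q_b)) constraints; the quaternion of the fixed
    inter-camera rotation is the null-space direction of the accumulated system."""
    def __init__(self):
        self.M = np.zeros((4, 4))

    def update(self, q_a, q_b):
        La, _ = quat_mult_matrices(q_a)
        _, Rb = quat_mult_matrices(q_b)
        C = La - Rb                    # C @ q_x = 0 holds for the true q_x
        self.M += C.T @ C              # on-line accumulation, no batch storage

    def estimate(self):
        # Eigenvector of the smallest eigenvalue of M (eigh sorts ascending).
        _, V = np.linalg.eigh(self.M)
        q_x = V[:, 0]
        return q_x / np.linalg.norm(q_x)
```

The translation of X could then be recovered with a running least squares on (R_a - I) t_x = R_x t_b - t_a; per the abstract, the paper instead implements the entire procedure on a computation graph.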
Related papers
- VICAN: Very Efficient Calibration Algorithm for Large Camera Networks [49.17165360280794]
We introduce a novel methodology that extends Pose Graph Optimization techniques.
We consider the bipartite graph encompassing cameras, object poses evolving dynamically, and camera-object relative transformations at each time step.
Our framework retains compatibility with traditional PGO solvers, but its efficacy benefits from a custom-tailored optimization scheme.
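To make the edge structure concrete, a single camera-object constraint in such a bipartite pose graph can be written as an SE(3) residual that a standard PGO or least-squares solver can consume. This is a minimal illustrative sketch under assumed 4x4-matrix conventions, not VICAN's actual solver:

```python
# Hypothetical sketch of one camera-object edge residual, the kind of term a
# bipartite pose-graph optimizer would minimize. Not VICAN's implementation.
import numpy as np
from scipy.spatial.transform import Rotation as Rot

def camera_object_residual(T_cam, T_obj, Z):
    """6-vector residual of one edge.

    T_cam, T_obj : 4x4 world poses of a camera node and an object node.
    Z            : 4x4 measured camera-object relative transform at this time step.
    """
    T_pred = np.linalg.inv(T_cam) @ T_obj        # predicted relative transform
    E = np.linalg.inv(Z) @ T_pred                # identity when prediction matches
    r_rot = Rot.from_matrix(E[:3, :3]).as_rotvec()
    r_trans = E[:3, 3]
    return np.concatenate([r_rot, r_trans])      # stack over edges for a solver
```

Stacking such residuals over all cameras, objects, and time steps gives the bipartite graph the summary describes; the custom-tailored optimization scheme is what exploits that structure efficiently.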
arXiv Detail & Related papers (2024-03-25T17:47:03Z)
- Cameras as Rays: Pose Estimation via Ray Diffusion [54.098613859015856]
Estimating camera poses is a fundamental task for 3D reconstruction and remains challenging given sparsely sampled views.
We propose a distributed representation of camera pose that treats a camera as a bundle of rays.
Our proposed methods, both regression- and diffusion-based, demonstrate state-of-the-art performance on camera pose estimation on CO3D.
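For intuition about the representation, a calibrated camera can be converted into such a ray bundle by back-projecting sampled pixels. The sketch below is an illustration only (not the paper's learned regression or diffusion model), assuming a pinhole camera with intrinsics K and a camera-to-world pose (R, t):

```python
# Illustrative only: turn a calibrated pinhole camera into a "bundle of rays",
# one Plucker-style ray (direction, moment) per sampled pixel. A hand-written
# sketch of the representation idea, not the paper's ray diffusion model.
import numpy as np

def camera_to_rays(K, R, t, pixels):
    """K: 3x3 intrinsics; (R, t): camera-to-world rotation/translation;
    pixels: (N, 2) array of (u, v) coordinates. Returns (N, 6) rays."""
    uv1 = np.hstack([pixels, np.ones((len(pixels), 1))])   # homogeneous pixels
    dirs_cam = (np.linalg.inv(K) @ uv1.T).T                 # back-project to camera frame
    dirs_world = (R @ dirs_cam.T).T
    dirs_world /= np.linalg.norm(dirs_world, axis=1, keepdims=True)
    center = t                                              # camera center in world frame
    moments = np.cross(np.broadcast_to(center, dirs_world.shape), dirs_world)
    return np.hstack([dirs_world, moments])
```

The appeal of the representation, per the summary, is that camera poses can then be recovered from predicted ray bundles by either the regression- or diffusion-based model.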
arXiv Detail & Related papers (2024-02-22T18:59:56Z)
- Learning Markerless Robot-Depth Camera Calibration and End-Effector Pose Estimation [0.0]
We present a learning-based markerless extrinsic calibration system that uses a depth camera and does not rely on simulation data.
We learn models for end-effector (EE) segmentation, single-frame rotation prediction and keypoint detection, from automatically generated real-world data.
Training on data from multiple camera poses and testing on previously unseen poses yields sub-centimeter and sub-deciradian average calibration and pose estimation errors.
arXiv Detail & Related papers (2022-12-15T00:53:42Z)
- RelPose: Predicting Probabilistic Relative Rotation for Single Objects in the Wild [73.1276968007689]
We describe a data-driven method for inferring the camera viewpoints given multiple images of an arbitrary object.
We show that our approach outperforms state-of-the-art SfM and SLAM methods given sparse images on both seen and unseen categories.
arXiv Detail & Related papers (2022-08-11T17:59:59Z)
- Extrinsic Camera Calibration with Semantic Segmentation [60.330549990863624]
We present an extrinsic camera calibration approach that automatizes the parameter estimation by utilizing semantic segmentation information.
Our approach relies on a coarse initial measurement of the camera pose and builds on lidar sensors mounted on a vehicle.
We evaluate our method on simulated and real-world data to demonstrate low error measurements in the calibration results.
arXiv Detail & Related papers (2022-08-08T07:25:03Z)
- Cross-Camera Trajectories Help Person Retrieval in a Camera Network [124.65912458467643]
Existing methods often rely on purely visual matching or consider temporal constraints but ignore the spatial information of the camera network.
We propose a pedestrian retrieval framework based on cross-camera trajectory generation, which integrates both temporal and spatial information.
To verify the effectiveness of our method, we construct the first cross-camera pedestrian trajectory dataset.
arXiv Detail & Related papers (2022-04-27T13:10:48Z)
- Self-Supervised Camera Self-Calibration from Video [34.35533943247917]
We propose a learning algorithm to regress per-sequence calibration parameters using an efficient family of general camera models.
Our procedure achieves self-calibration results with sub-pixel reprojection error, outperforming other learning-based methods.
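As one example of such a parametric general camera model (chosen here purely for illustration; the summary does not name the exact family), the Unified Camera Model projects a 3D point with a handful of differentiable parameters, so a self-supervised reprojection or photometric loss can update them per sequence:

```python
# Unified Camera Model (UCM) projection as an example of a low-parameter
# "general camera model": fx, fy, cx, cy plus one alpha parameter cover
# pinhole and fisheye-style cameras. An illustrative choice; the paper's
# exact model family is not stated in the summary above.
import numpy as np

def ucm_project(points, fx, fy, cx, cy, alpha):
    """points: (N, 3) 3D points in the camera frame. Returns (N, 2) pixels."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    d = np.sqrt(x**2 + y**2 + z**2)
    denom = alpha * d + (1.0 - alpha) * z      # alpha = 0 reduces to a pinhole
    u = fx * x / denom + cx
    v = fy * y / denom + cy
    return np.stack([u, v], axis=1)
```

Because the projection is smooth in (fx, fy, cx, cy, alpha), gradients of a reprojection loss can flow into the calibration parameters themselves, which is the spirit of the per-sequence self-calibration summarized above.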
arXiv Detail & Related papers (2021-12-06T19:42:05Z)
- CTRL-C: Camera calibration TRansformer with Line-Classification [22.092637979495358]
We propose Camera calibration TRansformer with Line-Classification (CTRL-C), an end-to-end neural network-based approach to single image camera calibration.
Our experiments demonstrate that CTRL-C outperforms the previous state-of-the-art methods on the Google Street View and SUN360 datasets.
arXiv Detail & Related papers (2021-09-06T06:30:38Z)
- How to Calibrate Your Event Camera [58.80418612800161]
We propose a generic event camera calibration framework using image reconstruction.
We show that neural-network-based image reconstruction is well suited for the task of intrinsic and extrinsic calibration of event cameras.
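A hedged sketch of that overall recipe: once intensity frames have been reconstructed from events (the reconstruction network is omitted here and `reconstruct_frames` is a hypothetical placeholder), standard pattern-based calibration applies unchanged.

```python
# Sketch of calibrating from reconstructed frames. `reconstruct_frames` is a
# placeholder for an event-to-image reconstruction network (not provided here);
# the calibration itself is plain OpenCV checkerboard calibration.
import numpy as np
import cv2

def calibrate_from_events(event_chunks, board_size=(9, 6), square=0.03):
    frames = reconstruct_frames(event_chunks)        # hypothetical reconstruction step
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square
    obj_pts, img_pts = [], []
    for gray in frames:                              # expects 8-bit grayscale images
        found, corners = cv2.findChessboardCorners(gray, board_size)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    # Intrinsics K, distortion, and per-view extrinsics from the detected corners.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, frames[0].shape[::-1], None, None)
    return K, dist, rms
```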
arXiv Detail & Related papers (2021-05-26T07:06:58Z)
- Calibration Venus: An Interactive Camera Calibration Method Based on Search Algorithm and Pose Decomposition [2.878441608970396]
The interactive calibration method based on a planar board is becoming popular in the camera calibration field due to its repeatability and operational advantages.
The existing methods select suggestions from a fixed dataset of pre-defined poses based on subjective experience, which leads to a certain degree of one-sidedness.
arXiv Detail & Related papers (2020-09-13T12:12:10Z)
- Unsupervised Learning of Camera Pose with Compositional Re-estimation [10.251550038802343]
Given an input video sequence, our goal is to estimate the camera pose (i.e. the camera motion) between consecutive frames.
We propose an alternative approach that utilizes a compositional re-estimation process for camera pose estimation.
Our approach significantly improves the predicted camera motion both quantitatively and visually.
arXiv Detail & Related papers (2020-01-17T18:59:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.