Camera Calibration without Camera Access -- A Robust Validation
Technique for Extended PnP Methods
- URL: http://arxiv.org/abs/2302.06949v1
- Date: Tue, 14 Feb 2023 10:09:34 GMT
- Authors: Emil Brissman and Per-Erik Forssén and Johan Edstedt
- Abstract summary: We propose a method to find the projection model from 2D-3D correspondences, together with a technique for validating it.
We demonstrate the effectiveness of our proposed validation in experiments on synthetic data simulating 2D detections and Lidar measurements.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A challenge in image-based metrology and forensics is intrinsic camera
calibration when the camera that was used is unavailable. The unavailability raises two
questions: the first is how to find the projection model that
describes the camera, and the second is how to detect incorrect models. In this
work, we use off-the-shelf extended PnP methods to find the model from 2D-3D
correspondences, and we propose a method for model validation. The most common
strategy for evaluating a projection model is to compare different models'
residual variances; however, this naive strategy cannot distinguish whether
the projection model is underfitted or overfitted. To this end, we
model the residual errors for each correspondence, scale all
residuals individually using a predicted variance, and test whether the new residuals are drawn
from a standard normal distribution. We demonstrate the effectiveness of our
proposed validation in experiments on synthetic data, simulating 2D detection
and Lidar measurements. Additionally, we provide experiments using data from an
actual scene and compare calibrations with and without camera access.
Finally, we use our method to validate annotations in MegaDepth.
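The validation criterion described in the abstract can be sketched in a few lines. This is an illustrative sketch only, not the authors' exact procedure: the learned variance predictor is replaced here by a known simulated noise level, and SciPy's one-sample Kolmogorov-Smirnov test stands in for the normality test.

```python
# Sketch of the validation idea: standardize per-correspondence residuals
# by a predicted standard deviation and test whether the result is drawn
# from a standard normal distribution. (Illustrative only; the paper's
# variance predictor is simulated here by a known noise level.)
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data: 500 residuals whose noise level varies per correspondence.
predicted_std = rng.uniform(0.5, 2.0, size=500)   # stand-in for predicted std
residuals = rng.normal(0.0, predicted_std)        # simulated residual errors

# Scale each residual individually by its predicted standard deviation.
z = residuals / predicted_std

# Kolmogorov-Smirnov test against N(0, 1): a large p-value means we cannot
# reject that the standardized residuals are standard normal, i.e. the
# projection model passes this under-/overfitting check.
statistic, p_value = stats.kstest(z, "norm")
print(f"KS statistic = {statistic:.3f}, p-value = {p_value:.3f}")
```

If the projection model were underfitted or overfitted, the standardized residuals would deviate from N(0, 1) and the test would reject, which plain residual-variance comparison cannot reveal.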
Related papers
- Importance of Disjoint Sampling in Conventional and Transformer Models for Hyperspectral Image Classification [2.1223532600703385]
This paper presents an innovative disjoint sampling approach for training SOTA models on Hyperspectral image classification (HSIC) tasks.
By separating training, validation, and test data without overlap, the proposed method facilitates a fairer evaluation of how well a model can classify pixels it was not exposed to during training or validation.
This rigorous methodology is critical for advancing SOTA models and their real-world application to large-scale land mapping with Hyperspectral sensors.
arXiv Detail & Related papers (2024-04-23T11:40:52Z) - Probabilistic Triangulation for Uncalibrated Multi-View 3D Human Pose Estimation [22.127170452402332]
This paper presents a novel Probabilistic Triangulation module that can be embedded in a calibrated 3D human pose estimation method.
Our method achieves a trade-off between estimation accuracy and generalizability.
arXiv Detail & Related papers (2023-09-09T11:03:37Z) - Learning Markerless Robot-Depth Camera Calibration and End-Effector Pose Estimation [0.0]
We present a learning-based markerless extrinsic calibration system that uses a depth camera and does not rely on simulation data.
We learn models for end-effector (EE) segmentation, single-frame rotation prediction and keypoint detection, from automatically generated real-world data.
Training on data from multiple camera poses and testing on previously unseen poses yields sub-centimeter and sub-deciradian average calibration and pose estimation errors.
arXiv Detail & Related papers (2022-12-15T00:53:42Z) - On-the-Fly Test-time Adaptation for Medical Image Segmentation [63.476899335138164]
Adapting the source model to target data distribution at test-time is an efficient solution for the data-shift problem.
We propose a new framework called Adaptive UNet where each convolutional block is equipped with an adaptive batch normalization layer.
During test-time, the model takes in just the new test image and generates a domain code to adapt the features of source model according to the test data.
arXiv Detail & Related papers (2022-03-10T18:51:29Z) - Predicting Out-of-Distribution Error with the Projection Norm [87.61489137914693]
Projection Norm predicts a model's performance on out-of-distribution data without access to ground truth labels.
We find that Projection Norm is the only approach that achieves non-trivial detection performance on adversarial examples.
arXiv Detail & Related papers (2022-02-11T18:58:21Z) - A Model for Multi-View Residual Covariances based on Perspective Deformation [88.21738020902411]
We derive a model for the covariance of the visual residuals in multi-view SfM, odometry and SLAM setups.
We validate our model with synthetic and real data and integrate it into photometric and feature-based Bundle Adjustment.
arXiv Detail & Related papers (2022-02-01T21:21:56Z) - Camera Distortion-aware 3D Human Pose Estimation in Video with Optimization-based Meta-Learning [23.200130129530653]
Existing 3D human pose estimation algorithms trained on distortion-free datasets suffer performance drop when applied to new scenarios with a specific camera distortion.
We propose a simple yet effective model for 3D human pose estimation in video that can quickly adapt to any distortion environment.
arXiv Detail & Related papers (2021-11-30T01:35:04Z) - Probabilistic Modeling for Human Mesh Recovery [73.11532990173441]
This paper focuses on the problem of 3D human reconstruction from 2D evidence.
We recast the problem as learning a mapping from the input to a distribution of plausible 3D poses.
arXiv Detail & Related papers (2021-08-26T17:55:11Z) - Uncertainty-Aware Camera Pose Estimation from Points and Lines [101.03675842534415]
Perspective-n-Point-and-Line (PnPL) aims at fast, accurate, and robust camera localization with respect to a 3D model from 2D-3D feature coordinates.
arXiv Detail & Related papers (2021-07-08T15:19:36Z) - Joint Noise-Tolerant Learning and Meta Camera Shift Adaptation for Unsupervised Person Re-Identification [60.36551512902312]
Unsupervised person re-identification (re-ID) aims to learn discriminative models from unlabeled data.
One popular method is to obtain pseudo-labels by clustering and use them to optimize the model.
In this paper, we propose a unified framework to solve both problems.
arXiv Detail & Related papers (2021-03-08T09:13:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.