Automatic Calibration of a Multi-Camera System with Limited Overlapping Fields of View for 3D Surgical Scene Reconstruction
- URL: http://arxiv.org/abs/2501.16221v2
- Date: Tue, 28 Jan 2025 20:10:17 GMT
- Title: Automatic Calibration of a Multi-Camera System with Limited Overlapping Fields of View for 3D Surgical Scene Reconstruction
- Authors: Tim Flückiger, Jonas Hein, Valery Fischer, Philipp Fürnstahl, Lilian Calvet
- Abstract summary: The purpose of this study is to develop an automated and accurate external camera calibration method for 3D surgical scene reconstruction (3D-SSR).
We contribute a novel, fast, and fully automatic calibration method based on the projection of multi-scale markers (MSMs) using a ceiling-mounted projector.
- Score: 0.7165255458140439
- Abstract: The purpose of this study is to develop an automated and accurate external camera calibration method for multi-camera systems used in 3D surgical scene reconstruction (3D-SSR), eliminating the need for operator intervention or specialized expertise. The method specifically addresses the problem of limited overlapping fields of view caused by significant variations in optical zoom levels and camera locations. We contribute a novel, fast, and fully automatic calibration method based on the projection of multi-scale markers (MSMs) using a ceiling-mounted projector. MSMs consist of 2D patterns projected at varying scales, ensuring accurate extraction of well-distributed point correspondences across significantly different viewpoints and zoom levels. Validation is performed using both synthetic and real data captured in a mock-up OR, with comparisons to traditional manual marker-based methods as well as markerless calibration methods. The method achieves accuracy comparable to manual, operator-dependent calibration methods while exhibiting higher robustness under conditions of significant differences in zoom levels. Additionally, we show that state-of-the-art Structure-from-Motion (SfM) pipelines are ineffective in 3D-SSR settings, even when additional texture is projected onto the OR floor. The use of a ceiling-mounted entry-level projector proves to be an effective alternative to operator-dependent, traditional marker-based methods, paving the way for fully automated 3D-SSR.
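To connect the abstract to concrete geometry: once the projected markers have been detected in a camera image and their positions on the (planar) OR floor are known in a common metric frame, each camera's extrinsics follow from a standard planar PnP solve, and relative poses between cameras follow by composing the results. The sketch below illustrates only that generic step with OpenCV as an assumption about how such correspondences could be consumed; the multi-scale marker design, detection, and projector setup described in the paper are not reproduced, and all function and variable names are illustrative.

```python
import numpy as np
import cv2


def camera_pose_from_floor_markers(floor_xy, image_uv, K, dist):
    """Pose of one camera w.r.t. the floor frame from projected-marker correspondences.

    floor_xy : (N, 2) marker centers on the floor plane, in a common metric frame.
    image_uv : (N, 2) corresponding detections in this camera's image (pixels).
    K, dist  : camera intrinsics and distortion coefficients (assumed pre-calibrated).
    Returns (R, t) such that X_cam = R @ X_floor + t.
    """
    # The markers lie on the floor plane, so their 3D coordinates have z = 0.
    object_pts = np.hstack([floor_xy, np.zeros((len(floor_xy), 1))]).astype(np.float64)
    ok, rvec, tvec = cv2.solvePnP(
        object_pts, image_uv.astype(np.float64), K, dist,
        flags=cv2.SOLVEPNP_IPPE)            # planar-target PnP solver
    if not ok:
        raise RuntimeError("PnP failed: need >= 4 well-distributed markers")
    R, _ = cv2.Rodrigues(rvec)              # rotation vector -> 3x3 matrix
    return R, tvec.reshape(3)


# Relative pose between cameras i and j, both expressed in the floor frame:
#   R_ij = R_j @ R_i.T        t_ij = t_j - R_ij @ t_i
```

The multi-scale aspect of the MSMs matters because, per the abstract, a single pattern scale cannot be reliably extracted across strongly differing zoom levels and viewpoints; the sketch above assumes that detection problem is already solved.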
Related papers
- Discovering an Image-Adaptive Coordinate System for Photography Processing [51.164345878060956]
We propose a novel algorithm, IAC, to learn an image-adaptive coordinate system in the RGB color space before performing curve operations.
This end-to-end trainable approach enables us to efficiently adjust images with a jointly learned image-adaptive coordinate system and curves.
arXiv Detail & Related papers (2025-01-11T06:20:07Z) - Neural Real-Time Recalibration for Infrared Multi-Camera Systems [2.249916681499244]
There are no learning-free or neural techniques for real-time recalibration of infrared multi-camera systems.
We propose a neural network-based method capable of dynamic real-time calibration.
arXiv Detail & Related papers (2024-10-18T14:37:37Z) - Kalib: Markerless Hand-Eye Calibration with Keypoint Tracking [52.4190876409222]
Hand-eye calibration involves estimating the transformation between the camera and the robot.
Recent advancements in deep learning offer markerless techniques, but they present challenges.
We propose Kalib, an automatic and universal markerless hand-eye calibration pipeline; a generic solver for the underlying AX = XB problem is sketched after this list.
arXiv Detail & Related papers (2024-08-20T06:03:40Z) - A Minimal Set of Parameters Based Depth-Dependent Distortion Model and Its Calibration Method for Stereo Vision Systems [6.486261075483768]
We propose a depth-dependent distortion model based on a minimal set of parameters (MDM).
The MDM considers the radial and decentering distortions of the lens to improve the accuracy of stereo vision systems.
We present an easy and flexible calibration method for the MDM of stereo vision systems with a commonly used planar pattern.
arXiv Detail & Related papers (2024-04-30T03:58:19Z) - MM3DGS SLAM: Multi-modal 3D Gaussian Splatting for SLAM Using Vision, Depth, and Inertial Measurements [59.70107451308687]
We show for the first time that using 3D Gaussians for map representation with unposed camera images and inertial measurements can enable accurate SLAM.
Our method, MM3DGS, addresses the limitations of prior map representations by enabling faster rendering, scale awareness, and improved trajectory tracking.
We also release a multi-modal dataset, UT-MM, collected from a mobile robot equipped with a camera and an inertial measurement unit.
arXiv Detail & Related papers (2024-04-01T04:57:41Z) - W-HMR: Monocular Human Mesh Recovery in World Space with Weak-Supervised Calibration [57.37135310143126]
Previous methods for 3D motion recovery from monocular images often fall short due to reliance on camera coordinates.
We introduce W-HMR, a weak-supervised calibration method that predicts "reasonable" focal lengths based on body distortion information.
We also present the OrientCorrect module, which corrects body orientation for plausible reconstructions in world space.
arXiv Detail & Related papers (2023-11-29T09:02:07Z) - P2O-Calib: Camera-LiDAR Calibration Using Point-Pair Spatial Occlusion Relationship [1.6921147361216515]
We propose a novel target-less calibration approach based on 2D-3D edge point extraction using the occlusion relationship in 3D space.
Our method achieves low error and high robustness, which can benefit practical applications relying on high-quality Camera-LiDAR calibration.
arXiv Detail & Related papers (2023-11-04T14:32:55Z) - EasyHeC: Accurate and Automatic Hand-eye Calibration via Differentiable Rendering and Space Exploration [49.90228618894857]
We introduce EasyHeC, a new markerless, white-box approach to hand-eye calibration that delivers superior accuracy and robustness.
We propose to use two key technologies: differentiable rendering-based camera pose optimization and consistency-based joint space exploration.
Our evaluation demonstrates superior performance on synthetic and real-world datasets.
arXiv Detail & Related papers (2023-05-02T03:49:54Z) - Stereo camera system calibration: the need of two sets of parameters [0.0]
The reconstruction of a scene via a stereo-camera system is a two-step process.
First, images from different cameras are matched to identify the set of point-to-point correspondences that will then be reconstructed in the real world.
We propose to calibrate the system twice to estimate two different sets of parameters.
arXiv Detail & Related papers (2021-01-14T17:03:17Z) - Lightweight Multi-View 3D Pose Estimation through Camera-Disentangled Representation [57.11299763566534]
We present a solution to recover 3D pose from multi-view images captured with spatially calibrated cameras.
We exploit 3D geometry to fuse input images into a unified latent representation of pose, which is disentangled from camera viewpoints.
Our architecture then conditions the learned representation on camera projection operators to produce accurate per-view 2D detections.
arXiv Detail & Related papers (2020-04-05T12:52:29Z) - Spatiotemporal Camera-LiDAR Calibration: A Targetless and Structureless Approach [32.15405927679048]
We propose a targetless and structureless camera-LiDAR calibration method.
Our method combines a closed-form solution with a structureless bundle adjustment, where the coarse-to-fine approach does not require an initial guess on the temporal parameters.
We demonstrate the accuracy and robustness of the proposed method through both simulation and real data experiments.
arXiv Detail & Related papers (2020-01-17T07:25:59Z)
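The Kalib and EasyHeC entries above both target hand-eye calibration, i.e., recovering the rigid transform between a robot and a camera from paired pose observations. As a point of reference (and not a description of either paper's pipeline), the classical AX = XB formulation can be solved with OpenCV's stock routine; the input pose lists and names below are assumptions for illustration.

```python
import cv2


def solve_hand_eye(R_gripper2base, t_gripper2base, R_target2cam, t_target2cam):
    """Classical AX = XB hand-eye solve for an eye-in-hand setup.

    Each argument is a list with one entry per robot station: gripper poses come
    from the robot's forward kinematics, target poses from the camera (e.g. a
    calibration board or tracked keypoints, left unspecified here).
    Returns the camera pose expressed in the gripper frame.
    """
    R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
        R_gripper2base, t_gripper2base,
        R_target2cam, t_target2cam,
        method=cv2.CALIB_HAND_EYE_TSAI)     # Tsai-Lenz closed-form variant
    return R_cam2gripper, t_cam2gripper
```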