Refractive Geometry for Underwater Domes
- URL: http://arxiv.org/abs/2108.06575v1
- Date: Sat, 14 Aug 2021 16:19:11 GMT
- Title: Refractive Geometry for Underwater Domes
- Authors: Mengkun She, David Nakath, Yifan Song, Kevin Köser
- Abstract summary: We show how to compute the center of refraction without knowledge of exact air, glass or water properties.
We propose a pure underwater calibration procedure to estimate the decentering from multiple images.
This estimate can either be used during adjustment to guide the mechanical position of the lens, or can be considered in photogrammetric underwater applications.
- Score: 3.24029503704305
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Underwater cameras are typically placed behind glass windows to protect them
from the water. Spherical glass, a dome port, is well suited for high water
pressures at great depth, allows for a large field of view, and avoids
refraction if a pinhole camera is positioned exactly at the sphere's center.
Adjusting a real lens perfectly to the dome center is a challenging task: one must
guide the centering process (e.g. by visual servoing), measure the alignment
quality, and mechanically perform the alignment. Consequently, such systems are
prone to being decentered by some
offset, leading to challenging refraction patterns at the sphere that
invalidate the pinhole camera model. We show that the overall camera system
becomes an axial camera, even for thick domes as used for deep-sea exploration,
and provide a non-iterative way to compute the center of refraction without
requiring knowledge of exact air, glass or water properties. We also analyze
the refractive geometry at the sphere, examining effects such as forward vs.
backward decentering and iso-refraction curves, and obtain a 6th-degree
polynomial equation for forward projection of 3D points in thin domes. We then propose a
pure underwater calibration procedure to estimate the decentering from multiple
images. This estimate can either be used during adjustment to guide the
mechanical position of the lens, or can be considered in photogrammetric
underwater applications.
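As a rough illustration of the refractive geometry described above, the sketch below traces a camera ray through a dome port using the vector form of Snell's law. The radii, refractive indices, and decentering offset are illustrative assumptions, not values from the paper, and this is plain forward ray tracing rather than the paper's non-iterative center-of-refraction solver or its 6th-degree projection polynomial.

```python
import numpy as np

# Illustrative constants (assumed, not from the paper).
N_AIR, N_GLASS, N_WATER = 1.0, 1.49, 1.33
R_IN, R_OUT = 0.05, 0.057  # inner/outer dome radius [m], dome centered at origin

def refract(d, n, eta):
    """Vector Snell's law. d: unit ray direction; n: unit normal with
    dot(n, d) < 0; eta = n_incident / n_transmitted."""
    cos_i = -np.dot(n, d)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n

def sphere_exit(o, d, r):
    """Outgoing intersection of the ray o + t*d with the sphere |x| = r."""
    b = np.dot(d, o)
    t = -b + np.sqrt(b**2 - (np.dot(o, o) - r**2))
    return o + t * d

def trace(cam_pos, d):
    """Refract a ray leaving cam_pos (inside the dome) into the water."""
    p = sphere_exit(cam_pos, d, R_IN)
    d = refract(d, -p / np.linalg.norm(p), N_AIR / N_GLASS)    # air -> glass
    p = sphere_exit(p, d, R_OUT)
    d = refract(d, -p / np.linalg.norm(p), N_GLASS / N_WATER)  # glass -> water
    return p, d

d0 = np.array([0.2, 0.1, 1.0])
d0 /= np.linalg.norm(d0)

# Camera at the dome center: rays cross the glass along the surface
# normal and leave undeviated, so the pinhole model holds.
_, d_centered = trace(np.zeros(3), d0)
print(np.allclose(d_centered, d0))   # True

# Camera decentered by 5 mm: the same ray is bent at both interfaces.
_, d_off = trace(np.array([0.0, 0.0, 0.005]), d0)
print(np.allclose(d_off, d0))        # False
```

The centered case shows why a perfectly aligned dome avoids refraction entirely, while any offset bends every non-radial ray, consistent with the axial-camera behavior the abstract describes.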
Related papers
- OceanSplat: Object-aware Gaussian Splatting with Trinocular View Consistency for Underwater Scene Reconstruction [4.325717217536016]
OceanSplat is a novel 3D Gaussian Splatting-based approach for representing 3D geometry in underwater scenes.
We show that OceanSplat substantially outperforms existing methods for both scene reconstruction and restoration in scattering media.
arXiv Detail & Related papers (2026-01-08T14:38:39Z)
- PFDepth: Heterogeneous Pinhole-Fisheye Joint Depth Estimation via Distortion-aware Gaussian-Splatted Volumetric Fusion [61.6340987158734]
We present the first pinhole-fisheye framework for heterogeneous multi-view depth estimation, PFDepth.
PFDepth employs a unified architecture capable of processing arbitrary combinations of pinhole and fisheye cameras with varied intrinsics and extrinsics.
We show that PFDepth achieves state-of-the-art performance on the KITTI-360 and RealHet datasets over current mainstream depth networks.
arXiv Detail & Related papers (2025-09-30T09:38:59Z)
- DoF-Gaussian: Controllable Depth-of-Field for 3D Gaussian Splatting [52.52398576505268]
We introduce DoF-Gaussian, a controllable depth-of-field method for 3D-GS.
We develop a lens-based imaging model based on geometric optics principles to control DoF effects.
Our framework is customizable and supports various interactive applications.
arXiv Detail & Related papers (2025-03-02T05:57:57Z)
- NeuroPump: Simultaneous Geometric and Color Rectification for Underwater Images [52.863935209616635]
Underwater image restoration aims to remove geometric and color distortions due to water refraction, absorption and scattering.
We propose NeuroPump, a self-supervised method to simultaneously optimize and rectify underwater geometry and color as if water were pumped out.
arXiv Detail & Related papers (2024-12-20T13:40:28Z)
- Online Refractive Camera Model Calibration in Visual Inertial Odometry [13.462106704905132]
This paper presents a general refractive camera model and online co-estimation of odometry and the refractive index of unknown media.
The refractive index is estimated online as a state variable of a monocular visual-inertial odometry framework.
The method was verified on data collected using an underwater robot traversing inside a pool.
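The physical relation underlying such refractive-index estimation is Snell's law. As a minimal, hypothetical sketch (a flat interface with synthetic angle pairs, nothing like the paper's visual-inertial state estimator), the index can be recovered by least squares:

```python
import numpy as np

def estimate_index(theta_air, theta_medium):
    """Least-squares n from Snell's law: sin(theta_air) = n * sin(theta_medium).
    Both inputs are arrays of paired angles in radians (synthetic here)."""
    a, b = np.sin(theta_air), np.sin(theta_medium)
    return float(np.dot(a, b) / np.dot(b, b))

n_true = 1.333                              # refractive index of water
theta_m = np.linspace(0.05, 0.5, 20)        # refracted angles [rad]
theta_a = np.arcsin(np.clip(n_true * np.sin(theta_m), -1.0, 1.0))
print(round(estimate_index(theta_a, theta_m), 3))  # 1.333
```

In the paper the same unknown enters the camera model and is co-estimated with odometry; this toy version only shows why angle observations constrain the index at all.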
arXiv Detail & Related papers (2024-09-18T15:48:05Z)
- RoFIR: Robust Fisheye Image Rectification Framework Impervious to Optical Center Deviation [88.54817424560056]
We propose a distortion vector map (DVM) that measures the degree and direction of local distortion.
By learning the DVM, the model can independently identify local distortions at each pixel without relying on global distortion patterns.
In the pre-training stage, it predicts the distortion vector map and perceives the local distortion features of each pixel.
In the fine-tuning stage, it predicts a pixel-wise flow map for deviated fisheye image rectification.
arXiv Detail & Related papers (2024-06-27T06:38:56Z)
- A Calibration Tool for Refractive Underwater Vision [0.0]
We provide the first open source implementation of an underwater refractive camera calibration toolbox.
It allows end-to-end calibration of underwater vision systems, including camera, stereo and housing calibration.
arXiv Detail & Related papers (2024-05-28T10:05:10Z)
- DFR: Depth from Rotation by Uncalibrated Image Rectification with Latitudinal Motion Assumption [6.369764116066747]
We propose Depth-from-Rotation (DfR), a novel image rectification solution for uncalibrated rotating cameras.
Specifically, we model the motion of a rotating camera as the camera rotates on a sphere with fixed latitude.
We derive a 2-point analytical solver from directly computing the rectified transformations on the two images.
arXiv Detail & Related papers (2023-07-11T09:11:22Z)
- Deep Rotation Correction without Angle Prior [57.76737888499145]
We propose a new and practical task, named Rotation Correction, to automatically correct the tilt with high content fidelity.
This task can be easily integrated into image editing applications, allowing users to correct the rotated images without any manual operations.
We leverage a neural network to predict the optical flows that can warp the tilted images to be perceptually horizontal.
arXiv Detail & Related papers (2022-07-07T02:46:27Z)
- FisheyeEX: Polar Outpainting for Extending the FoV of Fisheye Lens [84.12722334460022]
The fisheye lens is increasingly used in computational photography and assisted driving because of its wide field of view (FoV).
In this paper, we present a FisheyeEX method that extends the FoV of the fisheye lens by outpainting the invalid regions.
The results demonstrate that our approach significantly outperforms the state-of-the-art methods, gaining around 27% more content beyond the original fisheye image.
arXiv Detail & Related papers (2022-06-12T21:38:50Z)
- Depth360: Monocular Depth Estimation using Learnable Axisymmetric Camera Model for Spherical Camera Image [2.3859169601259342]
We propose a learnable axisymmetric camera model which accepts distorted spherical camera images with two fisheye camera images.
We trained our models with a photo-realistic simulator to generate ground truth depth images.
We demonstrate the efficacy of our method using the spherical camera images from the GO Stanford dataset and pinhole camera images from the KITTI dataset.
arXiv Detail & Related papers (2021-10-20T07:21:04Z)
- Underwater 3D Reconstruction Using Light Fields [41.23269538226359]
We present an underwater 3D reconstruction solution using light field cameras.
We first develop a light field camera calibration algorithm that simultaneously estimates the camera parameters.
We then design a novel depth estimation algorithm for 3D reconstruction.
arXiv Detail & Related papers (2021-09-05T16:23:39Z)
- Minimal Solutions for Panoramic Stitching Given Gravity Prior [53.047330182598124]
We propose new minimal solutions to panoramic image stitching of images taken by cameras with coinciding optical centers.
We consider four practical camera configurations, assuming unknown fixed or varying focal length with or without radial distortion.
The solvers are tested both on synthetic scenes and on more than 500k real image pairs from the Sun360 dataset and from scenes captured by us using two smartphones equipped with IMUs.
arXiv Detail & Related papers (2020-12-01T13:17:36Z)
- Neural Ray Surfaces for Self-Supervised Learning of Depth and Ego-motion [51.19260542887099]
We show that self-supervision can be used to learn accurate depth and ego-motion estimation without prior knowledge of the camera model.
Inspired by the geometric model of Grossberg and Nayar, we introduce Neural Ray Surfaces (NRS), convolutional networks that represent pixel-wise projection rays.
We demonstrate the use of NRS for self-supervised learning of visual odometry and depth estimation from raw videos obtained using a wide variety of camera systems.
arXiv Detail & Related papers (2020-08-15T02:29:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.