Single-shot ToF sensing with sub-mm precision using conventional CMOS
sensors
- URL: http://arxiv.org/abs/2212.00928v1
- Date: Fri, 2 Dec 2022 01:50:36 GMT
- Title: Single-shot ToF sensing with sub-mm precision using conventional CMOS
sensors
- Authors: Manuel Ballester, Heming Wang, Jiren Li, Oliver Cossairt, Florian
Willomitzer
- Abstract summary: We present a novel single-shot interferometric ToF camera targeted for precise 3D measurements of dynamic objects.
In contrast to conventional ToF cameras, our device uses only off-the-shelf CCD/CMOS detectors and works at their native chip resolution.
We present 3D measurements of small (cm-sized) objects with > 2 Mp point cloud resolution and up to sub-mm depth precision.
- Score: 7.114925332582435
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel single-shot interferometric ToF camera targeted for
precise 3D measurements of dynamic objects. The camera concept is based on
Synthetic Wavelength Interferometry, a technique that allows retrieval of depth
maps of objects with optically rough surfaces at submillimeter depth precision.
In contrast to conventional ToF cameras, our device uses only off-the-shelf
CCD/CMOS detectors and works at their native chip resolution (as of today,
theoretically up to 20 Mp and beyond). Moreover, we can obtain a full 3D model
of the object in a single shot, meaning that no temporal sequence of exposures or
temporal illumination modulation (such as amplitude or frequency modulation) is
necessary, which makes our camera robust against object motion.
In this paper, we introduce the novel camera concept and show first
measurements that demonstrate the capabilities of our system. We present 3D
measurements of small (cm-sized) objects with > 2 Mp point cloud resolution
(the native resolution of the detector used) and up to sub-mm depth precision. We also
report a "single-shot 3D video" acquisition and a first single-shot
"Non-Line-of-Sight" measurement. Our technique has great potential for
high-precision applications with dynamic object movement, e.g., in AR/VR,
industrial inspection, medical imaging, and imaging through scattering media
like fog or human tissue.
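For reference, the core synthetic-wavelength relations underlying this camera concept can be sketched in a few lines. The two laser wavelengths and the helper function below are illustrative assumptions for the sake of the example, not the authors' actual hardware configuration or implementation:

```python
import numpy as np

# Two closely spaced optical wavelengths (illustrative values, not the paper's lasers).
lambda_1 = 854.0e-9  # [m]
lambda_2 = 854.4e-9  # [m]

# Their beat produces a much longer synthetic wavelength, which sets the
# unambiguous depth range of the interferometric measurement.
synthetic_wavelength = lambda_1 * lambda_2 / abs(lambda_1 - lambda_2)  # ~1.8 mm here

def depth_from_synthetic_phase(delta_phi: float) -> float:
    """Map a (wrapped) synthetic-wavelength phase difference [rad] to depth [m].

    The factor 4*pi accounts for the round trip of the light, so depth is
    recovered modulo synthetic_wavelength / 2.
    """
    return synthetic_wavelength * delta_phi / (4.0 * np.pi)

print(f"synthetic wavelength: {synthetic_wavelength * 1e3:.2f} mm")
print(f"depth at a pi/2 phase difference: {depth_from_synthetic_phase(np.pi / 2) * 1e3:.3f} mm")
```

A longer synthetic wavelength enlarges the unambiguous depth range but generally lowers precision; the trade-off realized by the camera depends on its specific laser configuration, which the abstract does not specify.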
Related papers
- VFMM3D: Releasing the Potential of Image by Vision Foundation Model for Monocular 3D Object Detection [80.62052650370416]
Monocular 3D object detection holds significant importance across various applications, including autonomous driving and robotics.
In this paper, we present VFMM3D, an innovative framework that leverages the capabilities of Vision Foundation Models (VFMs) to accurately transform single-view images into LiDAR point cloud representations.
arXiv Detail & Related papers (2024-04-15T03:12:12Z)
- Towards 3D Vision with Low-Cost Single-Photon Cameras [24.711165102559438]
We present a method for reconstructing the 3D shape of arbitrary Lambertian objects from measurements by miniature, energy-efficient, low-cost single-photon cameras.
Our work draws a connection between image-based modeling and active range scanning and is a step towards 3D vision with single-photon cameras.
arXiv Detail & Related papers (2024-03-26T15:40:05Z)
- Virtually increasing the measurement frequency of LIDAR sensor utilizing a single RGB camera [1.3706331473063877]
This research proposes using a single RGB camera to virtually increase the frame rate of LIDAR sensors.
We achieve state-of-the-art performance on large public datasets in terms of accuracy and similarity to real measurements.
arXiv Detail & Related papers (2023-02-10T11:43:35Z)
- Monocular 3D Object Detection with Depth from Motion [74.29588921594853]
We take advantage of camera ego-motion for accurate object depth estimation and detection.
Our framework, named Depth from Motion (DfM), then uses the established geometry to lift 2D image features to the 3D space and detects 3D objects thereon.
Our framework outperforms state-of-the-art methods by a large margin on the KITTI benchmark.
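As background for the ego-motion idea sketched in this summary, here is the standard temporal-stereo triangulation it builds on; the hypothetical helper below illustrates only the geometric relation, not the DfM framework itself:

```python
# Hypothetical helper illustrating temporal-stereo triangulation: with a known
# ego-motion baseline between two frames and a matched feature's pixel disparity,
# metric depth follows from similar triangles. Not the DfM architecture.
def depth_from_ego_motion(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Triangulated depth [m] from focal length [px], baseline [m], and disparity [px]."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid triangulation")
    return focal_px * baseline_m / disparity_px

# Example: 720 px focal length, 0.5 m baseline from ego-motion, 12 px disparity -> 30 m.
print(depth_from_ego_motion(focal_px=720.0, baseline_m=0.5, disparity_px=12.0))
```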
arXiv Detail & Related papers (2022-07-26T15:48:46Z)
- Attentive and Contrastive Learning for Joint Depth and Motion Field Estimation [76.58256020932312]
Estimating the motion of the camera together with the 3D structure of the scene from a monocular vision system is a complex task.
We present a self-supervised learning framework for 3D object motion field estimation from monocular videos.
arXiv Detail & Related papers (2021-10-13T16:45:01Z)
- Multifocal Stereoscopic Projection Mapping [24.101349988126692]
Current stereoscopic projection mapping (PM) technology satisfies only binocular cues and cannot provide correct focus cues.
We propose a multifocal approach to mitigate the vergence-accommodation conflict (VAC) in stereoscopic PM.
A 3D CG object is projected from a synchronized high-speed projector only when the virtual image of the projected imagery is located at a desired distance.
arXiv Detail & Related papers (2021-10-08T06:13:10Z)
- M3DSSD: Monocular 3D Single Stage Object Detector [82.25793227026443]
We propose a Monocular 3D Single Stage object Detector (M3DSSD) with feature alignment and asymmetric non-local attention.
The proposed M3DSSD achieves significantly better performance than the monocular 3D object detection methods on the KITTI dataset.
arXiv Detail & Related papers (2021-03-24T13:09:11Z)
- CoMo: A novel co-moving 3D camera system [0.0]
CoMo is a co-moving camera system of two synchronized high speed cameras coupled with rotational stages.
We address the calibration of the external parameters: the positions of the cameras and their three angles (yaw, pitch, and roll) in the system's "home" configuration.
We evaluate the robustness and accuracy of the system by comparing reconstructed and measured 3D distances in what we call 3D tests, which show a relative error of the order of 1%.
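To make the three calibration angles concrete, here is a minimal sketch of composing an extrinsic rotation from yaw, pitch, and roll, assuming a Z-Y-X convention (the paper's exact parameterization may differ):

```python
import numpy as np

def rotation_from_yaw_pitch_roll(yaw: float, pitch: float, roll: float) -> np.ndarray:
    """Return R = Rz(yaw) @ Ry(pitch) @ Rx(roll); angles in radians (assumed convention)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx

# Together with a measured camera position t, this rotation defines the extrinsics:
# a world point X maps to camera coordinates as R.T @ (X - t).
```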
arXiv Detail & Related papers (2021-01-26T13:29:13Z)
- Miniscope3D: optimized single-shot miniature 3D fluorescence microscopy [8.3011168382078]
Miniature fluorescence microscopes capture only 2D information, and modifications that enable 3D capabilities increase the size and weight.
Here, we achieve the 3D capability by replacing the tube lens of a conventional 2D Miniscope with an optimized multifocal phase mask at the objective's aperture stop.
We demonstrate a prototype that is 17 mm tall and weighs 2.5 grams, achieving 2.76 μm lateral and 15 μm axial resolution across most of the 900×700×390 μm³ volume at 40 volumes per second.
arXiv Detail & Related papers (2020-10-12T01:19:31Z)
- Reinforced Axial Refinement Network for Monocular 3D Object Detection [160.34246529816085]
Monocular 3D object detection aims to extract the 3D position and properties of objects from a 2D input image.
Conventional approaches sample 3D bounding boxes from the space and infer the relationship between the target object and each of them; however, the probability of effective samples is relatively small in 3D space.
We propose to start from an initial prediction and refine it gradually towards the ground truth, changing only one 3D parameter in each step.
This requires designing a policy which gets a reward after several steps, and thus we adopt reinforcement learning to optimize it.
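A toy sketch of the one-parameter-per-step refinement loop described above is given below; the parameter names, step sizes, and the placeholder policy are assumptions, and the reinforcement-learning training used in the paper is not reproduced here:

```python
import random

# Toy parameterization of a 3D box (center, size, orientation) and per-step increments.
# These names and step sizes are assumptions for illustration only.
PARAMS = ["x", "y", "z", "w", "h", "l", "yaw"]
STEP = {"x": 0.1, "y": 0.1, "z": 0.1, "w": 0.05, "h": 0.05, "l": 0.05, "yaw": 0.05}

def refine(box: dict, policy, n_steps: int = 20) -> dict:
    """Refine a 3D box prediction by adjusting exactly one parameter per step."""
    box = dict(box)
    for _ in range(n_steps):
        param, direction = policy(box)  # the policy picks one parameter and a +/- direction
        box[param] += direction * STEP[param]
    return box

def random_policy(box: dict):
    """Placeholder policy; the paper trains this decision with reinforcement learning."""
    return random.choice(PARAMS), random.choice([-1.0, 1.0])

initial = {"x": 1.0, "y": 1.5, "z": 10.0, "w": 1.6, "h": 1.5, "l": 3.9, "yaw": 0.0}
print(refine(initial, random_policy))
```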
arXiv Detail & Related papers (2020-08-31T17:10:48Z)
- Kinematic 3D Object Detection in Monocular Video [123.7119180923524]
We propose a novel method for monocular video-based 3D object detection which carefully leverages kinematic motion to improve precision of 3D localization.
We achieve state-of-the-art performance on monocular 3D object detection and the Bird's Eye View tasks within the KITTI self-driving dataset.
arXiv Detail & Related papers (2020-07-19T01:15:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.