Two-Photon Interference LiDAR Imaging
- URL: http://arxiv.org/abs/2206.09661v1
- Date: Mon, 20 Jun 2022 09:08:51 GMT
- Title: Two-Photon Interference LiDAR Imaging
- Authors: Robbie Murray and Ashley Lyons
- Abstract summary: We present a quantum interference inspired approach to LiDAR which achieves OCT depth resolutions without the need for high levels of stability.
We demonstrate depth imaging capabilities with an effective impulse response of 70 μm, thereby allowing ranging and multiple reflections to be discerned with much higher resolution than conventional LiDAR approaches.
This enhanced resolution opens up avenues for LiDAR in 3D facial recognition, and small feature detection/tracking as well as enhancing the capabilities of more complex time-of-flight methods such as imaging through obscurants and non-line-of-sight imaging.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Optical Coherence Tomography (OCT) is a key 3D imaging technology that
provides micron scale depth resolution for bio-imaging. This resolution
substantially surpasses what is typically achieved in Light Detection and
Ranging (LiDAR), which is often limited to the millimetre scale due to the
impulse response of the detection electronics. However, the lack of coherence
in LiDAR scenes, arising from mechanical motion for example, makes OCT
practically infeasible. Here we present a quantum interference inspired
approach to LiDAR which achieves OCT depth resolutions without the need for
high levels of stability. We demonstrate depth imaging capabilities with an
effective impulse response of 70 μm, thereby allowing ranging and multiple
reflections to be discerned with much higher resolution than conventional LiDAR
approaches. This enhanced resolution opens up avenues for LiDAR in 3D facial
recognition, and small feature detection/tracking as well as enhancing the
capabilities of more complex time-of-flight methods such as imaging through
obscurants and non-line-of-sight imaging.
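
As a rough numerical illustration of the claimed resolution, the sketch below converts the 70 μm effective impulse response into the equivalent round-trip timing width via dz = c·dt/2 and recovers a toy surface depth by locating the minimum of a scanned two-photon coincidence dip. This is a minimal sketch under stated assumptions (a Gaussian dip shape, unit visibility, and arbitrary scan parameters), not the authors' implementation.

```python
# Toy two-photon (Hong-Ou-Mandel style) depth estimate. Illustrative only:
# the Gaussian dip model, unit visibility, and all numbers are assumptions,
# not the paper's actual setup.
import numpy as np

C = 3e8  # speed of light, m/s

def coincidence_rate(delay, dip_center, dip_width, visibility=1.0):
    """Coincidence probability for a Gaussian interference dip centred on
    the round-trip delay introduced by the target surface."""
    x = (delay - dip_center) / dip_width
    return 0.5 * (1.0 - visibility * np.exp(-x**2))

# A 70 um effective impulse response maps to a round-trip timing width of
# dt = 2*dz/c ~ 0.47 ps (depth and delay are related by dz = c*dt/2).
dip_width = 2 * 70e-6 / C       # ~0.47 ps
true_depth = 1.2345e-3          # unknown surface offset to recover, metres
true_delay = 2 * true_depth / C

# Scan the reference-arm delay and locate the dip minimum.
delays = np.linspace(0.0, 20e-12, 4001)
rates = coincidence_rate(delays, true_delay, dip_width)
est_depth = C * delays[np.argmin(rates)] / 2
print(f"recovered depth: {est_depth * 1e3:.4f} mm "
      f"(true: {true_depth * 1e3:.4f} mm)")
```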
Related papers
- MAROON: A Framework for the Joint Characterization of Near-Field High-Resolution Radar and Optical Depth Imaging Techniques [4.816237933371206]
We take on the unique challenge of characterizing depth imagers from both the optical and the radio-frequency domain.
We provide a comprehensive evaluation of their depth measurements with respect to distinct object materials, geometries, and object-to-sensor distances.
All object measurements will be made public in the form of a multimodal dataset called MAROON.
arXiv Detail & Related papers (2024-11-01T11:53:10Z)
- Cross-spectral Gated-RGB Stereo Depth Estimation [34.31592077757453]
Gated cameras flood-illuminate a scene and capture its time-gated impulse response.
We propose a novel stereo-depth estimation method that is capable of exploiting these multi-modal multi-view depth cues.
The proposed method achieves accurate depth at long ranges, outperforming the next best existing method by 39% in MAE for ranges of 100 to 220 m on accumulated LiDAR ground truth.
arXiv Detail & Related papers (2024-05-21T13:10:43Z)
- Sparse Points to Dense Clouds: Enhancing 3D Detection with Limited LiDAR Data [68.18735997052265]
We propose a balanced approach that combines the advantages of monocular and point cloud-based 3D detection.
Our method requires only a small number of 3D points, which can be obtained from a low-cost, low-resolution sensor.
The accuracy of 3D detection improves by 20% compared to the state-of-the-art monocular detection methods.
arXiv Detail & Related papers (2024-04-10T03:54:53Z)
- Passive superresolution imaging of incoherent objects [63.942632088208505]
The method consists of measuring the field's spatial-mode components in the image plane in the overcomplete basis of Hermite-Gaussian modes and their superpositions.
A deep neural network is then used to reconstruct the object from these measurements (a toy sketch of the mode measurement follows this entry).
arXiv Detail & Related papers (2023-04-19T15:53:09Z)
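
To make the mode-measurement idea in the entry above concrete, here is a small numerical sketch of the power falling into the first few Hermite-Gaussian (HG) modes for two incoherent point sources separated by much less than the point-spread function (PSF) width. The 1D Gaussian PSF, equal-brightness sources, and numerical overlap integrals are illustrative assumptions; the paper's actual mode sorter and deep-network reconstruction are not reproduced here.

```python
# Sketch of an HG spatial-mode measurement for passive superresolution.
# Assumes a 1D Gaussian PSF and two equal, incoherent point sources.
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

w = 1.0                                  # PSF width (arbitrary unit)
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]

def hg_mode(n, x, w=1.0):
    """Normalised 1D Hermite-Gaussian mode of order n."""
    coeffs = [0.0] * n + [1.0]           # selects the n-th Hermite polynomial
    norm = 1.0 / sqrt(2.0**n * factorial(n) * w * sqrt(pi))
    return norm * hermval(x / w, coeffs) * np.exp(-x**2 / (2 * w**2))

def mode_powers(separation, n_modes=4):
    """Power detected in each HG mode for two incoherent point sources."""
    powers = np.zeros(n_modes)
    for x0 in (-separation / 2, +separation / 2):
        psf_amp = hg_mode(0, x - x0, w)  # shifted Gaussian amplitude
        for n in range(n_modes):
            overlap = np.sum(hg_mode(n, x, w) * psf_amp) * dx
            powers[n] += 0.5 * overlap**2   # incoherent: add probabilities
    return powers

for d in (0.1, 0.5):                     # separations well below the PSF width
    print(f"d = {d}: mode powers = {np.round(mode_powers(d), 5)}")
```

Even at separations far below the PSF width, the odd-order mode powers change measurably (the HG1 power grows quadratically with the separation), which is the signal the paper's neural network learns to invert.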
- Boosting 3D Object Detection by Simulating Multimodality on Point Clouds [51.87740119160152]
This paper presents a new approach to boost a single-modality (LiDAR) 3D object detector by teaching it to simulate features and responses that follow a multi-modality (LiDAR-image) detector.
The approach needs LiDAR-image data only when training the single-modality detector, and once well-trained, it only needs LiDAR data at inference.
Experimental results on the nuScenes dataset show that our approach outperforms all SOTA LiDAR-only 3D detectors.
arXiv Detail & Related papers (2022-06-30T01:44:30Z)
- Joint Learning of Salient Object Detection, Depth Estimation and Contour Extraction [91.43066633305662]
We propose a novel multi-task and multi-modal filtered transformer (MMFT) network for RGB-D salient object detection (SOD).
Specifically, we unify three complementary tasks: depth estimation, salient object detection and contour estimation. The multi-task mechanism promotes the model to learn the task-aware features from the auxiliary tasks.
Experiments show that it not only significantly surpasses the depth-based RGB-D SOD methods on multiple datasets, but also precisely predicts a high-quality depth map and salient contour at the same time.
arXiv Detail & Related papers (2022-03-09T17:20:18Z)
- Low dosage 3D volume fluorescence microscopy imaging using compressive sensing [0.0]
We present a compressive sensing (CS) based approach to fully reconstruct 3D volumes at the same signal-to-noise ratio (SNR) with less than half of the excitation dosage (a minimal recovery sketch follows this entry).
We demonstrate our technique by capturing a 3D volume of the RFP-labeled neurons in the zebrafish embryo spinal cord with an axial sampling of 0.1 μm using a confocal microscope.
The developed CS-based methodology in this work can be easily applied to other deep imaging modalities such as two-photon and light-sheet microscopy, where reducing sample photo-toxicity is a critical challenge.
arXiv Detail & Related papers (2022-01-03T18:44:50Z)
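
As referenced in the compressive-sensing entry above, here is a minimal sparse-recovery sketch using iterative soft-thresholding (ISTA) on a generic 1D signal. The random sensing matrix, sparsity level, and regularisation weight are arbitrary illustrative choices; the paper's actual 3D acquisition scheme and solver are not specified here.

```python
# Minimal compressive-sensing recovery sketch: ISTA for the LASSO problem
# min_x 0.5*||A x - y||^2 + lam*||x||_1. Illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 256, 96, 8                       # signal length, measurements, nonzeros
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x_true                             # sub-sampled measurements

def ista(A, y, lam=0.01, n_iter=500):
    """Iterative soft-thresholding: a gradient step on the data term,
    then shrinkage to promote sparsity."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

x_hat = ista(A, y)
print("relative reconstruction error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```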
- SGM3D: Stereo Guided Monocular 3D Object Detection [62.11858392862551]
We propose a stereo-guided monocular 3D object detection network, termed SGM3D.
We exploit robust 3D features extracted from stereo images to enhance the features learned from the monocular image.
Our method can be integrated into many other monocular approaches to boost performance without introducing any extra computational cost.
arXiv Detail & Related papers (2021-12-03T13:57:14Z)
- LiDARTouch: Monocular metric depth estimation with a few-beam LiDAR [40.98198236276633]
Vision-based depth estimation is a key feature in autonomous systems.
In such a monocular setup, dense depth is typically obtained with additional input from one or several expensive LiDARs.
In this paper, we propose a new alternative: densely estimating metric depth by combining a monocular camera with a lightweight LiDAR.
arXiv Detail & Related papers (2021-09-08T12:06:31Z)
- Axial-to-lateral super-resolution for 3D fluorescence microscopy using unsupervised deep learning [19.515134844947717]
We present a deep-learning-enabled unsupervised super-resolution technique that enhances anisotropic images in fluorescence microscopy.
Our method greatly reduces the effort required to put it into practice, as training the network requires as little as a single 3D image stack.
We demonstrate that the trained network not only enhances axial resolution beyond the diffraction limit, but also enhances suppressed visual details between the imaging planes and removes imaging artifacts.
arXiv Detail & Related papers (2021-04-19T16:31:12Z)
- Depth Sensing Beyond LiDAR Range [84.19507822574568]
We propose a novel three-camera system that utilizes small-field-of-view cameras.
Our system, along with our novel algorithm for computing metric depth, does not require full pre-calibration.
It can output dense depth maps with practically acceptable accuracy for scenes and objects at long distances.
arXiv Detail & Related papers (2020-04-07T00:09:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.