Single-pixel 3D imaging based on fusion temporal data of single photon
detector and millimeter-wave radar
- URL: http://arxiv.org/abs/2312.12439v1
- Date: Fri, 20 Oct 2023 13:03:48 GMT
- Authors: Tingqin Lai, Xiaolin Liang, Yi Zhu, Xinyi Wu, Lianye Liao, Xuelin
Yuan, Ping Su and Shihai Sun
- Abstract summary: This paper proposes a fusion-data-based 3D imaging method that utilizes a single-pixel single-photon detector and a millimeter-wave radar.
The 3D information is reconstructed from the one-dimensional fusion temporal data using an Artificial Neural Network (ANN).
- Score: 18.68262179213498
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, there has been increased attention towards 3D imaging using
single-pixel single-photon detection (also known as temporal data) due to its
potential advantages in terms of cost and power efficiency. However, to
eliminate the symmetry blur in the reconstructed images, a fixed background is
required. This paper proposes a fusion-data-based 3D imaging method that
utilizes a single-pixel single-photon detector and a millimeter-wave radar to
capture temporal histograms of a scene from multiple perspectives.
Subsequently, the 3D information can be reconstructed from the one-dimensional
fusion temporal data using an Artificial Neural Network (ANN). Both the
simulation and experimental results demonstrate that our fusion method
effectively eliminates symmetry blur and improves the quality of the
reconstructed images.
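The reconstruction step described in the abstract — concatenating the single-pixel photon-arrival histogram with the radar return into one 1-D vector and mapping it to a depth map with an ANN — can be sketched as below. This is a minimal illustration, not the authors' architecture: the histogram lengths, hidden width, output resolution, and random weights are all assumptions for demonstration; a real system would train the network on simulated or measured scenes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: a single-pixel SPAD temporal histogram and a
# millimeter-wave radar range profile (bin counts are illustrative).
spad_hist = rng.random(256)    # photon-arrival histogram
radar_prof = rng.random(64)    # radar range profile

# Fuse the two measurements by concatenation into one 1-D vector,
# as the abstract describes.
x = np.concatenate([spad_hist, radar_prof])  # shape (320,)

# Minimal two-layer MLP mapping the fused vector to a 16x16 depth map.
# Weights are random here; in practice they would be learned.
W1 = rng.standard_normal((320, 128)) * 0.05
b1 = np.zeros(128)
W2 = rng.standard_normal((128, 256)) * 0.05
b2 = np.zeros(256)

h = np.maximum(0.0, x @ W1 + b1)       # ReLU hidden layer
depth = (h @ W2 + b2).reshape(16, 16)  # reconstructed depth map

print(depth.shape)  # (16, 16)
```

The key design point carried over from the paper is that the radar return supplies the asymmetric side information a single fixed-viewpoint histogram lacks, which is what lets the network disambiguate the symmetry blur.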
Related papers
- Transientangelo: Few-Viewpoint Surface Reconstruction Using Single-Photon Lidar [8.464054039931245]
Lidar captures 3D scene geometry by emitting pulses of light to a target and recording the speed-of-light time delay of the reflected light.
However, conventional lidar systems do not output the raw, captured waveforms of backscattered light.
We develop new regularization strategies that improve robustness to photon noise, enabling accurate surface reconstruction with as few as 10 photons per pixel.
arXiv Detail & Related papers (2024-08-22T08:12:09Z)
- Towards 3D Vision with Low-Cost Single-Photon Cameras [24.711165102559438]
We present a method for reconstructing the 3D shape of arbitrary Lambertian objects from measurements by miniature, energy-efficient, low-cost single-photon cameras.
Our work draws a connection between image-based modeling and active range scanning and is a step towards 3D vision with single-photon cameras.
arXiv Detail & Related papers (2024-03-26T15:40:05Z)
- $PC^2$: Projection-Conditioned Point Cloud Diffusion for Single-Image 3D Reconstruction [97.06927852165464]
Reconstructing the 3D shape of an object from a single RGB image is a long-standing and highly challenging problem in computer vision.
We propose a novel method for single-image 3D reconstruction which generates a sparse point cloud via a conditional denoising diffusion process.
arXiv Detail & Related papers (2023-02-21T13:37:07Z)
- DETR4D: Direct Multi-View 3D Object Detection with Sparse Attention [50.11672196146829]
3D object detection with surround-view images is an essential task for autonomous driving.
We propose DETR4D, a Transformer-based framework that explores sparse attention and direct feature query for 3D object detection in multi-view images.
arXiv Detail & Related papers (2022-12-15T14:18:47Z)
- Bridging the View Disparity of Radar and Camera Features for Multi-modal Fusion 3D Object Detection [6.959556180268547]
This paper focuses on how to utilize millimeter-wave (MMW) radar and camera sensor fusion for 3D object detection.
A novel method that realizes feature-level fusion under bird's-eye view (BEV) for a better feature representation is proposed.
arXiv Detail & Related papers (2022-08-25T13:21:37Z)
- Leveraging Monocular Disparity Estimation for Single-View Reconstruction [8.583436410810203]
We leverage advances in monocular depth estimation to obtain disparity maps.
We transform 2D normalized disparity maps into 3D point clouds by solving an optimization on the relevant camera parameters.
arXiv Detail & Related papers (2022-07-01T03:05:40Z)
- DH-GAN: A Physics-driven Untrained Generative Adversarial Network for 3D Microscopic Imaging using Digital Holography [3.4635026053111484]
Digital holography is a 3D imaging technique that emits a laser beam with a plane wavefront at an object and measures the intensity of the diffracted waveform, called a hologram.
Recently, deep learning (DL) methods have been used for more accurate holographic processing.
We propose a new DL architecture based on generative adversarial networks that uses a discriminative network for realizing a semantic measure for reconstruction quality.
arXiv Detail & Related papers (2022-05-25T17:13:45Z)
- Enhancement of Novel View Synthesis Using Omnidirectional Image Completion [61.78187618370681]
We present a method for synthesizing novel views from a single 360-degree RGB-D image based on the neural radiance field (NeRF).
Experiments demonstrated that the proposed method can synthesize plausible novel views while preserving the features of the scene for both artificial and real-world data.
arXiv Detail & Related papers (2022-03-18T13:49:25Z)
- Neural Radiance Fields Approach to Deep Multi-View Photometric Stereo [103.08512487830669]
We present a modern solution to the multi-view photometric stereo problem (MVPS).
We procure the surface orientation using a photometric stereo (PS) image formation model and blend it with a multi-view neural radiance field representation to recover the object's surface geometry.
Our method performs neural rendering of multi-view images while utilizing surface normals estimated by a deep photometric stereo network.
arXiv Detail & Related papers (2021-10-11T20:20:03Z)
- Single-shot Hyperspectral-Depth Imaging with Learned Diffractive Optics [72.9038524082252]
We propose a compact single-shot monocular hyperspectral-depth (HS-D) imaging method.
Our method uses a diffractive optical element (DOE), the point spread function of which changes with respect to both depth and spectrum.
To facilitate learning the DOE, we present the first HS-D dataset, built with a benchtop HS-D imager.
arXiv Detail & Related papers (2020-09-01T14:19:35Z)
- Multi-View Photometric Stereo: A Robust Solution and Benchmark Dataset for Spatially Varying Isotropic Materials [65.95928593628128]
We present a method to capture both 3D shape and spatially varying reflectance with a multi-view photometric stereo technique.
Our algorithm is suitable for perspective cameras and nearby point light sources.
arXiv Detail & Related papers (2020-01-18T12:26:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.