A direct time-of-flight image sensor with in-pixel surface detection and dynamic vision
- URL: http://arxiv.org/abs/2209.11772v1
- Date: Fri, 23 Sep 2022 14:38:00 GMT
- Title: A direct time-of-flight image sensor with in-pixel surface detection and dynamic vision
- Authors: Istvan Gyongy, Ahmet T. Erdogan, Neale A.W. Dutton, Germán Mora Martín, Alistair Gorman, Hanning Mai, Francesco Mattioli Della Rocca, Robert K. Henderson
- Abstract summary: 3D flash LIDAR is an alternative to the traditional scanning LIDAR systems, promising precise depth imaging in a compact form factor.
We present a 64x32 pixel (256x128 SPAD) dToF imager that overcomes the photon-processing bottlenecks of such receivers by using pixels with embedded histogramming.
This reduces the size of output data frames considerably, enabling maximum frame rates in the 10 kFPS range or 100 kFPS for direct depth readings.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D flash LIDAR is an alternative to the traditional scanning LIDAR systems,
promising precise depth imaging in a compact form factor, and free of moving
parts, for applications such as self-driving cars, robotics and augmented
reality (AR). Typically implemented using single-photon, direct time-of-flight
(dToF) receivers in image sensor format, the operation of the devices can be
hindered by the large number of photon events needing to be processed and
compressed in outdoor scenarios, limiting frame rates and scalability to larger
arrays. We here present a 64x32 pixel (256x128 SPAD) dToF imager that overcomes
these limitations by using pixels with embedded histogramming, which lock onto
and track the return signal. This reduces the size of output data frames
considerably, enabling maximum frame rates in the 10 kFPS range or 100 kFPS for
direct depth readings. The sensor offers selective readout of pixels detecting
surfaces, or those sensing motion, leading to reduced power consumption and
off-chip processing requirements. We demonstrate the application of the sensor
in mid-range LIDAR.
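The lock-and-track histogramming idea lends itself to a compact simulation. The Python sketch below is illustrative only: the bin widths, window sizes, and the two-step coarse/fine scheme are assumptions, not the sensor's actual design. It shows how a per-pixel timing histogram can lock onto the return peak and then report a single depth value instead of a full histogram, which is what keeps the output frames small:

```python
# Illustrative sketch (not the sensor's firmware): a per-pixel dToF
# histogram that locks onto the return peak, then reads out only a
# narrow window around it. Bin counts and widths are assumptions.
import numpy as np

rng = np.random.default_rng(0)

C = 3e8          # speed of light (m/s)
BIN_NS = 2.0     # coarse bin width in ns (assumed)
N_COARSE = 16    # coarse bins spanning the full range (assumed)
N_FINE = 16      # fine bins inside the locked window (assumed)

def photon_timestamps(depth_m, n_signal=40, n_noise=200, t_max_ns=32.0):
    """Simulate one exposure: a Gaussian return at the target's
    time-of-flight plus uniform background photons (e.g. sunlight)."""
    tof_ns = 2 * depth_m / C * 1e9
    signal = rng.normal(tof_ns, 0.3, n_signal)
    noise = rng.uniform(0.0, t_max_ns, n_noise)
    return np.concatenate([signal, noise])

def coarse_lock(ts):
    """Step 1: full-range coarse histogram; lock onto the strongest bin."""
    hist, edges = np.histogram(ts, bins=N_COARSE, range=(0.0, N_COARSE * BIN_NS))
    peak = int(np.argmax(hist))
    return edges[peak], edges[peak + 1]

def fine_depth(ts, lo, hi):
    """Step 2: fine histogram inside the locked window only; return a
    single centroid-based depth (a 'direct depth reading') instead of
    the whole histogram."""
    sel = ts[(ts >= lo) & (ts < hi)]
    hist, edges = np.histogram(sel, bins=N_FINE, range=(lo, hi))
    centers = 0.5 * (edges[:-1] + edges[1:])
    tof_ns = np.average(centers, weights=hist)  # weighted centroid
    return tof_ns * 1e-9 * C / 2.0              # ns -> metres

ts = photon_timestamps(depth_m=2.5)
lo, hi = coarse_lock(ts)
print(f"locked window {lo:.0f}-{hi:.0f} ns, depth ~ {fine_depth(ts, lo, hi):.2f} m")
```

In the sensor itself this logic runs in-pixel, which is why only pixels that actually detect a surface (or sense motion) need to be read out at all.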
Related papers
- Energy-Efficient & Real-Time Computer Vision with Intelligent Skipping via Reconfigurable CMOS Image Sensors [5.824962833043625]
Video-based computer vision applications typically suffer from high energy consumption due to reading and processing all pixels in a frame, regardless of their significance.
Previous works have attempted to reduce this energy by skipping input patches or pixels and using feedback from the end task to guide the skipping algorithm.
This paper presents a custom-designed CMOS image sensor (CIS) system that improves energy efficiency by selectively skipping uneventful regions or rows within a frame during the sensor's readout phase.
arXiv Detail & Related papers (2024-09-25T20:32:55Z)
- ExBluRF: Efficient Radiance Fields for Extreme Motion Blurred Images [58.24910105459957]
We present ExBluRF, a novel view synthesis method for extreme motion blurred images.
Our approach consists of two main components: 6-DOF camera trajectory-based motion blur formulation and voxel-based radiance fields.
Compared with existing works, our approach restores much sharper 3D scenes with an order of magnitude less training time and GPU memory consumption.
arXiv Detail & Related papers (2023-09-16T11:17:25Z)
- Multi-Modal Neural Radiance Field for Monocular Dense SLAM with a Light-Weight ToF Sensor [58.305341034419136]
We present the first dense SLAM system with a monocular camera and a light-weight ToF sensor.
We propose a multi-modal implicit scene representation that supports rendering the signals from both the RGB camera and the light-weight ToF sensor.
Experiments demonstrate that our system effectively exploits the signals of light-weight ToF sensors and achieves competitive results.
arXiv Detail & Related papers (2023-08-28T07:56:13Z)
- Video super-resolution for single-photon LIDAR [0.0]
3D Time-of-Flight (ToF) image sensors are used widely in applications such as self-driving cars, Augmented Reality (AR) and robotics.
In this paper, we use synthetic depth sequences to train a 3D Convolutional Neural Network (CNN) for denoising and upscaling (x4) depth data.
With GPU acceleration, frames are processed at >30 frames per second, making the approach suitable for low-latency imaging, as required for obstacle avoidance.
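As a rough illustration of the kind of model this entry describes, here is a minimal PyTorch sketch of a 3D CNN that consumes a short window of noisy low-resolution depth frames and emits one denoised frame upscaled x4. The layer sizes, temporal pooling, and PixelShuffle upsampling are assumptions for illustration, not the paper's actual architecture:

```python
# Hedged sketch of a 3D CNN for depth-video denoising and x4 upscaling.
# Architecture details are illustrative assumptions, not the paper's.
import torch
import torch.nn as nn

class DepthSR3D(nn.Module):
    def __init__(self, base=32, scale=4):
        super().__init__()
        # 3D convolutions mix information across time and space.
        self.features = nn.Sequential(
            nn.Conv3d(1, base, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(base, base, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        # Collapse the temporal axis, then upscale spatially.
        self.temporal_pool = nn.AdaptiveAvgPool3d((1, None, None))
        self.upscale = nn.Sequential(
            nn.Conv2d(base, scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),  # (B, s*s, H, W) -> (B, 1, s*H, s*W)
        )

    def forward(self, x):                      # x: (B, 1, T, H, W) noisy depth
        f = self.features(x)
        f = self.temporal_pool(f).squeeze(2)   # (B, base, H, W)
        return self.upscale(f)                 # (B, 1, 4H, 4W)

# Example: 8-frame windows of 64x32 depth maps -> 256x128 output,
# matching the SPAD resolutions mentioned on this page.
net = DepthSR3D()
out = net(torch.randn(2, 1, 8, 32, 64))
print(out.shape)  # torch.Size([2, 1, 128, 256])
```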
arXiv Detail & Related papers (2022-10-19T11:33:29Z)
- Real-Time Optical Flow for Vehicular Perception with Low- and High-Resolution Event Cameras [3.845877724862319]
Event cameras capture changes of illumination in the observed scene rather than accumulating light to create images.
We propose an optimized framework for computing optical flow in real-time with both low- and high-resolution event cameras.
We evaluate our approach on both low- and high-resolution driving sequences, and show that it often achieves better results than the current state of the art.
arXiv Detail & Related papers (2021-12-20T15:09:20Z)
- Event Guided Depth Sensing [50.997474285910734]
We present an efficient bio-inspired event-camera-driven depth estimation algorithm.
In our approach, we illuminate areas of interest densely, depending on the scene activity detected by the event camera.
We show the feasibility of our approach on simulated autonomous driving sequences and in real indoor environments.
arXiv Detail & Related papers (2021-10-20T11:41:11Z)
- FOVEA: Foveated Image Magnification for Autonomous Navigation [53.69803081925454]
We propose an attentional approach that elastically magnifies certain regions while maintaining a small input canvas.
On the autonomous driving datasets Argoverse-HD and BDD100K, our proposed method boosts detection AP over standard Faster R-CNN, both with and without finetuning.
arXiv Detail & Related papers (2021-08-27T03:07:55Z)
- TUM-VIE: The TUM Stereo Visual-Inertial Event Dataset [50.8779574716494]
Event cameras are bio-inspired vision sensors which measure per pixel brightness changes.
They offer numerous benefits over traditional, frame-based cameras, including low latency, high dynamic range, high temporal resolution and low power consumption.
To foster the development of 3D perception and navigation algorithms with event cameras, we present the TUM-VIE dataset.
arXiv Detail & Related papers (2021-08-16T19:53:56Z)
- High-speed object detection with a single-photon time-of-flight image sensor [2.648554238948439]
We present results from a portable SPAD camera system that outputs 16-bin photon timing histograms with 64x32 spatial resolution.
The results are relevant for safety-critical computer vision applications which would benefit from faster than human reaction times.
arXiv Detail & Related papers (2021-07-28T14:53:44Z)
- Time-Multiplexed Coded Aperture Imaging: Learned Coded Aperture and Pixel Exposures for Compressive Imaging Systems [56.154190098338965]
We show that our proposed time multiplexed coded aperture (TMCA) can be optimized end-to-end.
TMCA induces better coded snapshots enabling superior reconstructions in two different applications: compressive light field imaging and hyperspectral imaging.
This codification outperforms the state-of-the-art compressive imaging systems by more than 4dB in those applications.
arXiv Detail & Related papers (2021-04-06T22:42:34Z)
- Agile Reactive Navigation for A Non-Holonomic Mobile Robot Using A Pixel Processor Array [22.789108850681146]
This paper presents an agile reactive navigation strategy for driving a non-holonomic ground vehicle around a preset course of gates in a cluttered environment using a low-cost processor array sensor.
We demonstrate a small ground vehicle running through or avoiding multiple gates at high speed using minimal computational resources.
arXiv Detail & Related papers (2020-09-27T09:11:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated summaries (including all information) and is not responsible for any consequences.