iToF2dToF: A Robust and Flexible Representation for Data-Driven
Time-of-Flight Imaging
- URL: http://arxiv.org/abs/2103.07087v1
- Date: Fri, 12 Mar 2021 04:57:52 GMT
- Title: iToF2dToF: A Robust and Flexible Representation for Data-Driven
Time-of-Flight Imaging
- Authors: Felipe Gutierrez-Barragan, Huaijin Chen, Mohit Gupta, Andreas Velten,
Jinwei Gu
- Abstract summary: Indirect Time-of-Flight (iToF) cameras are a promising depth sensing technology.
They are prone to errors caused by multi-path interference (MPI) and low signal-to-noise ratio (SNR).
- Score: 26.17890136713725
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Indirect Time-of-Flight (iToF) cameras are a promising depth sensing
technology. However, they are prone to errors caused by multi-path interference
(MPI) and low signal-to-noise ratio (SNR). Traditional methods, after
denoising, mitigate MPI by estimating a transient image that encodes depths.
Recently, data-driven methods that jointly denoise and mitigate MPI have become
state-of-the-art without using the intermediate transient representation. In
this paper, we propose to revisit the transient representation. Using
data-driven priors, we interpolate/extrapolate iToF frequencies and use them to
estimate the transient image. Since direct ToF (dToF) sensors capture transient
images, we name our method iToF2dToF. The transient representation is flexible.
It can be integrated with different rule-based depth sensing algorithms that
are robust to low SNR and can deal with ambiguous scenarios that arise in
practice (e.g., specular MPI, optical cross-talk). We demonstrate the benefits
of iToF2dToF over previous methods in real depth sensing scenarios.
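The iToF frequencies the paper interpolates/extrapolates are correlation measurements whose phase encodes depth. As background only, the sketch below shows the textbook single-frequency iToF decoding chain (4-tap phase estimation, phase-to-depth conversion, and the unambiguous range that causes the phase-wrapping ambiguity the abstract alludes to); this is a standard-model illustration with hypothetical function names, not the paper's learned iToF2dToF pipeline.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_phase(c0, c1, c2, c3):
    """Correlation phase from the standard 4-tap iToF measurement.

    c0..c3 are correlation samples taken at 0, 90, 180, and 270 degrees
    (one common sign convention; sensors differ).
    """
    return math.atan2(c1 - c3, c0 - c2) % (2.0 * math.pi)

def phase_to_depth(phase_rad, freq_hz):
    """Depth from a single-frequency phase: d = c * phi / (4 * pi * f)."""
    return C * phase_rad / (4.0 * math.pi * freq_hz)

def unambiguous_range(freq_hz):
    """Depths beyond c / (2f) wrap around, causing phase ambiguity."""
    return C / (2.0 * freq_hz)
```

At a typical 20 MHz modulation frequency the unambiguous range is about 7.5 m, which is why multi-frequency capture (and, in this paper, learned frequency interpolation/extrapolation) is needed for robust depth recovery.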
Related papers
- Consistent Time-of-Flight Depth Denoising via Graph-Informed Geometric Attention [5.196236145367301]
We propose a novel ToF depth denoising network leveraging motion-invariant graph fusion. Despite depth shifts across frames, graph structures exhibit temporal self-similarity, enabling cross-frame geometric attention for graph fusion. The proposed scheme achieves state-of-the-art accuracy and consistency on the synthetic DVToF dataset and exhibits robust generalization on the real Kinectv2 dataset.
arXiv Detail & Related papers (2025-06-30T06:29:24Z)
- Learnable Burst-Encodable Time-of-Flight Imaging for High-Fidelity Long-Distance Depth Sensing [7.645012220983793]
Long-distance depth imaging holds great promise for applications such as autonomous driving and robotics. Direct time-of-flight (dToF) imaging offers high-precision, long-distance depth sensing, yet demands ultra-short pulse light sources and high-resolution time-to-digital converters. We introduce a novel ToF imaging paradigm, termed Burst-Encodable Time-of-Flight (BE-ToF), which enables high-fidelity, long-distance depth imaging.
arXiv Detail & Related papers (2025-05-28T06:46:43Z)
- FUSE: Label-Free Image-Event Joint Monocular Depth Estimation via Frequency-Decoupled Alignment and Degradation-Robust Fusion [63.87313550399871]
Image-event joint depth estimation methods leverage complementary modalities for robust perception, yet face challenges in generalizability.
We propose a Self-supervised Transfer (PST) paradigm and a Frequency-Decoupled Fusion module (FreDF).
PST establishes cross-modal knowledge transfer through latent space alignment with image foundation models.
FreDF explicitly decouples high-frequency edge features from low-frequency structural components, resolving modality-specific frequency mismatches.
arXiv Detail & Related papers (2025-03-25T15:04:53Z)
- Fried deconvolution [0.0]
We present a new approach to deblurring the effects of atmospheric turbulence in long-range imaging.
Our method is based on an analytical formulation, the Fried kernel, of the atmosphere modulation transfer function (MTF) and a framelet based deconvolution algorithm.
arXiv Detail & Related papers (2024-11-05T08:04:43Z)
- bit2bit: 1-bit quanta video reconstruction via self-supervised photon prediction [57.199618102578576]
We propose bit2bit, a new method for reconstructing high-quality image stacks at the original resolution from sparse binary quanta image data.
Inspired by recent work on Poisson denoising, we developed an algorithm that creates a dense image sequence from sparse binary photon data.
We present a novel dataset containing a wide range of real SPAD high-speed videos under various challenging imaging conditions.
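For context on entries like this one: 1-bit quanta frames are classically converted to intensity estimates by inverting the Poisson detection model, where a pixel fires with probability p = 1 - exp(-H) per frame. The sketch below shows this textbook maximum-likelihood baseline only, not the bit2bit network; the function name is hypothetical.

```python
import math

def quanta_mle(binary_frames):
    """Per-pixel ML photon-flux estimate from 1-bit quanta frames.

    binary_frames: list of frames, each a list of rows of 0/1 detections.
    Under a Poisson model the detection probability is p = 1 - exp(-H),
    so the ML estimate from the empirical rate p_hat is H = -ln(1 - p_hat).
    """
    n = len(binary_frames)
    rows, cols = len(binary_frames[0]), len(binary_frames[0][0])
    flux = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            p_hat = sum(f[r][c] for f in binary_frames) / n
            # Clamp to avoid log(0) when a pixel fired in every frame.
            p_hat = min(p_hat, 1.0 - 1e-9)
            flux[r][c] = -math.log(1.0 - p_hat)
    return flux
```

A pixel that fires in half the frames yields a flux of ln 2 ≈ 0.693 photons per frame; data-driven methods such as bit2bit aim to beat this per-pixel baseline by exploiting spatiotemporal structure.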
arXiv Detail & Related papers (2024-10-30T17:30:35Z)
- Misalignment-Robust Frequency Distribution Loss for Image Transformation [51.0462138717502]
This paper aims to address a common challenge in deep learning-based image transformation methods, such as image enhancement and super-resolution.
We introduce a novel and simple Frequency Distribution Loss (FDL) for computing distribution distance within the frequency domain.
Our method is empirically proven effective as a training constraint due to the thoughtful utilization of global information in the frequency domain.
arXiv Detail & Related papers (2024-02-28T09:27:41Z)
- Microseismic source imaging using physics-informed neural networks with hard constraints [4.07926531936425]
We propose a direct microseismic imaging framework based on physics-informed neural networks (PINNs)
We use the PINNs to represent a multi-frequency wavefield and then apply inverse Fourier transform to extract the source image.
We further apply our method to hydraulic fracturing monitoring field data, and demonstrate that our method can correctly image the source with fewer artifacts.
arXiv Detail & Related papers (2023-04-09T21:10:39Z)
- Representing Noisy Image Without Denoising [91.73819173191076]
Fractional-order Moments in Radon space (FMR) is designed to derive robust representation directly from noisy images.
Unlike earlier integer-order methods, our work is a more generic design taking such classical methods as special cases.
arXiv Detail & Related papers (2023-01-18T10:13:29Z)
- Deep Dynamic Scene Deblurring from Optical Flow [53.625999196063574]
Deblurring produces visually more pleasing pictures and makes photography more convenient.
Non-uniform blur is difficult to model mathematically.
We develop a convolutional neural network (CNN) to restore sharp images from the deblurred features.
arXiv Detail & Related papers (2023-01-18T06:37:21Z)
- Weakly-Supervised Optical Flow Estimation for Time-of-Flight [11.496094830445054]
We propose a training algorithm that allows supervising Optical Flow networks directly on the reconstructed depth.
We demonstrate that this approach enables the training of OF networks to align raw iToF measurements and compensate motion artifacts in the iToF depth images.
arXiv Detail & Related papers (2022-10-11T09:47:23Z)
- Wild ToFu: Improving Range and Quality of Indirect Time-of-Flight Depth with RGB Fusion in Challenging Environments [56.306567220448684]
We propose a new learning-based end-to-end depth prediction network that takes noisy raw I-ToF signals as well as an RGB image as input.
We show more than 40% RMSE improvement on the final depth map compared to the baseline approach.
arXiv Detail & Related papers (2021-12-07T15:04:14Z)
- Lightweight Deep Learning Architecture for MPI Correction and Transient Reconstruction [19.040317739792787]
Indirect Time-of-Flight cameras (iToF) are low-cost devices that provide depth images at an interactive frame rate.
They are affected by several error sources, the most prominent being Multi-Path Interference (MPI).
Common data-driven approaches tend to focus on a direct estimation of the output depth values, ignoring the underlying transient propagation of the light in the scene.
We propose a very compact architecture that leverages the direct-global subdivision of transient information to remove MPI and to reconstruct the transient information itself.
arXiv Detail & Related papers (2021-11-29T09:31:35Z)
- Deep Unfolded Recovery of Sub-Nyquist Sampled Ultrasound Image [94.42139459221784]
We propose a reconstruction method from sub-Nyquist samples in the time and spatial domain, that is based on unfolding the ISTA algorithm.
Our method allows reducing the number of array elements, sampling rate, and computational time while ensuring high quality imaging performance.
arXiv Detail & Related papers (2021-03-01T19:19:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.