Blind Time-of-Flight Imaging: Sparse Deconvolution on the Continuum with Unknown Kernels
- URL: http://arxiv.org/abs/2411.00893v1
- Date: Thu, 31 Oct 2024 22:42:02 GMT
- Title: Blind Time-of-Flight Imaging: Sparse Deconvolution on the Continuum with Unknown Kernels
- Authors: Ruiming Guo, Ayush Bhandari
- Abstract summary: Computational Time-of-Flight (ToF) imaging has emerged as an exciting and novel imaging modality.
This paper introduces a novel, blind ToF imaging technique that does not require kernel calibration and recovers sparse spikes on a continuum.
- Score: 8.59982222642104
- License:
- Abstract: In recent years, computational Time-of-Flight (ToF) imaging has emerged as an exciting and novel imaging modality that offers new and powerful interpretations of natural scenes, with applications extending to 3D, light-in-flight, and non-line-of-sight imaging. Mathematically, ToF imaging relies on algorithmic super-resolution, as the back-scattered sparse light echoes arrive at a finer time resolution than digital devices can capture. Traditional methods necessitate knowledge of the emitted light pulses or kernels and employ sparse deconvolution to recover scenes. Unlike previous approaches, this paper introduces a novel, blind ToF imaging technique that does not require kernel calibration and recovers sparse spikes on a continuum, rather than on a discrete grid. By studying the shared characteristics of various ToF modalities, we capitalize on the fact that most physical pulses approximately satisfy the Strang-Fix conditions from approximation theory. This leads to a new mathematical formulation for sparse super-resolution. Our recovery approach uses an optimization method that is pivoted on an alternating minimization strategy. We benchmark our blind ToF method against traditional kernel calibration methods, which serve as the baseline. Extensive hardware experiments across different ToF modalities demonstrate the algorithmic advantages, flexibility, and empirical robustness of our approach. We show that our work facilitates super-resolution in scenarios where distinguishing between closely spaced objects is challenging, while maintaining performance comparable to known-kernel situations. Examples of light-in-flight imaging and light-sweep videos highlight the practical benefits of our blind super-resolution method in enhancing the understanding of natural scenes.
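The abstract's "alternating minimization" strategy can be illustrated with a toy sketch: alternate between estimating a sparse spike train given a kernel estimate, and re-estimating the kernel given the spikes. This is a hypothetical, grid-based simplification for intuition only; the paper's actual method operates on the continuum using Strang-Fix kernel properties, and the function name and initialization below are assumptions, not the authors' code.

```python
import numpy as np

def blind_sparse_deconv(y, kernel_len, n_spikes, n_iters=20):
    """Toy alternating-minimization blind deconvolution on a discrete grid.

    Spike step: correlate with the current kernel estimate and keep the
    n_spikes largest peaks, refining amplitudes by least squares.
    Kernel step: least-squares fit of the kernel given the current spikes.
    """
    n = len(y)
    # Crude initial kernel guess: a narrow, unit-norm Gaussian pulse.
    t = np.arange(kernel_len) - kernel_len // 2
    h = np.exp(-t**2 / 2.0)
    h /= np.linalg.norm(h)
    x = np.zeros(n)
    for _ in range(n_iters):
        # Spike step: hard-threshold the correlation to pick a support.
        corr = np.correlate(y, h, mode="same")
        idx = np.argsort(np.abs(corr))[-n_spikes:]
        # Refine amplitudes by least squares on the selected support.
        A = np.stack(
            [np.convolve(np.eye(n)[i], h, mode="same") for i in idx], axis=1
        )
        amps, *_ = np.linalg.lstsq(A, y, rcond=None)
        x = np.zeros(n)
        x[idx] = amps
        # Kernel step: build a (circular) convolution matrix from the spikes.
        C = np.stack(
            [np.roll(x, s - kernel_len // 2) for s in range(kernel_len)], axis=1
        )
        h, *_ = np.linalg.lstsq(C, y, rcond=None)
        h /= np.linalg.norm(h) + 1e-12  # resolve the scale ambiguity
    return x, h
```

Note the unit-norm constraint on the kernel: blind deconvolution has an inherent scale ambiguity (scaling the spikes down and the kernel up leaves the measurements unchanged), so some normalization must be fixed at every iteration.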
Related papers
- XR-VIO: High-precision Visual Inertial Odometry with Fast Initialization for XR Applications [34.2082611110639]
This paper presents a novel approach to Visual Inertial Odometry (VIO) focusing on the initialization and feature matching modules.
Existing gyroscope-aided methods often suffer from poor stability in visual Structure from Motion (SfM) or from having to solve a huge number of parameters simultaneously.
By tightly coupling measurements, we enhance the robustness and accuracy of visual SfM.
In terms of feature matching, we introduce a hybrid method that combines optical flow and descriptor-based matching.
arXiv Detail & Related papers (2025-02-03T12:17:51Z)
- Multi-Frame Blind Manifold Deconvolution for Rotating Synthetic Aperture Imaging [4.19203497706834]
Rotating synthetic aperture (RSA) imaging system captures images of the target scene at different rotation angles by rotating a rectangular aperture.
Deblurring acquired RSA images plays a critical role in reconstructing a latent sharp image underlying the scene.
We propose a novel method to process RSA images using manifold fitting and penalisation in the context of blind deconvolution.
arXiv Detail & Related papers (2025-01-31T18:39:47Z)
- DAE-Fuse: An Adaptive Discriminative Autoencoder for Multi-Modality Image Fusion [10.713089596405053]
We propose DAE-Fuse, a novel two-phase discriminative autoencoder framework that generates sharp and natural fused images.
We pioneer the extension of image fusion techniques from static images to the video domain.
DAE-Fuse achieves state-of-the-art performance on multiple benchmarks, with superior generalizability to tasks like medical image fusion.
arXiv Detail & Related papers (2024-09-16T08:37:09Z)
- Self-Supervised Single-Image Deconvolution with Siamese Neural Networks [6.138671548064356]
Inverse problems in image reconstruction are fundamentally complicated by unknown noise properties.
Deep learning methods allow for flexible parametrization of the noise and learning its properties directly from the data.
We tackle this problem with Fast Fourier Transform convolutions that provide training speed-up in 3D deconvolution tasks.
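The speed-up mentioned above comes from the convolution theorem: convolution in the signal domain is pointwise multiplication of spectra, so an FFT-based convolution costs O(n log n) rather than the O(n·k) of a sliding-window sum. A minimal 1-D sketch (the paper applies the same idea to 3-D deconvolution inside network training):

```python
import numpy as np

def fft_convolve(a, b):
    """Linear convolution via FFT: zero-pad to the full output length,
    multiply spectra, and transform back."""
    n = len(a) + len(b) - 1          # full linear-convolution length
    A = np.fft.rfft(a, n)            # zero-padded spectra
    B = np.fft.rfft(b, n)
    return np.fft.irfft(A * B, n)    # matches np.convolve(a, b)
```

Zero-padding to the full output length is what makes the circular convolution computed by the FFT agree with the desired linear convolution.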
arXiv Detail & Related papers (2023-08-18T09:51:11Z)
- TensoIR: Tensorial Inverse Rendering [51.57268311847087]
TensoIR is a novel inverse rendering approach based on tensor factorization and neural fields.
It builds on TensoRF, a state-of-the-art approach for radiance field modeling.
arXiv Detail & Related papers (2023-04-24T21:39:13Z)
- Retinexformer: One-stage Retinex-based Transformer for Low-light Image Enhancement [96.09255345336639]
We formulate a principled One-stage Retinex-based Framework (ORF) to enhance low-light images.
ORF first estimates the illumination information to light up the low-light image and then restores the corruption to produce the enhanced image.
Our algorithm, Retinexformer, significantly outperforms state-of-the-art methods on thirteen benchmarks.
arXiv Detail & Related papers (2023-03-12T16:54:08Z)
- Learning Discriminative Shrinkage Deep Networks for Image Deconvolution [122.79108159874426]
We propose an effective non-blind deconvolution approach by learning discriminative shrinkage functions to implicitly model these terms.
Experimental results show that the proposed method performs favorably against the state-of-the-art ones in terms of efficiency and accuracy.
arXiv Detail & Related papers (2021-11-27T12:12:57Z)
- TimeLens: Event-based Video Frame Interpolation [54.28139783383213]
We introduce Time Lens, a novel method that leverages the advantages of both synthesis-based and flow-based approaches.
We show an up to 5.21 dB improvement in terms of PSNR over state-of-the-art frame-based and event-based methods.
arXiv Detail & Related papers (2021-06-14T10:33:47Z)
- DWDN: Deep Wiener Deconvolution Network for Non-Blind Image Deblurring [66.91879314310842]
We propose an explicit deconvolution process in a feature space by integrating a classical Wiener deconvolution framework with learned deep features.
A multi-scale cascaded feature refinement module then predicts the deblurred image from the deconvolved deep features.
We show that the proposed deep Wiener deconvolution network facilitates deblurred results with visibly fewer artifacts and quantitatively outperforms state-of-the-art non-blind image deblurring methods by a wide margin.
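The classical Wiener deconvolution framework that DWDN integrates can be sketched in a few lines: the inverse filter H*(f) / (|H(f)|² + N/S) attenuates frequencies dominated by noise instead of amplifying them. This is only the classical 1-D signal-space version (DWDN applies the idea to learned deep features), and the noise-to-signal-ratio parameterization below is a common textbook simplification:

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, noise_power, signal_power):
    """Classical Wiener deconvolution in the Fourier domain.

    The regularizing noise-to-signal ratio keeps the filter bounded
    where the kernel spectrum H is close to zero."""
    n = len(blurred)
    H = np.fft.fft(kernel, n)                 # zero-padded kernel spectrum
    Y = np.fft.fft(blurred)
    nsr = noise_power / signal_power          # noise-to-signal ratio
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener filter
    return np.real(np.fft.ifft(G * Y))
```

With zero noise and an identity (delta) kernel, the filter reduces to the plain inverse filter and returns the input unchanged; as the noise power grows, the filter smoothly suppresses ill-conditioned frequencies.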
arXiv Detail & Related papers (2021-03-18T00:38:11Z)
- Deep Variational Network Toward Blind Image Restoration [60.45350399661175]
Blind image restoration is a common yet challenging problem in computer vision.
We propose a novel blind image restoration method that aims to integrate the advantages of both model-based and learning-based approaches.
Experiments on two typical blind IR tasks, namely image denoising and super-resolution, demonstrate that the proposed method achieves superior performance over the current state of the art.
arXiv Detail & Related papers (2020-08-25T03:30:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.