Efficient computation of backprojection arrays for 3D light field
deconvolution
- URL: http://arxiv.org/abs/2003.09133v2
- Date: Mon, 10 May 2021 08:15:32 GMT
- Title: Efficient computation of backprojection arrays for 3D light field
deconvolution
- Authors: Martin Eberhart
- Abstract summary: Light field deconvolution allows three-dimensional investigations from a single snapshot recording of a plenoptic camera.
It is based on a linear image formation model, and iterative volume reconstruction requires defining the backprojection of individual image pixels into object space.
A new algorithm is presented to determine H' from H, based on the distinct relation between the elements' positions within the two multi-dimensional arrays.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Light field deconvolution allows three-dimensional investigations from a
single snapshot recording of a plenoptic camera. It is based on a linear image
formation model, and iterative volume reconstruction requires defining the
backprojection of individual image pixels into object space. This is
effectively a reversal of the point spread function (PSF), and backprojection
arrays H' can be derived from the shift-variant PSFs H of the optical system,
which is a very time-consuming step for high-resolution cameras. This paper
illustrates the common structure of backprojection arrays and the significance
of their efficient computation. A new algorithm is presented to determine H'
from H, which is based on the distinct relation of the elements' positions
within the two multi-dimensional arrays. It permits a pure array
re-arrangement, and while results are identical to those from published codes,
computation times are drastically reduced. This is shown by benchmarking the
new method using various sample PSF arrays against existing algorithms. The
paper is complemented by practical hints for the experimental acquisition of
light field PSFs in a photographic setup.
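The abstract's central claim, that H' can be obtained from H by pure index re-arrangement rather than by re-computation, can be illustrated with a deliberately simplified 1D analogue. The array names, shapes, and indexing convention below are illustrative assumptions and do not reproduce the paper's actual 5D light-field layout; the point is only that the adjoint of a periodically shift-variant PSF operator uses kernels that are permuted copies of the forward kernels:

```python
import numpy as np

# Toy 1D analogue of the H -> H' re-arrangement (hypothetical indexing).
# Forward model with a periodically shift-variant PSF of period N:
#   y[m] = sum_k H[m % N, k] * x[m - k]
# The adjoint (backprojection) uses kernels H' obtained purely by
# re-indexing H, with no arithmetic on the values:
#   H'[p, j] = H[(p + j) % N, j]
#   x'[n] = sum_j H'[n % N, j] * y[n + j]

def forward(H, x, N):
    """Apply the shift-variant forward model to a signal x."""
    K, M = H.shape[1], len(x)
    y = np.zeros(M)
    for m in range(M):
        for k in range(K):
            if 0 <= m - k < M:
                y[m] += H[m % N, k] * x[m - k]
    return y

def rearrange(H, N):
    """Build the backprojection kernels H' by index permutation only."""
    K = H.shape[1]
    Hp = np.zeros_like(H)
    for p in range(N):
        for j in range(K):
            Hp[p, j] = H[(p + j) % N, j]
    return Hp

def backproject(Hp, y, N):
    """Apply the adjoint operator using the re-arranged kernels."""
    K, M = Hp.shape[1], len(y)
    xp = np.zeros(M)
    for n in range(M):
        for j in range(K):
            if 0 <= n + j < M:
                xp[n] += Hp[n % N, j] * y[n + j]
    return xp

# Verify that the re-arranged kernels realize the exact adjoint A^T:
rng = np.random.default_rng(0)
N, K, M = 3, 4, 12
H = rng.standard_normal((N, K))
A = np.array([forward(H, e, N) for e in np.eye(M)]).T   # forward as a matrix
Hp = rearrange(H, N)
At = np.array([backproject(Hp, e, N) for e in np.eye(M)]).T
assert np.allclose(At, A.T)
```

Because the adjoint kernels are permuted copies of the forward kernels, constructing H' costs only index arithmetic, which mirrors why a pure array re-arrangement is so much faster than recomputing the backprojection from the optical model.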
Related papers
- Optimized Sampling for Non-Line-of-Sight Imaging Using Modified Fast Fourier Transforms [6.866110149269]
Non-line-of-Sight (NLOS) imaging systems collect light at a diffuse relay surface and input this measurement into computational algorithms that output a 3D reconstruction.
These algorithms utilize the Fast Fourier Transform (FFT) to accelerate the reconstruction process but require both input and output to be sampled spatially with uniform grids.
In this work, we demonstrate that existing NLOS imaging setups typically oversample the relay surface spatially, explaining why the measurement can be compressed without sacrificing reconstruction quality.
arXiv Detail & Related papers (2025-01-09T13:52:30Z) - Photoacoustic Iterative Optimization Algorithm with Shape Prior Regularization [18.99190657089862]
Photoacoustic imaging (PAI) suffers from inherent limitations that can degrade the quality of reconstructed results.
We propose a new optimization method for both 2D and 3D PAI reconstruction results, called the regularized iteration method with shape prior.
arXiv Detail & Related papers (2024-12-01T07:02:36Z) - GIR: 3D Gaussian Inverse Rendering for Relightable Scene Factorization [62.13932669494098]
This paper presents a 3D Gaussian Inverse Rendering (GIR) method, employing 3D Gaussian representations to factorize the scene into material properties, light, and geometry.
We compute the normal of each 3D Gaussian using the shortest eigenvector, with a directional masking scheme forcing accurate normal estimation without external supervision.
We adopt an efficient voxel-based indirect illumination tracing scheme that stores direction-aware outgoing radiance in each 3D Gaussian to disentangle secondary illumination for approximating multi-bounce light transport.
arXiv Detail & Related papers (2023-12-08T16:05:15Z) - Resolving Multiphoton Coincidences in Single-Photon Detector Arrays with
Row-Column Readouts [8.99464235494883]
Row-column multiplexing has proven to be an effective strategy in scaling single-photon detector arrays to kilopixel and megapixel spatial resolutions.
We propose a method to resolve up to 4-photon coincidences in single-photon detector arrays with row-column readouts.
arXiv Detail & Related papers (2023-12-05T18:58:43Z) - RNb-NeuS: Reflectance and Normal-based Multi-View 3D Reconstruction [3.1820300989695833]
This paper introduces a versatile paradigm for integrating multi-view reflectance and normal maps acquired through photometric stereo.
Our approach employs a pixel-wise joint re-parameterization of reflectance and normal, considering them as a vector of radiances rendered under simulated, varying illumination.
It significantly improves the detailed 3D reconstruction of areas with high curvature or low visibility.
arXiv Detail & Related papers (2023-12-02T19:49:27Z) - Phase Guided Light Field for Spatial-Depth High Resolution 3D Imaging [36.208109063579066]
For 3D imaging, light field cameras typically operate in a single shot, but they suffer heavily from low spatial resolution and poor depth accuracy.
We propose a phase guided light field algorithm to significantly improve both the spatial and depth resolutions for off-the-shelf light field cameras.
arXiv Detail & Related papers (2023-11-17T15:08:15Z) - Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation network unfolding (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv Detail & Related papers (2023-08-10T07:53:06Z) - Variable Radiance Field for Real-World Category-Specific Reconstruction from Single Image [25.44715538841181]
Reconstructing category-specific objects using Neural Radiance Field (NeRF) from a single image is a promising yet challenging task.
We propose Variable Radiance Field (VRF), a novel framework capable of efficiently reconstructing category-specific objects.
VRF achieves state-of-the-art performance in both reconstruction quality and computational efficiency.
arXiv Detail & Related papers (2023-06-08T12:12:02Z) - $PC^2$: Projection-Conditioned Point Cloud Diffusion for Single-Image 3D
Reconstruction [97.06927852165464]
Reconstructing the 3D shape of an object from a single RGB image is a long-standing and highly challenging problem in computer vision.
We propose a novel method for single-image 3D reconstruction which generates a sparse point cloud via a conditional denoising diffusion process.
arXiv Detail & Related papers (2023-02-21T13:37:07Z) - Rank-Enhanced Low-Dimensional Convolution Set for Hyperspectral Image
Denoising [50.039949798156826]
This paper tackles the challenging problem of hyperspectral (HS) image denoising.
We propose a rank-enhanced low-dimensional convolution set (Re-ConvSet).
We then incorporate Re-ConvSet into the widely-used U-Net architecture to construct an HS image denoising method.
arXiv Detail & Related papers (2022-07-09T13:35:12Z) - Hybrid Mesh-neural Representation for 3D Transparent Object
Reconstruction [30.66452291775852]
We propose a novel method to reconstruct the 3D shapes of transparent objects using hand-held captured images under natural light conditions.
It combines the advantage of explicit mesh and multi-layer perceptron (MLP) network, a hybrid representation, to simplify the capture used in recent contributions.
arXiv Detail & Related papers (2022-03-23T17:58:56Z) - Neural Radiance Fields Approach to Deep Multi-View Photometric Stereo [103.08512487830669]
We present a modern solution to the multi-view photometric stereo problem (MVPS).
We procure the surface orientation using a photometric stereo (PS) image formation model and blend it with a multi-view neural radiance field representation to recover the object's surface geometry.
Our method performs neural rendering of multi-view images while utilizing surface normals estimated by a deep photometric stereo network.
arXiv Detail & Related papers (2021-10-11T20:20:03Z) - Efficient Computation of Higher Order 2D Image Moments using the
Discrete Radon Transform [0.0]
We extend an efficient algorithm based on the Discrete Radon Transform to generate moments greater than the 3rd order.
Results of scaling the algorithm based on image area and its computational comparison with a standard method demonstrate the efficacy of the approach.
arXiv Detail & Related papers (2020-09-04T15:26:03Z) - Lightweight Multi-View 3D Pose Estimation through Camera-Disentangled
Representation [57.11299763566534]
We present a solution to recover 3D pose from multi-view images captured with spatially calibrated cameras.
We exploit 3D geometry to fuse input images into a unified latent representation of pose, which is disentangled from camera view-points.
Our architecture then conditions the learned representation on camera projection operators to produce accurate per-view 2d detections.
arXiv Detail & Related papers (2020-04-05T12:52:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.