A learning-based view extrapolation method for axial super-resolution
- URL: http://arxiv.org/abs/2103.06510v1
- Date: Thu, 11 Mar 2021 07:22:13 GMT
- Title: A learning-based view extrapolation method for axial super-resolution
- Authors: Zhaolin Xiao, Jinglei Shi, Xiaoran Jiang, Christine Guillemot
- Abstract summary: Axial light field resolution refers to the ability to distinguish features at different depths by refocusing.
We propose a learning-based method to extrapolate novel views from axial volumes of sheared epipolar plane images.
- Score: 52.748944517480155
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Axial light field resolution refers to the ability to distinguish features at
different depths by refocusing. The axial refocusing precision corresponds to
the minimum distance in the axial direction between two distinguishable
refocusing planes. High refocusing precision can be essential for some light
field applications like microscopy. In this paper, we propose a learning-based
method to extrapolate novel views from axial volumes of sheared epipolar plane
images (EPIs). As with an extended numerical aperture (NA) in classical
imaging, the extrapolated light field yields refocused images with a shallower
depth of field (DOF), leading to more accurate refocusing results. Most importantly, the
proposed approach does not need accurate depth estimation. Experimental results
with both synthetic and real light fields show that the method not only works
well for light fields with small baselines, such as those captured by plenoptic
cameras (especially plenoptic 1.0 cameras), but also applies to light
fields with larger baselines.
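To make the refocusing principle concrete, here is a minimal shift-and-add sketch in NumPy of how shearing, i.e. shifting each view in proportion to its angular offset, selects a refocusing plane, and why extrapolated views (larger angular offsets) narrow the depth of field. This is an illustrative baseline, not the paper's learning-based extrapolation network; the `views` array, its layout, and the integer-pixel shifts are assumptions.

```python
# Minimal shift-and-add refocusing sketch (illustrative; not the paper's
# learning-based method). Assumes a horizontal-parallax light field stored as
# a hypothetical `views` array of shape (n_views, H, W).
import numpy as np

def refocus(views, slope):
    """Refocus by shearing: shift each view by slope * angular offset, then average.

    views: (n_views, H, W) array; the view index is the angular coordinate u
    slope: pixels of shift per unit angular offset; selects the in-focus depth
    """
    n = views.shape[0]
    offsets = np.arange(n) - (n - 1) / 2.0      # angular offsets u, centered
    out = np.zeros(views.shape[1:], dtype=np.float64)
    for view, u in zip(views, offsets):
        shift = int(round(u * slope))           # shear in the EPI (u, x) plane
        out += np.roll(view, shift, axis=1)     # wrap-around shift, for brevity
    return out / n

# Extrapolating views beyond the captured range enlarges |u|, which acts like
# a wider synthetic aperture: out-of-focus points spread more, so the depth of
# field shrinks and nearby refocusing planes become easier to distinguish.
```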
Related papers
- Generalizable Non-Line-of-Sight Imaging with Learnable Physical Priors [52.195637608631955]
Non-line-of-sight (NLOS) imaging has attracted increasing attention due to its potential applications.
Existing NLOS reconstruction approaches are constrained by the reliance on empirical physical priors.
We introduce a novel learning-based solution comprising two key designs: Learnable Path Compensation (LPC) and Adaptive Phasor Field (APF).
arXiv Detail & Related papers (2024-09-21T04:39:45Z)
- Robust Depth Enhancement via Polarization Prompt Fusion Tuning [112.88371907047396]
We present a framework that leverages polarization imaging to improve inaccurate depth measurements from various depth sensors.
Our method first adopts a learning-based strategy where a neural network is trained to estimate a dense and complete depth map from polarization data and a sensor depth map from different sensors.
To further improve the performance, we propose a Polarization Prompt Fusion Tuning (PPFT) strategy to effectively utilize RGB-based models pre-trained on large-scale datasets.
arXiv Detail & Related papers (2024-04-05T17:55:33Z)
- Towards Real-World Focus Stacking with Deep Learning [97.34754533628322]
We introduce a new dataset consisting of 94 high-resolution bursts of raw images with focus bracketing.
This dataset is used to train the first deep learning algorithm for focus stacking capable of handling bursts of sufficient length for real-world applications (a classical baseline is sketched after this list).
arXiv Detail & Related papers (2023-11-29T17:49:33Z)
- Blur aware metric depth estimation with multi-focus plenoptic cameras [8.508198765617196]
We present a new metric depth estimation algorithm using only raw images from a multi-focus plenoptic camera.
The proposed approach is especially suited for the multi-focus configuration where several micro-lenses with different focal lengths are used.
arXiv Detail & Related papers (2023-08-08T13:38:50Z)
- End-to-end Learning for Joint Depth and Image Reconstruction from Diffracted Rotation [10.896567381206715]
We propose a novel end-to-end learning approach for depth from diffracted rotation.
Our approach requires a significantly less complex model and less training data, yet it is superior to existing methods in the task of monocular depth estimation.
arXiv Detail & Related papers (2022-04-14T16:14:37Z)
- Wild ToFu: Improving Range and Quality of Indirect Time-of-Flight Depth with RGB Fusion in Challenging Environments [56.306567220448684]
We propose a new learning based end-to-end depth prediction network which takes noisy raw I-ToF signals as well as an RGB image.
We show more than 40% RMSE improvement on the final depth map compared to the baseline approach (see the metrics sketch after this list).
arXiv Detail & Related papers (2021-12-07T15:04:14Z)
- Spatially-Variant CNN-based Point Spread Function Estimation for Blind Deconvolution and Depth Estimation in Optical Microscopy [6.09170287691728]
We present a method that improves the resolution of light microscopy images of thin, yet non-flat objects.
We estimate the parameters of a spatially-variant point spread function (PSF) model using a convolutional neural network (CNN).
Our method recovers PSF parameters from the image itself with up to a squared Pearson correlation coefficient of 0.99 in ideal conditions (see the metrics sketch after this list).
arXiv Detail & Related papers (2020-10-08T14:20:16Z)
- Learning Light Field Angular Super-Resolution via a Geometry-Aware Network [101.59693839475783]
We propose an end-to-end learning-based approach aiming at angularly super-resolving a sparsely-sampled light field with a large baseline.
Our method improves the PSNR over the second-best method by up to 2 dB on average, while being 48$\times$ faster (see the metrics sketch after this list).
arXiv Detail & Related papers (2020-02-26T02:36:57Z)
- Learning Wavefront Coding for Extended Depth of Field Imaging [4.199844472131922]
Extended depth of field (EDoF) imaging is a challenging ill-posed problem.
We propose a computational imaging approach for EDoF, where we employ wavefront coding via a diffractive optical element.
We demonstrate results with minimal artifacts in various scenarios, including deep 3D scenes and broadband imaging.
arXiv Detail & Related papers (2019-12-31T17:00:09Z)
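As a point of comparison for the focus-stacking entry above, here is a minimal classical focus-stacking baseline: per pixel, keep the burst frame with the strongest local Laplacian response. This is a generic sketch, not the paper's deep learning method; the grayscale input, window size, and array shapes are assumptions.

```python
# Classical focus-stacking baseline (generic sketch, not the paper's deep
# learning method). Assumes a grayscale focal stack of shape (n_frames, H, W).
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def focus_stack(burst):
    """Per pixel, keep the value from the frame with the highest local sharpness."""
    # Sharpness map per frame: locally averaged squared Laplacian response.
    sharpness = np.stack([
        uniform_filter(laplace(frame.astype(np.float64)) ** 2, size=9)
        for frame in burst
    ])
    best = np.argmax(sharpness, axis=0)           # sharpest frame index per pixel
    return np.take_along_axis(burst, best[None], axis=0)[0]
```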
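Several entries above report results with standard metrics (RMSE, squared Pearson correlation, PSNR). For reference, minimal generic implementations follow; these are textbook definitions, not the papers' evaluation code, and the `peak` parameter is an assumption.

```python
# Generic implementations of metrics cited in the list above (textbook
# definitions, not the papers' evaluation code).
import numpy as np

def rmse(reference, estimate):
    """Root-mean-square error between two same-shape arrays."""
    return float(np.sqrt(np.mean((reference.astype(np.float64) - estimate) ** 2)))

def squared_pearson(x, y):
    """Squared Pearson correlation coefficient r^2 between two arrays."""
    r = np.corrcoef(x.ravel(), y.ravel())[0, 1]
    return float(r ** 2)

def psnr(reference, estimate, peak=1.0):
    """Peak signal-to-noise ratio in dB; `peak` is the signal's maximum value."""
    mse = np.mean((reference.astype(np.float64) - estimate) ** 2)
    return float(10.0 * np.log10(peak ** 2 / mse))
```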
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.