Self-Supervised Spatially Variant PSF Estimation for Aberration-Aware
Depth-from-Defocus
- URL: http://arxiv.org/abs/2402.18175v1
- Date: Wed, 28 Feb 2024 09:07:26 GMT
- Title: Self-Supervised Spatially Variant PSF Estimation for Aberration-Aware
Depth-from-Defocus
- Authors: Zhuofeng Wu, Yusuke Monno, and Masatoshi Okutomi
- Abstract summary: We propose a novel self-supervised learning method for aberration-aware depth-from-defocus (DfD)
In our PSF estimation, we assume rotationally symmetric PSFs and introduce the polar coordinate system.
We also handle the focus breathing phenomenon that occurs in real DfD situations.
- Score: 14.383129822833155
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: In this paper, we address the task of aberration-aware depth-from-defocus
(DfD), which takes into account the spatially variant point spread functions
(PSFs) of a real camera. To obtain the spatially variant PSFs of a real camera
without requiring any ground-truth PSFs, we propose a novel self-supervised
learning method that leverages a pair of real sharp and blurred images, which
can easily be captured by changing the aperture setting of the camera. In our
PSF estimation, we assume rotationally symmetric PSFs and introduce a polar
coordinate system to learn the PSF estimation network more accurately. We also
handle the focus breathing phenomenon that occurs in real DfD situations.
Experimental results on synthetic and real data demonstrate the effectiveness
of our method for both PSF estimation and depth estimation.
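
The abstract's core self-supervision signal lends itself to a short sketch: blur the sharp image with the estimated PSF and penalize the difference to the real blurred image, so no ground-truth PSF is ever needed. Below is a minimal PyTorch rendering under stated assumptions; the names (RadialPSFNet, reblur_loss), the MLP architecture, the kernel size, and the nearest-bin radial sampling are illustrative choices, not the authors' implementation, and the sketch ignores the paper's focus-breathing handling.

```python
# Minimal sketch of a self-supervised reblur loss with a rotationally
# symmetric PSF (hypothetical code, not the authors'). A small MLP maps a
# normalized field radius (distance from the optical center) to a 1-D radial
# profile; rotational symmetry expands that profile into a 2-D kernel, which
# reblurs the sharp image for comparison against the real blurred one.
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 21  # assumed kernel size

class RadialPSFNet(nn.Module):
    """Predicts a rotationally symmetric PSF from a normalized field radius."""
    def __init__(self, n_bins=K // 2 + 1):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(1, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_bins),  # one PSF value per radius bin
        )

    def forward(self, field_radius):  # field_radius: (1, 1) tensor in [0, 1]
        profile = F.softplus(self.mlp(field_radius))  # (1, n_bins), nonnegative
        # Expand the 1-D profile to 2-D: sample it at each pixel's distance
        # from the kernel center (nearest-bin lookup for simplicity).
        ys, xs = torch.meshgrid(torch.arange(K) - K // 2,
                                torch.arange(K) - K // 2, indexing="ij")
        r = torch.sqrt(ys.float() ** 2 + xs.float() ** 2)   # (K, K)
        r_idx = r.clamp(max=K // 2).long().view(-1)
        kernel = profile[:, r_idx].view(1, 1, K, K)
        return kernel / kernel.sum(dim=(2, 3), keepdim=True)  # sums to 1

def reblur_loss(net, sharp, blurred, field_radius):
    """Self-supervision: the sharp patch, blurred by the estimated PSF,
    should reproduce the co-registered real blurred patch."""
    kernel = net(field_radius)
    reblurred = F.conv2d(sharp, kernel, padding=K // 2)
    return F.l1_loss(reblurred, blurred)

# Toy usage; a real pair comes from shooting the same scene at a small
# aperture (sharp) and a large aperture (blurred).
net = RadialPSFNet()
sharp = torch.rand(1, 1, 64, 64)
blurred = sharp.clone()  # placeholder for the captured blurred image
loss = reblur_loss(net, sharp, blurred, torch.tensor([[0.5]]))
loss.backward()
```

In practice one such loss would be accumulated over patches at many field radii, which is where the polar-coordinate parameterization pays off: a single radial profile per radius covers every orientation at once.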
Related papers
- FAFA: Frequency-Aware Flow-Aided Self-Supervision for Underwater Object Pose Estimation [65.01601309903971]
We introduce FAFA, a Frequency-Aware Flow-Aided self-supervised framework for 6D pose estimation of unmanned underwater vehicles (UUVs).
Our framework relies solely on the 3D model and RGB images, alleviating the need for any real pose annotations or other modalities such as depth.
We evaluate the effectiveness of FAFA on common underwater object pose benchmarks and showcase significant performance improvements compared to state-of-the-art methods.
arXiv Detail & Related papers (2024-09-25T03:54:01Z)
- Towards Single-Lens Controllable Depth-of-Field Imaging via All-in-Focus Aberration Correction and Monocular Depth Estimation [19.312034704019634]
Controllable Depth-of-Field (DoF) imaging typically relies on heavy, expensive high-end lenses to produce its visual effects.
This work addresses two major limitations of Minimalist Optical Systems (MOS) in order to achieve single-lens controllable DoF imaging via computational methods.
A Depth-aware Controllable DoF Imaging (DCDI) framework is proposed, equipped with All-in-Focus (AiF) aberration correction and monocular depth estimation.
With the predicted depth map, the recovered image, and the depth-aware PSF map inferred by Omni-Lens-Field, single-lens controllable DoF imaging is achieved.
arXiv Detail & Related papers (2024-09-15T14:52:16Z)
- Robust Depth Enhancement via Polarization Prompt Fusion Tuning [112.88371907047396]
We present a framework that leverages polarization imaging to improve inaccurate depth measurements from various depth sensors.
Our method first adopts a learning-based strategy where a neural network is trained to estimate a dense and complete depth map from polarization data and a sensor depth map from different sensors.
To further improve the performance, we propose a Polarization Prompt Fusion Tuning (PPFT) strategy to effectively utilize RGB-based models pre-trained on large-scale datasets.
arXiv Detail & Related papers (2024-04-05T17:55:33Z)
- Fully Self-Supervised Depth Estimation from Defocus Clue [79.63579768496159]
We propose a self-supervised framework that estimates depth purely from a sparse focal stack.
We show that our framework circumvents the need for depth and AiF image ground truth and achieves superior predictions.
arXiv Detail & Related papers (2023-03-19T19:59:48Z)
- End-to-end Learning for Joint Depth and Image Reconstruction from Diffracted Rotation [10.896567381206715]
We propose a novel end-to-end learning approach for depth from diffracted rotation.
Our approach requires a significantly less complex model and less training data, yet it is superior to existing methods in the task of monocular depth estimation.
arXiv Detail & Related papers (2022-04-14T16:14:37Z)
- Deep Depth from Focal Stack with Defocus Model for Camera-Setting Invariance [19.460887007137607]
We propose a learning-based depth from focus/defocus (DFF) method that takes a focal stack as input for estimating scene depth; a minimal sketch of the thin-lens defocus model such methods build on appears after this list.
We show that our method is robust against the synthetic-to-real domain gap and exhibits state-of-the-art performance.
arXiv Detail & Related papers (2022-02-26T04:21:08Z)
- Wild ToFu: Improving Range and Quality of Indirect Time-of-Flight Depth with RGB Fusion in Challenging Environments [56.306567220448684]
We propose a new learning-based end-to-end depth prediction network that takes noisy raw I-ToF signals as well as an RGB image as input.
We show more than 40% RMSE improvement on the final depth map compared to the baseline approach.
arXiv Detail & Related papers (2021-12-07T15:04:14Z)
- Universal and Flexible Optical Aberration Correction Using Deep-Prior Based Deconvolution [51.274657266928315]
We propose a PSF-aware plug-and-play deep network that takes the aberrant image and a PSF map as input and produces the latent high-quality version by incorporating lens-specific deep priors; a classical Wiener-filter counterpart to this kind of PSF-aware deconvolution is sketched after this list.
Specifically, we pre-train a base model on a set of diverse lenses and then adapt it to a given lens by quickly refining the parameters.
arXiv Detail & Related papers (2021-04-07T12:00:38Z)
- A learning-based view extrapolation method for axial super-resolution [52.748944517480155]
Axial light field resolution refers to the ability to distinguish features at different depths by refocusing.
We propose a learning-based method to extrapolate novel views from axial volumes of sheared epipolar plane images.
arXiv Detail & Related papers (2021-03-11T07:22:13Z)
- Point Spread Function Estimation for Wide Field Small Aperture Telescopes with Deep Neural Networks and Calibration Data [11.909250072362264]
The point spread function (PSF) reflects the state of a telescope.
Estimating the PSF at any position across the whole field of view is hard because aberrations induced by the optical system are quite complex.
We further develop our deep neural network (DNN) based PSF modelling method and show its applications in PSF estimation.
arXiv Detail & Related papers (2020-11-20T07:26:02Z)
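
As a point of reference for the defocus-model entries above (see the Deep Depth from Focal Stack summary), here is a minimal sketch of the thin-lens defocus model such methods build on. The function name and units are my own choices, and real pipelines additionally fold in the sensor pixel pitch and per-camera calibration.

```python
# Thin-lens circle-of-confusion (CoC) model (hypothetical helper, not code
# from any paper above). The CoC diameter ties scene depth to blur size for
# a given camera setting, which is what lets focal-stack methods reason
# about depth consistently across focus and aperture changes.
def coc_diameter(depth_m, focus_dist_m, focal_len_m, f_number):
    """CoC diameter on the sensor, in meters, for a point at depth_m."""
    aperture_m = focal_len_m / f_number  # entrance-pupil diameter
    return (aperture_m
            * abs(depth_m - focus_dist_m) / depth_m
            * focal_len_m / (focus_dist_m - focal_len_m))

# Example: a 50 mm f/1.8 lens focused at 2 m, scene point at 3 m.
c = coc_diameter(depth_m=3.0, focus_dist_m=2.0, focal_len_m=0.050, f_number=1.8)
print(f"CoC = {c * 1e6:.1f} um")  # blur-spot diameter in micrometers
```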
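
Likewise, as a classical counterpart to the PSF-aware deconvolution entry above, the standard non-learned baseline is a frequency-domain Wiener filter. This sketch assumes a single spatially uniform PSF and periodic boundaries, whereas the papers above target spatially variant aberrations.

```python
# Classical Wiener deconvolution (a non-learned baseline, not the deep-prior
# method above). Assumes one spatially uniform PSF; spatially variant PSFs
# need patch-wise application or a learned model.
import numpy as np

def wiener_deconvolve(blurred, psf, snr=100.0):
    """Restore an image from its blurred version and known PSF."""
    H = np.fft.fft2(psf, s=blurred.shape)          # PSF transfer function
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)  # Wiener filter
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

# Toy usage: blur a random image with a 5x5 box PSF, then restore it.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
psf = np.ones((5, 5)) / 25.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
restored = wiener_deconvolve(blurred, psf)
print(f"mean abs residual: {np.abs(restored - img).mean():.4f}")
```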