Unrolled Primal-Dual Networks for Lensless Cameras
- URL: http://arxiv.org/abs/2203.04353v1
- Date: Tue, 8 Mar 2022 19:21:39 GMT
- Title: Unrolled Primal-Dual Networks for Lensless Cameras
- Authors: Oliver Kingshott, Nick Antipa, Emrah Bostan and Kaan Akşit
- Abstract summary: We show that learning a supervised primal-dual reconstruction method results in image quality matching the state of the art in the literature.
This improvement stems from our finding that embedding learnable forward and adjoint models in a learned primal-dual optimization framework improves the quality of reconstructed images.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conventional image reconstruction models for lensless cameras often assume
that each measurement results from convolving a given scene with a single
experimentally measured point-spread function. These image reconstruction
models fall short of simulating lensless cameras faithfully, as they are
not sophisticated enough to account for optical aberrations or scenes with
depth variations. Our work shows that learning a supervised primal-dual
reconstruction method yields image quality matching the state of the art in the
literature without demanding a large network capacity. This improvement stems
from our primary finding that embedding learnable forward and adjoint models in
a learned primal-dual optimization framework improves the quality of
reconstructed images (+5 dB PSNR) compared to works that do not correct for the
model error. In addition, we built a proof-of-concept lensless camera prototype
that uses a pseudo-random phase mask to demonstrate our point. Finally, we
share the extensive evaluation of our learned model based on an open dataset
and a dataset from our proof-of-concept lensless camera prototype.
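The two ingredients of the abstract can be sketched in a few lines: a convolutional forward model (measurement = scene convolved with the PSF) and an unrolled primal-dual loop built from that model and its adjoint. This is a minimal numpy sketch under stated assumptions, not the paper's method: all function names are hypothetical, and the learned proximal networks and learnable PSF corrections described in the paper are replaced here by fixed PDHG-style step sizes for brevity.

```python
import numpy as np

def forward(x, H):
    """Forward model A: circular convolution with the PSF (Fourier multiply)."""
    return np.real(np.fft.ifft2(np.fft.fft2(x) * H))

def adjoint(r, H):
    """Adjoint A^T: correlation with the PSF (conjugate Fourier multiply)."""
    return np.real(np.fft.ifft2(np.fft.fft2(r) * np.conj(H)))

def unrolled_primal_dual(y, psf, n_stages=10, sigma=0.5, tau=0.5):
    """PDHG-style unrolled reconstruction of x from a measurement y = A x.

    In a learned primal-dual network, each of the n_stages has its own
    trainable parameters (and, per the paper, learnable forward/adjoint
    corrections); here every stage shares the same fixed step sizes.
    """
    H = np.fft.fft2(np.fft.ifftshift(psf))  # PSF transfer function
    x = np.zeros_like(y)                    # primal variable (the scene)
    u = np.zeros_like(y)                    # dual variable for the data-fit term
    x_bar = x.copy()                        # over-relaxed primal iterate
    for _ in range(n_stages):
        # Dual step: prox of the conjugate of 0.5*||. - y||^2
        u = (u + sigma * (forward(x_bar, H) - y)) / (1.0 + sigma)
        # Primal step: descend through the adjoint of the forward model
        x_new = x - tau * adjoint(u, H)
        x_bar = 2.0 * x_new - x             # over-relaxation
        x = x_new
    return x
```

With a normalized PSF the operator norm of the circulant forward model is at most 1, so `sigma * tau <= 1` keeps the fixed-point iteration stable; a learned network would instead train these step sizes (and the per-stage proximal mappings) end to end.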
Related papers
- GANESH: Generalizable NeRF for Lensless Imaging [12.985055542373791]
We introduce GANESH, a novel framework designed to enable simultaneous refinement and novel view synthesis from lensless images.
Unlike existing methods that require scene-specific training, our approach supports on-the-fly inference without retraining on each scene.
To facilitate research in this area, we also present the first multi-view lensless dataset, LenslessScenes.
arXiv Detail & Related papers (2024-11-07T15:47:07Z)
- PhoCoLens: Photorealistic and Consistent Reconstruction in Lensless Imaging [19.506766336040247]
Lensless cameras offer significant advantages in size, weight, and cost compared to traditional lens-based systems.
Current algorithms struggle with inaccurate forward imaging models and insufficient priors to reconstruct high-quality images.
We introduce a novel two-stage approach for consistent and photorealistic lensless image reconstruction.
arXiv Detail & Related papers (2024-09-26T16:07:24Z)
- DifuzCam: Replacing Camera Lens with a Mask and a Diffusion Model [31.43307762723943]
The flat lensless camera design reduces the camera size and weight significantly.
The image is recovered from the raw sensor measurements using a reconstruction algorithm.
We propose utilizing a pre-trained diffusion model with a control network and a learned separable transformation for reconstruction.
arXiv Detail & Related papers (2024-08-14T13:20:52Z)
- Learning Robust Multi-Scale Representation for Neural Radiance Fields from Unposed Images [65.41966114373373]
We present an improved solution to the neural image-based rendering problem in computer vision.
The proposed approach could synthesize a realistic image of the scene from a novel viewpoint at test time.
arXiv Detail & Related papers (2023-11-08T08:18:23Z)
- Neural Lens Modeling [50.57409162437732]
NeuroLens is a neural lens model for distortion and vignetting that can be used for point projection and ray casting.
It can be used to perform pre-capture calibration using classical calibration targets, and can later be used to perform calibration or refinement during 3D reconstruction.
The model generalizes across many lens types and is trivial to integrate into existing 3D reconstruction and rendering systems.
arXiv Detail & Related papers (2023-04-10T20:09:17Z)
- VMRF: View Matching Neural Radiance Fields [57.93631771072756]
VMRF is an innovative view matching NeRF that enables effective NeRF training without requiring prior knowledge of camera poses or camera pose distributions.
VMRF introduces a view matching scheme, which exploits unbalanced optimal transport to produce a feature transport plan for mapping a rendered image with a randomly sampled camera pose to the corresponding real image.
With the feature transport plan as the guidance, a novel pose calibration technique is designed which rectifies the initially randomized camera poses by predicting relative pose between the pair of rendered and real images.
arXiv Detail & Related papers (2022-07-06T12:26:40Z)
- Coded Illumination for Improved Lensless Imaging [22.992552346745523]
We propose to use coded illumination to improve the quality of images reconstructed with lensless cameras.
In our imaging model, the scene/object is illuminated by multiple coded illumination patterns as the lensless camera records sensor measurements.
We propose a fast and low-complexity recovery algorithm that exploits the separability and block-diagonal structure in our system.
arXiv Detail & Related papers (2021-11-25T01:22:40Z)
- Universal and Flexible Optical Aberration Correction Using Deep-Prior Based Deconvolution [51.274657266928315]
We propose a PSF aware plug-and-play deep network, which takes the aberrant image and PSF map as input and produces the latent high quality version via incorporating lens-specific deep priors.
Specifically, we pre-train a base model from a set of diverse lenses and then adapt it to a given lens by quickly refining the parameters.
arXiv Detail & Related papers (2021-04-07T12:00:38Z)
- SIR: Self-supervised Image Rectification via Seeing the Same Scene from Multiple Different Lenses [82.56853587380168]
We propose a novel self-supervised image rectification (SIR) method based on an important insight that the rectified results of distorted images of the same scene from different lenses should be the same.
We leverage a differentiable warping module to generate the rectified images and re-distorted images from the distortion parameters.
Our method achieves comparable or even better performance than the supervised baseline method and representative state-of-the-art methods.
arXiv Detail & Related papers (2020-11-30T08:23:25Z)
- Wide-angle Image Rectification: A Survey [86.36118799330802]
Wide-angle images contain distortions that violate the assumptions underlying pinhole camera models.
Image rectification, which aims to correct these distortions, can solve these problems.
We present a detailed description and discussion of the camera models used in different approaches.
Next, we review both traditional geometry-based image rectification methods and deep learning-based methods.
arXiv Detail & Related papers (2020-10-30T17:28:40Z)
- FlatNet: Towards Photorealistic Scene Reconstruction from Lensless Measurements [31.353395064815892]
We propose a non-iterative deep learning based reconstruction approach that results in orders of magnitude improvement in image quality for lensless reconstructions.
Our approach, called FlatNet, lays down a framework for reconstructing high-quality photorealistic images from mask-based lensless cameras.
arXiv Detail & Related papers (2020-10-29T09:20:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.