PlenoptiCam v1.0: A light-field imaging framework
- URL: http://arxiv.org/abs/2010.11687v5
- Date: Sun, 25 Jul 2021 17:38:22 GMT
- Title: PlenoptiCam v1.0: A light-field imaging framework
- Authors: Christopher Hahne and Amar Aggoun
- Abstract summary: Light-field cameras play a vital role in rich 3-D information retrieval for narrow-range depth sensing applications.
A key obstacle in composing light-fields from exposures taken by a plenoptic camera is to computationally calibrate, align and rearrange four-dimensional image data.
Several approaches have been proposed to enhance the overall image quality by tailoring pipelines to particular plenoptic cameras.
- Score: 8.467466998915018
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Light-field cameras play a vital role in rich 3-D information retrieval for narrow-range depth sensing applications. The key obstacle in composing light-fields from exposures taken by a plenoptic camera is to computationally calibrate, align and rearrange four-dimensional image data. Several approaches have been proposed to enhance the overall image quality by tailoring pipelines to particular plenoptic cameras and improving the consistency across viewpoints, at the expense of high computational loads. The framework presented herein advances prior work through a novel micro image scale-space analysis for generic camera calibration, independent of the lens specifications, and a parallax-invariant, cost-effective viewpoint color equalization based on optimal transport theory. Artifacts from the sensor and micro lens grid are compensated in an innovative way to enable superior quality in sub-aperture image extraction, computational refocusing and Scheimpflug rendering with sub-sampling capabilities. Benchmark comparisons using established image metrics suggest that our proposed pipeline outperforms state-of-the-art tool chains in the majority of cases. Results from a Wasserstein distance further show that our color transfer outdoes existing transport methods. Our algorithms are released under an open-source license, offer cross-platform compatibility with few dependencies, and provide several user interfaces. This makes reproducing results and experimenting with plenoptic camera technology convenient for peer researchers, developers, photographers, data scientists and others working in this field.
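The viewpoint color-equalization and Wasserstein claims above can be illustrated with a much simpler baseline: per-channel mean/variance matching between two viewpoints, scored by the closed-form 1-D Wasserstein distance. This is a hedged sketch, not PlenoptiCam's actual optimal-transport method; the function names and toy data are illustrative only.

```python
import numpy as np

def match_color_stats(src, ref):
    """Per-channel mean/variance alignment (Reinhard-style).

    A crude, hypothetical stand-in for the parallax-invariant
    optimal-transport color equalization described in the abstract."""
    out = np.empty(src.shape, dtype=float)
    for c in range(src.shape[-1]):
        s = src[..., c].astype(float)
        r = ref[..., c].astype(float)
        # shift/scale the source channel onto the reference statistics
        out[..., c] = (s - s.mean()) / (s.std() + 1e-8) * r.std() + r.mean()
    return out

def wasserstein_1d(a, b):
    """1-D Wasserstein-1 distance between two equally sized samples:
    the mean absolute difference of their sorted values."""
    return np.abs(np.sort(a.ravel()) - np.sort(b.ravel())).mean()
```

Matching a viewpoint's channel statistics to a reference view should shrink the per-channel Wasserstein distance; the actual pipeline applies a proper multi-dimensional transport rather than this channel-wise shortcut.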
Related papers
- Redundancy-Aware Camera Selection for Indoor Scene Neural Rendering [54.468355408388675]
We build a similarity matrix that incorporates both the spatial diversity of the cameras and the semantic variation of the images.
We apply a diversity-based sampling algorithm to optimize the camera selection.
We also develop a new dataset, IndoorTraj, which includes long and complex camera movements captured by humans in virtual indoor environments.
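The diversity-based sampling idea above can be sketched as greedy farthest-point selection over a precomputed camera similarity matrix. This is not the paper's exact algorithm, only an illustration of selecting a low-redundancy camera subset; all names are hypothetical.

```python
import numpy as np

def select_diverse(sim, k, start=0):
    """Greedy farthest-point camera selection on a similarity matrix.

    sim: (n, n) symmetric matrix, larger values = more redundant pair.
    Repeatedly picks the camera whose worst-case similarity to the
    already chosen set is lowest."""
    chosen = [start]
    closest = sim[start].astype(float).copy()  # similarity to chosen set
    for _ in range(k - 1):
        closest[chosen] = np.inf               # never re-pick a camera
        nxt = int(np.argmin(closest))          # least redundant candidate
        chosen.append(nxt)
        closest = np.maximum(closest, sim[nxt])
    return chosen
```

Using the worst-case (maximum) similarity to the chosen set makes the greedy step a max-min criterion, a common surrogate when exact subset selection is intractable.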
arXiv Detail & Related papers (2024-09-11T08:36:49Z)
- Fine Dense Alignment of Image Bursts through Camera Pose and Depth Estimation [45.11207941777178]
This paper introduces a novel approach to the fine alignment of images in a burst captured by a handheld camera.
The proposed algorithm establishes dense correspondences by optimizing both the camera motion and surface depth and orientation at every pixel.
arXiv Detail & Related papers (2023-12-08T17:22:04Z)
- Multi-Modal Dataset Acquisition for Photometrically Challenging Objects [56.30027922063559]
This paper addresses the limitations of current datasets for 3D vision tasks in terms of accuracy, size, realism, and suitable imaging modalities for photometrically challenging objects.
We propose a novel annotation and acquisition pipeline that enhances existing 3D perception and 6D object pose datasets.
arXiv Detail & Related papers (2023-08-21T10:38:32Z)
- Thin On-Sensor Nanophotonic Array Cameras [36.981384762023794]
We introduce flat nanophotonic computational cameras as an alternative to commodity cameras.
The optical array is embedded on a metasurface that, at 700nm height, is flat and sits on the sensor cover glass at 2.5mm focal distance from the sensor.
We reconstruct a megapixel image from our flat imager with a learned probabilistic reconstruction method that employs a generative diffusion model to sample an implicit prior.
arXiv Detail & Related papers (2023-08-05T06:04:07Z)
- Ultra-High-Definition Low-Light Image Enhancement: A Benchmark and Transformer-Based Method [51.30748775681917]
We consider the task of low-light image enhancement (LLIE) and introduce a large-scale database consisting of images at 4K and 8K resolution.
We conduct systematic benchmarking studies and provide a comparison of current LLIE algorithms.
As a second contribution, we introduce LLFormer, a transformer-based low-light enhancement method.
arXiv Detail & Related papers (2022-12-22T09:05:07Z)
- Learning rich optical embeddings for privacy-preserving lensless image classification [17.169529483306103]
We exploit the unique multiplexing property of lensless imaging by casting the optics as an encoder that produces learned embeddings directly at the camera sensor.
We do so in the context of image classification, where we jointly optimize the encoder's parameters and those of an image classifier in an end-to-end fashion.
Our experiments show that jointly learning the lensless optical encoder and the digital processing allows for lower-resolution embeddings at the sensor and hence better privacy, as it is much harder to recover meaningful images from these measurements.
arXiv Detail & Related papers (2022-06-03T07:38:09Z)
- GenISP: Neural ISP for Low-Light Machine Cognition [19.444297600977546]
In low-light conditions, object detectors using raw image data are more robust than detectors using image data processed by an ISP pipeline.
We propose a minimal neural ISP pipeline for machine cognition, named GenISP, that explicitly incorporates Color Space Transformation to a device-independent color space.
arXiv Detail & Related papers (2022-05-07T17:17:24Z)
- MC-Blur: A Comprehensive Benchmark for Image Deblurring [127.6301230023318]
In most real-world images, blur is caused by different factors, e.g., motion and defocus.
We construct a new large-scale multi-cause image deblurring dataset (called MC-Blur).
Based on the MC-Blur dataset, we conduct extensive benchmarking studies to compare SOTA methods in different scenarios.
arXiv Detail & Related papers (2021-12-01T02:10:42Z)
- Universal and Flexible Optical Aberration Correction Using Deep-Prior Based Deconvolution [51.274657266928315]
We propose a PSF aware plug-and-play deep network, which takes the aberrant image and PSF map as input and produces the latent high quality version via incorporating lens-specific deep priors.
Specifically, we pre-train a base model from a set of diverse lenses and then adapt it to a given lens by quickly refining the parameters.
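As a point of reference for the PSF-aware restoration described above, classical (non-learned) deconvolution with a known PSF can be done with a Wiener filter. This is a minimal sketch, not the paper's network; it assumes circular-convolution blur and a flat signal-to-noise ratio.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, snr=1000.0):
    """Classical Wiener deconvolution with a known PSF.

    Assumes the blur is a circular convolution; `snr` sets the
    regularization strength (higher snr = weaker regularization)."""
    H = np.fft.fft2(psf, s=blurred.shape)          # PSF transfer function
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)  # regularized inverse filter
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))
```

The 1/snr term keeps the inverse filter bounded where the PSF's spectrum vanishes, which is exactly where a plain inverse filter would amplify noise without limit.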
arXiv Detail & Related papers (2021-04-07T12:00:38Z)
- Multi-Dimension Fusion Network for Light Field Spatial Super-Resolution using Dynamic Filters [23.82780431526054]
We introduce a novel learning-based framework to improve the spatial resolution of light fields.
Our reconstructed images also show sharp details and distinct lines in both sub-aperture images and epipolar plane images.
arXiv Detail & Related papers (2020-08-26T09:05:07Z) - Multi-View Photometric Stereo: A Robust Solution and Benchmark Dataset
for Spatially Varying Isotropic Materials [65.95928593628128]
We present a method to capture both 3D shape and spatially varying reflectance with a multi-view photometric stereo technique.
Our algorithm is suitable for perspective cameras and nearby point light sources.
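The single-view Lambertian case that multi-view photometric stereo generalizes has a closed-form least-squares solution. A textbook sketch (not the paper's method, which handles perspective cameras and nearby point lights):

```python
import numpy as np

def photometric_stereo(images, lights):
    """Classic single-view Lambertian photometric stereo with
    distant, known light sources.

    images: (m, h, w) intensities under m lights
    lights: (m, 3) unit light directions
    Returns per-pixel unit normals (h, w, 3) and albedo (h, w)."""
    m, h, w = images.shape
    I = images.reshape(m, -1)                       # stack pixels as columns
    # least-squares solve of lights @ g = I, where g = albedo * normal
    G, *_ = np.linalg.lstsq(lights, I, rcond=None)
    albedo = np.linalg.norm(G, axis=0)
    normals = (G / np.maximum(albedo, 1e-8)).T.reshape(h, w, 3)
    return normals, albedo.reshape(h, w)
```

With three or more non-coplanar lights the system is overdetermined per pixel, so normals and albedo are recovered jointly from the magnitude and direction of the least-squares solution.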
arXiv Detail & Related papers (2020-01-18T12:26:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.