Multi-Dimension Fusion Network for Light Field Spatial Super-Resolution
using Dynamic Filters
- URL: http://arxiv.org/abs/2008.11449v1
- Date: Wed, 26 Aug 2020 09:05:07 GMT
- Title: Multi-Dimension Fusion Network for Light Field Spatial Super-Resolution
using Dynamic Filters
- Authors: Qingyan Sun, Shuo Zhang, Song Chang, Lixi Zhu and Youfang Lin
- Abstract summary: We introduce a novel learning-based framework to improve the spatial resolution of light fields.
Our reconstructed images also show sharp details and distinct lines in both sub-aperture images and epipolar plane images.
- Score: 23.82780431526054
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Light field cameras have proven to be powerful tools for 3D
reconstruction and virtual reality applications. However, the limited
resolution of light field images poses significant difficulties for further
information display and extraction. In this paper, we introduce a novel
learning-based framework to improve the spatial resolution of light fields.
First, features from different dimensions are extracted in parallel and fused
together in our multi-dimension fusion architecture. These features are then
used to generate dynamic filters, which extract sub-pixel information from
micro-lens images while implicitly accounting for disparity information.
Finally, more high-frequency details learned in the residual branch are added
to the upsampled images and the final super-resolved light fields are obtained.
Experimental results show that the proposed method uses fewer parameters yet
achieves better performance than other state-of-the-art methods on various
datasets. Our reconstructed images also show sharp details and
distinct lines in both sub-aperture images and epipolar plane images.
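
As a concrete illustration of the pipeline sketched in the abstract (fused features driving per-pixel dynamic filters, plus a residual branch for high-frequency detail), here is a minimal PyTorch sketch. All module names, channel sizes, and the single-branch feature extractor are assumptions for illustration; the paper's actual multi-dimension fusion over sub-aperture, micro-lens, and epipolar-plane dimensions is simplified to a plain 2D encoder, so this is not the authors' architecture.

```python
# Toy sketch of dynamic-filter super-resolution (illustrative only; the layer
# choices and channel sizes below are assumptions, not the paper's design).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicFilterSR(nn.Module):
    """Toy pipeline: extract features, predict per-pixel filters, add a residual."""

    def __init__(self, in_ch=1, feat_ch=32, ksize=5, scale=2):
        super().__init__()
        self.ksize, self.scale = ksize, scale
        # Stand-in for the multi-dimension fusion branch (hypothetical layers).
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Predict one k*k filter per output sub-pixel position (scale^2 of them).
        self.filter_head = nn.Conv2d(feat_ch, scale**2 * ksize**2, 3, padding=1)
        # Residual branch that adds high-frequency detail to the upsampled result.
        self.residual_head = nn.Conv2d(feat_ch, scale**2 * in_ch, 3, padding=1)

    def forward(self, lr):                       # lr: (B, C, H, W)
        b, c, h, w = lr.shape
        feat = self.features(lr)
        # Per-pixel dynamic filters, softmax-normalized over each k*k window.
        filters = self.filter_head(feat)         # (B, s^2 * k^2, H, W)
        filters = filters.view(b, self.scale**2, self.ksize**2, h, w)
        filters = F.softmax(filters, dim=2)
        # Unfold k*k neighborhoods so each pixel is filtered with its own kernel.
        patches = F.unfold(lr, self.ksize, padding=self.ksize // 2)
        patches = patches.view(b, c, self.ksize**2, h, w)
        # Apply the predicted filters: one filtered value per sub-pixel slot.
        out = (patches.unsqueeze(1) * filters.unsqueeze(2)).sum(dim=3)
        out = out.view(b, self.scale**2 * c, h, w)
        # Add the learned residual, then rearrange sub-pixels onto the HR grid.
        out = out + self.residual_head(feat)
        return F.pixel_shuffle(out, self.scale)  # (B, C, s*H, s*W)

# Usage: super-resolve a single-channel low-resolution view by 2x.
sr = DynamicFilterSR()(torch.randn(1, 1, 32, 32))
print(sr.shape)  # torch.Size([1, 1, 64, 64])
```

The point the sketch tries to capture is that the filter weights are predicted per pixel from the image content, so each micro-lens neighborhood is resampled with its own kernel rather than a single shared convolution.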
Related papers
- RMAFF-PSN: A Residual Multi-Scale Attention Feature Fusion Photometric Stereo Network [37.759675702107586]
Predicting accurate maps of objects from two-dimensional images is challenging in regions with complex structure and spatially varying materials.
We propose a method that calibrates feature information from different resolution stages and scales of the image.
This approach preserves more physical information, such as the texture and geometry of the object, in complex regions.
arXiv Detail & Related papers (2024-04-11T14:05:37Z) - Deep 3D World Models for Multi-Image Super-Resolution Beyond Optical
Flow [27.31768206943397]
Multi-image super-resolution (MISR) makes it possible to increase the spatial resolution of a low-resolution (LR) acquisition by combining multiple images.
Our proposed model, called EpiMISR, moves away from optical flow and explicitly uses the epipolar geometry of the acquisition process.
arXiv Detail & Related papers (2024-01-30T12:55:49Z) - Searching a Compact Architecture for Robust Multi-Exposure Image Fusion [55.37210629454589]
Two major stumbling blocks hinder development: pixel misalignment and inefficient inference.
This study introduces an architecture search-based paradigm incorporating self-alignment and detail repletion modules for robust multi-exposure image fusion.
The proposed method outperforms various competitive schemes, achieving a noteworthy 3.19% improvement in PSNR for general scenarios and an impressive 23.5% enhancement in misaligned scenarios.
arXiv Detail & Related papers (2023-05-20T17:01:52Z) - DiFT: Differentiable Differential Feature Transform for Multi-View
Stereo [16.47413993267985]
We learn to transform the differential cues from a stack of images densely captured with a rotational motion into spatially discriminative and view-invariant per-pixel features at each view.
These low-level features can be directly fed to any existing multi-view stereo technique for enhanced 3D reconstruction.
arXiv Detail & Related papers (2022-03-16T07:12:46Z) - Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We leverage meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine.
arXiv Detail & Related papers (2021-11-24T13:58:20Z) - Neural Radiance Fields Approach to Deep Multi-View Photometric Stereo [103.08512487830669]
We present a modern solution to the multi-view photometric stereo problem (MVPS).
We procure the surface orientation using a photometric stereo (PS) image formation model and blend it with a multi-view neural radiance field representation to recover the object's surface geometry.
Our method performs neural rendering of multi-view images while utilizing surface normals estimated by a deep photometric stereo network.
arXiv Detail & Related papers (2021-10-11T20:20:03Z) - Learning Efficient Photometric Feature Transform for Multi-view Stereo [37.26574529243778]
We learn to convert the per-pixel photometric information at each view into spatially distinctive and view-invariant low-level features.
Our framework automatically adapts to and makes efficient use of the geometric information available in different forms of input data.
arXiv Detail & Related papers (2021-03-27T02:53:15Z) - Deep Burst Super-Resolution [165.90445859851448]
We propose a novel architecture for the burst super-resolution task.
Our network takes multiple noisy RAW images as input, and generates a denoised, super-resolved RGB image as output.
In order to enable training and evaluation on real-world data, we additionally introduce the BurstSR dataset.
arXiv Detail & Related papers (2021-01-26T18:57:21Z) - PlenoptiCam v1.0: A light-field imaging framework [8.467466998915018]
Light-field cameras play a vital role in rich 3-D information retrieval for narrow-range depth-sensing applications.
A key obstacle in composing light fields from exposures taken by a plenoptic camera is to computationally calibrate, align, and rearrange the four-dimensional image data.
Several approaches have been proposed to enhance overall image quality by tailoring pipelines to particular plenoptic cameras.
arXiv Detail & Related papers (2020-10-14T09:23:18Z) - Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z) - Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for image restoration tasks.
We present a novel architecture with the goal of maintaining spatially precise high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z)