High-Order Residual Network for Light Field Super-Resolution
- URL: http://arxiv.org/abs/2003.13094v1
- Date: Sun, 29 Mar 2020 18:06:05 GMT
- Title: High-Order Residual Network for Light Field Super-Resolution
- Authors: Nan Meng, Xiaofei Wu, Jianzhuang Liu, Edmund Y. Lam
- Abstract summary: Plenoptic cameras usually sacrifice the spatial resolution of their sub-aperture images (SAIs) to acquire information from different viewpoints.
We propose a novel high-order residual network to learn the geometric features hierarchically from the light field for reconstruction.
Our approach enables high-quality reconstruction even in challenging regions and outperforms state-of-the-art single-image and LF reconstruction methods in both quantitative measurements and visual evaluation.
- Score: 39.93400777363467
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Plenoptic cameras usually sacrifice the spatial resolution of their
sub-aperture images (SAIs) to acquire geometry information from different
viewpoints. Several methods have been proposed to mitigate this spatio-angular
trade-off, but they seldom make efficient use of the structural properties of
the light field (LF) data. In this paper, we propose a novel high-order
residual network to learn the geometric features hierarchically from the LF
for reconstruction. An important component in the proposed network is the
high-order residual block (HRB), which learns the local geometric features by
considering the information from all input views. After obtaining the local
features learned from each HRB, our model extracts the representative
geometric features for spatio-angular upsampling through global residual
learning. Additionally, a refinement network follows to further enhance the
spatial details by minimizing a perceptual loss. Compared with previous work,
our model is tailored to the rich structure inherent in the LF, and therefore
can reduce the artifacts near non-Lambertian and occlusion regions.
Experimental results show that our approach enables high-quality
reconstruction even in challenging regions and outperforms state-of-the-art
single-image and LF reconstruction methods in both quantitative measurements
and visual evaluation.
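The abstract describes the architecture only at a high level, so the fragment below is a rough PyTorch sketch of the two ideas it names: a residual block that mixes information across all input views (local residual learning) and a long skip connection over a stack of such blocks (global residual learning). The class names, channel width, and kernel sizes are assumptions for illustration, not the paper's actual design.

```python
# Hedged sketch of the two residual ideas named in the abstract; all
# layer shapes and names are assumptions, not the published architecture.
import torch
import torch.nn as nn

class HighOrderResidualBlock(nn.Module):
    """Mixes features across views via 3D convolution over (view, h, w),
    so each output view draws on information from all input views."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, n_views, height, width)
        return x + self.body(x)            # local residual connection

class HRBStack(nn.Module):
    """Several blocks plus a long skip: global residual learning."""
    def __init__(self, n_blocks: int = 4, channels: int = 64):
        super().__init__()
        self.blocks = nn.ModuleList(
            [HighOrderResidualBlock(channels) for _ in range(n_blocks)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = x
        for block in self.blocks:
            feat = block(feat)
        return x + feat                    # global residual connection
```

The refinement network mentioned in the abstract would sit after such a stack and be trained against a perceptual loss (features of a pretrained network, e.g. VGG) rather than a purely pixel-wise error.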
Related papers
- NeuSurf: On-Surface Priors for Neural Surface Reconstruction from Sparse Input Views [41.03837477483364]
We propose a novel sparse view reconstruction framework that leverages on-surface priors to achieve highly faithful surface reconstruction.
Specifically, we design several constraints on global geometry alignment and local geometry refinement for jointly optimizing coarse shapes and fine details.
The experimental results with DTU and BlendedMVS datasets in two prevalent sparse settings demonstrate significant improvements over the state-of-the-art methods.
arXiv Detail & Related papers (2023-12-21T16:04:45Z)
- Anti-Aliased Neural Implicit Surfaces with Encoding Level of Detail [54.03399077258403]
We present LoD-NeuS, an efficient neural representation for high-frequency geometry detail recovery and anti-aliased novel view rendering.
Our representation aggregates space features from a multi-convolved featurization within a conical frustum along a ray.
arXiv Detail & Related papers (2023-09-19T05:44:00Z)
- SST-ReversibleNet: Reversible-prior-based Spectral-Spatial Transformer for Efficient Hyperspectral Image Reconstruction [15.233185887461826]
A novel reversible-prior-based framework is proposed.
SST-ReversibleNet significantly outperforms state-of-the-art methods on simulated and real HSI datasets.
arXiv Detail & Related papers (2023-05-06T14:01:02Z)
- GARF: Geometry-Aware Generalized Neural Radiance Field [47.76524984421343]
We propose Geometry-Aware Generalized Neural Radiance Field (GARF) with a geometry-aware dynamic sampling (GADS) strategy.
Our framework infers unseen scenes at both the pixel scale and the geometry scale with only a few input images.
Experiments on indoor and outdoor datasets show that GARF reduces samples by more than 25%, while improving rendering quality and 3D geometry estimation.
arXiv Detail & Related papers (2022-12-05T14:00:59Z)
- MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction [72.05649682685197]
State-of-the-art neural implicit methods allow for high-quality reconstructions of simple scenes from many input views, but their quality degrades on larger, more complex scenes and sparser viewpoints.
This is caused primarily by the inherent ambiguity in the RGB reconstruction loss, which does not provide enough constraints.
Motivated by recent advances in the area of monocular geometry prediction, we explore the utility these cues provide for improving neural implicit surface reconstruction.
arXiv Detail & Related papers (2022-06-01T17:58:15Z)
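To make the MonoSDF summary concrete, the sketch below shows one way monocular depth and normal predictions can regularize the under-constrained RGB loss. The weights and function names are assumptions; the actual method also aligns the unknown scale and shift of monocular depth, which is omitted here for brevity.

```python
# Illustrative only: RGB reconstruction loss augmented with monocular
# geometric cues. Weights and names are assumptions, not MonoSDF's code.
import torch
import torch.nn.functional as F

def total_loss(pred_rgb, gt_rgb, pred_depth, mono_depth,
               pred_normal, mono_normal, w_depth=0.1, w_normal=0.05):
    # Photometric term alone is ambiguous: many surfaces explain the same
    # pixels, so depth/normal terms add the missing geometric constraints.
    l_rgb = F.l1_loss(pred_rgb, gt_rgb)
    # NOTE: real monocular depth is defined only up to scale and shift,
    # which must be aligned first (omitted in this sketch).
    l_depth = F.l1_loss(pred_depth, mono_depth)
    # Angular agreement between rendered and predicted normals.
    l_normal = (1.0 - F.cosine_similarity(pred_normal, mono_normal, dim=-1)).mean()
    return l_rgb + w_depth * l_depth + w_normal * l_normal
```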
- Geo-Neus: Geometry-Consistent Neural Implicit Surfaces Learning for Multi-view Reconstruction [41.43563122590449]
We propose geometry-consistent neural implicit surfaces learning for multi-view reconstruction.
Our proposed method achieves high-quality surface reconstruction in both complex thin structures and large smooth regions.
arXiv Detail & Related papers (2022-05-31T14:52:07Z)
- Light Field Reconstruction Using Convolutional Network on EPI and Extended Applications [78.63280020581662]
A novel convolutional neural network (CNN)-based framework is developed for light field reconstruction from a sparse set of views.
We demonstrate the high performance and robustness of the proposed framework compared with state-of-the-art algorithms.
arXiv Detail & Related papers (2021-03-24T08:16:32Z)
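For context on the EPI-based entry above: an epipolar-plane image is a 2D slice of the 4D light field in which each scene point traces a line whose slope encodes its depth, which is what makes EPIs a natural input for CNN-based reconstruction. A minimal NumPy sketch, assuming a (u, v, y, x) axis layout for the light field (an assumption, not the paper's convention):

```python
# Minimal sketch: slicing a horizontal epipolar-plane image (EPI) out of
# a 4D light field stored as lf[u, v, y, x]; the layout is an assumption.
import numpy as np

def horizontal_epi(lf: np.ndarray, v: int, y: int) -> np.ndarray:
    """Fix the vertical view index v and image row y; the result varies
    over (u, x), i.e. horizontal viewpoint vs. horizontal position."""
    return lf[:, v, y, :]

lf = np.zeros((9, 9, 512, 512), dtype=np.float32)  # toy 9x9-view light field
epi = horizontal_epi(lf, v=4, y=256)               # shape (9, 512)
```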
- Light Field Spatial Super-resolution via Deep Combinatorial Geometry Embedding and Structural Consistency Regularization [99.96632216070718]
Light field (LF) images acquired by hand-held devices usually suffer from low spatial resolution.
The high-dimensional structure and complex geometry of LF images make the problem more challenging than traditional single-image SR.
We propose a novel learning-based LF framework, in which each view of an LF image is first individually super-resolved.
arXiv Detail & Related papers (2020-04-05T14:39:57Z)
- Spatial-Spectral Residual Network for Hyperspectral Image Super-Resolution [82.1739023587565]
We propose a novel spectral-spatial residual network for hyperspectral image super-resolution (SSRNet).
Our method can effectively explore spatial-spectral information by using 3D convolution instead of 2D convolution, which enables the network to better extract potential information.
In each unit, we employ spatial and spectral separable 3D convolutions to extract spatial and spectral information, which not only reduces memory usage and computational cost, but also makes the network easier to train (see the sketch below).
arXiv Detail & Related papers (2020-01-14T03:34:55Z)
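The spatial/spectral separable 3D convolution mentioned in the SSRNet summary can be illustrated as factoring a full 3x3x3 kernel into a 1x3x3 spatial convolution followed by a 3x1x1 spectral one, which needs 9 + 3 = 12 weights per channel pair instead of 27. The PyTorch sketch below uses assumed names and channel counts:

```python
# Sketch of a spatial/spectral separable 3D convolution in the spirit of
# the SSRNet summary; names and channel counts are illustrative only.
import torch
import torch.nn as nn

class SeparableConv3d(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        # Spatial part: convolve within each spectral band.
        self.spatial = nn.Conv3d(channels, channels,
                                 kernel_size=(1, 3, 3), padding=(0, 1, 1))
        # Spectral part: convolve across bands at each pixel.
        self.spectral = nn.Conv3d(channels, channels,
                                  kernel_size=(3, 1, 1), padding=(1, 0, 0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, bands, height, width)
        return self.spectral(self.spatial(x))

x = torch.randn(1, 64, 31, 32, 32)  # toy 31-band hyperspectral feature cube
y = SeparableConv3d()(x)            # output keeps the same shape as x
```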