Spatial-Angular Attention Network for Light Field Reconstruction
- URL: http://arxiv.org/abs/2007.02252v2
- Date: Thu, 14 Oct 2021 01:35:24 GMT
- Title: Spatial-Angular Attention Network for Light Field Reconstruction
- Authors: Gaochang Wu, Yingqian Wang, Yebin Liu, Lu Fang, Tianyou Chai
- Abstract summary: We propose a spatial-angular attention network to perceive correspondences in the light field non-locally.
Motivated by the non-local attention mechanism, a spatial-angular attention module is introduced to compute the responses from all the positions in the epipolar plane for each pixel in the light field.
We then propose a multi-scale reconstruction structure to efficiently implement the non-local attention at a low spatial scale.
- Score: 64.27343801968226
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Typical learning-based light field reconstruction methods rely on
building a large receptive field by deepening the network in order to capture
correspondences between input views. In this paper, we propose a
spatial-angular attention network that perceives correspondences in the light
field non-locally and reconstructs a high-angular-resolution light field in an
end-to-end manner. Motivated by the non-local attention mechanism, a
spatial-angular attention module tailored to the high-dimensional light field
data is introduced to compute the responses from all positions in the epipolar
plane for each pixel in the light field, generating an attention map that
captures correspondences along the angular dimension. We then propose a
multi-scale reconstruction structure that efficiently implements the non-local
attention at a low spatial scale while preserving the high-frequency
components at the higher spatial scales. Extensive experiments demonstrate the
superior performance of the proposed spatial-angular attention network for
reconstructing sparsely-sampled light fields with non-Lambertian effects.
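To make the mechanism concrete, below is a minimal PyTorch sketch (not the authors' code) of non-local attention computed over the epipolar plane of a light field, wrapped in a toy multi-scale structure that applies the attention at a reduced spatial scale and fuses the result back at full resolution. The tensor layout, layer sizes, residual connection, and fusion step are all illustrative assumptions.

```python
# Minimal sketch: non-local attention over each epipolar plane of a light
# field stack, plus a toy multi-scale wrapper. Layer choices are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EpipolarAttention(nn.Module):
    """Attend over all (angular, horizontal) positions of each epipolar plane.

    Input:  light field stack of shape (B, C, U, H, W), where U is the angular
            dimension along one baseline and (H, W) are spatial dimensions.
    Output: same shape; each pixel aggregates all U x W positions that share
            its image row, i.e. its epipolar plane.
    """

    def __init__(self, channels: int, reduction: int = 2):
        super().__init__()
        inner = max(channels // reduction, 1)
        self.query = nn.Conv3d(channels, inner, kernel_size=1)
        self.key = nn.Conv3d(channels, inner, kernel_size=1)
        self.value = nn.Conv3d(channels, channels, kernel_size=1)
        self.out = nn.Conv3d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, u, h, w = x.shape
        q, k, v = self.query(x), self.key(x), self.value(x)

        # Treat each horizontal EPI (fixed row h) as a set of U*W tokens.
        def to_tokens(t):
            return t.permute(0, 3, 2, 4, 1).reshape(b * h, u * w, -1)

        q, k, v = to_tokens(q), to_tokens(k), to_tokens(v)

        # Attention map over all positions in the epipolar plane.
        attn = torch.softmax(q @ k.transpose(1, 2) / q.shape[-1] ** 0.5, dim=-1)
        out = attn @ v  # (B*H, U*W, C)

        out = out.reshape(b, h, u, w, c).permute(0, 4, 2, 1, 3)
        return x + self.out(out)  # residual connection (assumed)


class MultiScaleReconstruction(nn.Module):
    """Toy multi-scale wrapper: run the non-local attention at a reduced
    spatial scale, then upsample and fuse with the full-resolution features so
    high-frequency detail is kept (a rough analogue of the paper's
    multi-scale reconstruction structure)."""

    def __init__(self, channels: int, scale: int = 2):
        super().__init__()
        self.scale = scale
        self.attention = EpipolarAttention(channels)
        self.fuse = nn.Conv3d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, u, h, w = x.shape
        low = F.interpolate(x.reshape(b, c * u, h, w),
                            scale_factor=1 / self.scale, mode="bilinear",
                            align_corners=False).reshape(b, c, u, h // self.scale,
                                                         w // self.scale)
        low = self.attention(low)
        up = F.interpolate(low.reshape(b, c * u, h // self.scale, w // self.scale),
                           size=(h, w), mode="bilinear",
                           align_corners=False).reshape(b, c, u, h, w)
        return self.fuse(torch.cat([x, up], dim=1))


if __name__ == "__main__":
    lf = torch.randn(1, 8, 3, 32, 48)  # 3 input views along one baseline
    print(MultiScaleReconstruction(8)(lf).shape)  # torch.Size([1, 8, 3, 32, 48])
```

Computing the attention map over all U x W positions of each epipolar plane is what makes the receptive field non-local along the disparity direction, rather than relying on network depth; applying it at a reduced spatial scale keeps the (U·W)×(U·W) attention maps affordable, which is the motivation for the multi-scale structure.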
Related papers
- Learning based Deep Disentangling Light Field Reconstruction and Disparity Estimation Application [1.5603081929496316]
We propose a Deep Disentangling Mechanism, which inherits the principle of the light field disentangling mechanism and augments it with a more advanced network structure.
We design a light-field reconstruction network (i.e., DDASR) based on the Deep Disentangling Mechanism and achieve state-of-the-art (SOTA) performance in the experiments.
arXiv Detail & Related papers (2023-11-14T12:48:17Z)
- SAWU-Net: Spatial Attention Weighted Unmixing Network for Hyperspectral Images [91.20864037082863]
We propose a spatial attention weighted unmixing network, dubbed as SAWU-Net, which learns a spatial attention network and a weighted unmixing network in an end-to-end manner.
In particular, we design a spatial attention module, which consists of a pixel attention block and a window attention block to efficiently model pixel-based spectral information and patch-based spatial information.
Experimental results on real and synthetic datasets demonstrate the superior accuracy and effectiveness of SAWU-Net.
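The following is a rough, assumption-heavy sketch of a pixel + window attention block of the kind the SAWU-Net summary above describes: a per-pixel weight derived from the spectral vector and a per-window weight derived from pooled patch features. The block structure, pooling size, and the way the two maps are combined are guesses for illustration, not the SAWU-Net architecture.

```python
# Illustrative pixel + window attention for a hyperspectral image; the exact
# SAWU-Net block design is not specified here, so this is only a sketch.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PixelWindowAttention(nn.Module):
    def __init__(self, bands: int, window: int = 8):
        super().__init__()
        self.window = window
        # Pixel attention: per-pixel weight from the spectral vector (1x1 conv).
        self.pixel = nn.Sequential(nn.Conv2d(bands, 1, kernel_size=1), nn.Sigmoid())
        # Window attention: per-patch weight from average-pooled patch features.
        self.window_attn = nn.Sequential(nn.Conv2d(bands, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, bands, H, W) hyperspectral image.
        pixel_w = self.pixel(x)                                   # (B, 1, H, W)
        pooled = F.avg_pool2d(x, self.window)                     # patch statistics
        window_w = F.interpolate(self.window_attn(pooled),
                                 size=x.shape[-2:], mode="nearest")
        return x * pixel_w * window_w                             # attention-weighted input


if __name__ == "__main__":
    hsi = torch.randn(2, 156, 64, 64)  # e.g., 156 spectral bands
    print(PixelWindowAttention(156)(hsi).shape)  # torch.Size([2, 156, 64, 64])
```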
arXiv Detail & Related papers (2023-04-22T05:22:50Z)
- Geo-NI: Geometry-aware Neural Interpolation for Light Field Rendering [57.775678643512435]
We present a Geometry-aware Neural Interpolation (Geo-NI) framework for light field rendering.
By combining the strengths of neural interpolation (NI) and depth image-based rendering (DIBR), the proposed Geo-NI is able to render views with large disparity.
arXiv Detail & Related papers (2022-06-20T12:25:34Z)
- RRNet: Relational Reasoning Network with Parallel Multi-scale Attention for Salient Object Detection in Optical Remote Sensing Images [82.1679766706423]
Salient object detection (SOD) for optical remote sensing images (RSIs) aims at locating and extracting visually distinctive objects/regions from the optical RSIs.
We propose a relational reasoning network with parallel multi-scale attention for SOD in optical RSIs.
Our proposed RRNet outperforms the existing state-of-the-art SOD competitors both qualitatively and quantitatively.
arXiv Detail & Related papers (2021-10-27T07:18:32Z)
- Efficient Light Field Reconstruction via Spatio-Angular Dense Network [14.568586050271357]
We propose an end-to-end Spatio-Angular Dense Network (SADenseNet) for light field reconstruction.
We show that the proposed SADenseNet achieves state-of-the-art performance with significantly reduced memory and computation costs.
Results show that the reconstructed light field images are sharp with correct details and can serve as pre-processing to improve the accuracy of measurement related applications.
arXiv Detail & Related papers (2021-08-08T13:50:51Z)
- Light Field Reconstruction Using Convolutional Network on EPI and Extended Applications [78.63280020581662]
A novel convolutional neural network (CNN)-based framework is developed for light field reconstruction from a sparse set of views.
We demonstrate the high performance and robustness of the proposed framework compared with state-of-the-art algorithms.
arXiv Detail & Related papers (2021-03-24T08:16:32Z)
- Spatial-spectral FFPNet: Attention-Based Pyramid Network for Segmentation and Classification of Remote Sensing Images [12.320585790097415]
In this study, we develop an attention-based pyramid network for segmentation and classification of remote sensing datasets.
Experiments conducted on ISPRS Vaihingen and ISPRS Potsdam high-resolution datasets demonstrate the competitive segmentation accuracy achieved by the proposed heavy-weight spatial FFPNet.
arXiv Detail & Related papers (2020-08-20T04:55:34Z)
- High-Order Residual Network for Light Field Super-Resolution [39.93400777363467]
Plenoptic cameras usually sacrifice the spatial resolution of their sub-aperture images (SAIs) to acquire information from different viewpoints.
We propose a novel high-order residual network to learn the geometric features hierarchically from the light field for reconstruction.
Our approach enables high-quality reconstruction even in challenging regions and outperforms state-of-the-art single image or LF reconstruction methods with both quantitative measurements and visual evaluation.
arXiv Detail & Related papers (2020-03-29T18:06:05Z)
- Learning Light Field Angular Super-Resolution via a Geometry-Aware Network [101.59693839475783]
We propose an end-to-end learning-based approach aiming at angularly super-resolving a sparsely-sampled light field with a large baseline.
Our method improves the PSNR over the second-best method by up to 2 dB on average, while reducing the execution time by a factor of 48.
arXiv Detail & Related papers (2020-02-26T02:36:57Z)