Learning Inter- and Intra-frame Representations for Non-Lambertian
Photometric Stereo
- URL: http://arxiv.org/abs/2012.13720v2
- Date: Wed, 30 Dec 2020 14:08:57 GMT
- Title: Learning Inter- and Intra-frame Representations for Non-Lambertian
Photometric Stereo
- Authors: Yanlong Cao, Binjie Ding, Zewei He, Jiangxin Yang, Jingxi Chen,
Yanpeng Cao and Xin Li
- Abstract summary: We build a two-stage Convolutional Neural Network (CNN) architecture to construct inter- and intra-frame representations.
We experimentally investigate numerous network design alternatives for identifying the optimal scheme to deploy inter-frame and intra-frame feature extraction modules.
- Score: 14.5172791293107
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we build a two-stage Convolutional Neural Network (CNN)
architecture to construct inter- and intra-frame representations based on an
arbitrary number of images captured under different light directions,
performing accurate normal estimation of non-Lambertian objects. We
experimentally investigate numerous network design alternatives to identify
the optimal scheme for deploying inter-frame and intra-frame feature
extraction modules in the photometric stereo problem. Moreover, we propose to
utilize the easily obtained object mask to eliminate adverse interference from
invalid background regions in intra-frame spatial convolutions, thus
effectively improving the accuracy of normal estimation for surfaces made of
dark materials or affected by cast shadows. Experimental results demonstrate
that the proposed masked two-stage photometric stereo CNN model (MT-PS-CNN)
performs favorably against state-of-the-art photometric stereo techniques in
terms of both accuracy and efficiency. In addition, the proposed method
predicts accurate and rich surface normal details for non-Lambertian objects
of complex geometry and performs stably given inputs captured under both
sparse and dense lighting distributions.
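The mask-guided intra-frame convolution described in the abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical PyTorch example (not the authors' released MT-PS-CNN code); it assumes the object mask is a binary tensor aligned with the feature maps and simply zeroes out background activations before and after a standard 2D convolution so that invalid background regions do not contribute to spatial aggregation.

```python
import torch
import torch.nn as nn

class MaskedSpatialConv(nn.Module):
    """Illustrative masked 2D convolution: a simple way to suppress background
    interference in intra-frame spatial convolutions (an assumption for
    illustration, not the exact MT-PS-CNN layer)."""

    def __init__(self, in_channels: int, out_channels: int, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                              padding=kernel_size // 2, bias=False)

    def forward(self, feat: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) intra-frame features; mask: (B, 1, H, W) binary object mask
        feat = feat * mask            # drop invalid background pixels before aggregation
        out = self.conv(feat)         # spatial convolution over the masked features
        return out * mask             # keep responses only inside the object region


# Usage example with random data
x = torch.randn(2, 16, 64, 64)                 # intra-frame feature maps
m = (torch.rand(2, 1, 64, 64) > 0.3).float()   # hypothetical binary object mask
y = MaskedSpatialConv(16, 32)(x, m)
print(y.shape)  # torch.Size([2, 32, 64, 64])
```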
Related papers
- RMAFF-PSN: A Residual Multi-Scale Attention Feature Fusion Photometric Stereo Network [37.759675702107586]
Predicting accurate normal maps of objects from two-dimensional images is challenging in regions of complex structure and spatial material variations.
We propose a method that calibrates feature information from different resolution stages and scales of the image.
This approach preserves more physical information, such as texture and geometry of the object in complex regions.
arXiv Detail & Related papers (2024-04-11T14:05:37Z) - A Neural Height-Map Approach for the Binocular Photometric Stereo
Problem [36.404880059833324]
The binocular photometric stereo (PS) framework has the same acquisition speed as single-view PS, yet significantly improves the quality of the estimated geometry.
Our method achieves the state-of-the-art performance on the DiLiGenT-MV dataset adapted to binocular stereo setup as well as a new binocular photometric stereo dataset - LUCES-ST.
arXiv Detail & Related papers (2023-11-10T09:45:53Z) - Uncertainty-Aware Deep Multi-View Photometric Stereo [100.97116470055273]
Photometric stereo (PS) is excellent at recovering high-frequency surface details, whereas multi-view stereo (MVS) can help remove the low-frequency distortion due to PS and retain the global shape.
This paper proposes an approach that can effectively utilize such complementary strengths of PS and MVS.
We estimate per-pixel surface normals and depth using an uncertainty-aware deep-PS network and deep-MVS network, respectively.
arXiv Detail & Related papers (2022-02-26T05:45:52Z) - RRNet: Relational Reasoning Network with Parallel Multi-scale Attention
for Salient Object Detection in Optical Remote Sensing Images [82.1679766706423]
Salient object detection (SOD) for optical remote sensing images (RSIs) aims at locating and extracting visually distinctive objects/regions from the optical RSIs.
We propose a relational reasoning network with parallel multi-scale attention for SOD in optical RSIs.
Our proposed RRNet outperforms the existing state-of-the-art SOD competitors both qualitatively and quantitatively.
arXiv Detail & Related papers (2021-10-27T07:18:32Z) - Neural Radiance Fields Approach to Deep Multi-View Photometric Stereo [103.08512487830669]
We present a modern solution to the multi-view photometric stereo problem (MVPS).
We procure the surface orientation using a photometric stereo (PS) image formation model and blend it with a multi-view neural radiance field representation to recover the object's surface geometry.
Our method performs neural rendering of multi-view images while utilizing surface normals estimated by a deep photometric stereo network.
arXiv Detail & Related papers (2021-10-11T20:20:03Z) - Leveraging Spatial and Photometric Context for Calibrated Non-Lambertian
Photometric Stereo [61.6260594326246]
We introduce an efficient fully-convolutional architecture that can leverage both spatial and photometric context simultaneously.
Using separable 4D convolutions and 2D heat-maps reduces the model size and makes it more efficient (a minimal sketch of the separable-convolution idea is given after this list).
arXiv Detail & Related papers (2021-03-22T18:06:58Z) - Uncalibrated Neural Inverse Rendering for Photometric Stereo of General
Surfaces [103.08512487830669]
This paper presents an uncalibrated deep neural network framework for the photometric stereo problem.
Existing neural network-based methods either require exact light directions or ground-truth surface normals of the object or both.
We propose an uncalibrated neural inverse rendering approach to this problem.
arXiv Detail & Related papers (2020-12-12T10:33:08Z) - Deep Photometric Stereo for Non-Lambertian Surfaces [89.05501463107673]
We introduce a fully convolutional deep network for calibrated photometric stereo, which we call PS-FCN.
PS-FCN learns the mapping from reflectance observations to surface normal, and is able to handle surfaces with general and unknown isotropic reflectance.
To deal with the uncalibrated scenario where light directions are unknown, we introduce a new convolutional network, named LCNet, to estimate light directions from input images.
arXiv Detail & Related papers (2020-07-26T15:20:53Z)
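As a rough illustration of the separable 4D convolutions mentioned in the entry on "Leveraging Spatial and Photometric Context for Calibrated Non-Lambertian Photometric Stereo", the sketch below factorizes a 4D convolution over a (spatial x observation-map) volume into two cheaper 2D convolutions: one over the observation-map (light-direction) axes and one over the image plane. This is a hypothetical PyTorch reconstruction of the general idea, not that paper's implementation; the tensor layout and channel sizes are assumptions.

```python
import torch
import torch.nn as nn

class SeparableConv4d(nn.Module):
    """Hypothetical separable 4D convolution: a full HxWxUxV kernel is replaced
    by one 2D convolution over the observation-map axes (U, V) followed by one
    2D convolution over the spatial axes (H, W)."""

    def __init__(self, in_ch: int, mid_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        self.conv_uv = nn.Conv2d(in_ch, mid_ch, k, padding=k // 2)   # photometric context
        self.conv_hw = nn.Conv2d(mid_ch, out_ch, k, padding=k // 2)  # spatial context

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W, U, V) -- per-pixel observation maps over light directions
        b, c, h, w, u, v = x.shape
        # 1) convolve over (U, V): fold the spatial grid into the batch dimension
        x = x.permute(0, 2, 3, 1, 4, 5).reshape(b * h * w, c, u, v)
        x = self.conv_uv(x)
        m = x.shape[1]
        # 2) convolve over (H, W): fold the observation-map grid into the batch dimension
        x = x.reshape(b, h, w, m, u, v).permute(0, 4, 5, 3, 1, 2).reshape(b * u * v, m, h, w)
        x = self.conv_hw(x)
        o = x.shape[1]
        # restore the (B, C_out, H, W, U, V) layout
        return x.reshape(b, u, v, o, h, w).permute(0, 3, 4, 5, 1, 2)


# Usage example with a tiny random volume
vol = torch.randn(1, 4, 8, 8, 6, 6)
out = SeparableConv4d(4, 8, 8)(vol)
print(out.shape)  # torch.Size([1, 8, 8, 8, 6, 6])
```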