DeepShadow: Neural Shape from Shadow
- URL: http://arxiv.org/abs/2203.15065v1
- Date: Mon, 28 Mar 2022 20:11:15 GMT
- Title: DeepShadow: Neural Shape from Shadow
- Authors: Asaf Karnieli, Ohad Fried, Yacov Hel-Or
- Abstract summary: DeepShadow is a one-shot method for recovering the depth map and surface normals from photometric stereo shadow maps.
We show that self and cast shadows not only do not disturb 3D reconstruction but can be used on their own as a strong learning signal.
Our method is the first to reconstruct 3D shape-from-shadows using neural networks.
- Score: 12.283891012446647
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper presents DeepShadow, a one-shot method for recovering the depth
map and surface normals from photometric stereo shadow maps. Previous works
that try to recover the surface normals from photometric stereo images treat
cast shadows as a disturbance. We show that self and cast shadows not only
do not disturb 3D reconstruction, but can be used alone, as a strong learning
signal, to recover the depth map and surface normals. We demonstrate that 3D
reconstruction from shadows can even outperform shape-from-shading in certain
cases. To the best of our knowledge, our method is the first to reconstruct 3D
shape-from-shadows using neural networks. The method does not require any
pre-training or expensive labeled data, and is optimized during inference time.
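The core recipe is amenable to a compact sketch: treat the depth map as a free variable, render a soft shadow map from it for every known light direction, and minimize the discrepancy with the observed shadow maps at inference time. The sketch below is an illustrative reconstruction of that loop, not the authors' code; the grid size, the nearest-pixel ray march, and the sigmoid-based soft shadow test are all assumptions.

```python
# Shape-from-shadow as test-time optimization (illustrative sketch, PyTorch).
import torch

H = W = 64
n_lights = 10
shadow_obs = (torch.rand(n_lights, H, W) > 0.5).float()  # placeholder observations
light_dirs = torch.randn(n_lights, 3)
light_dirs = light_dirs / light_dirs.norm(dim=1, keepdim=True)  # known directions

depth = torch.zeros(H, W, requires_grad=True)  # height field to recover
opt = torch.optim.Adam([depth], lr=1e-2)

def soft_shadow(depth, light, n_steps=32, sharpness=50.0):
    """Differentiable shadow test: march from each pixel toward the light;
    a pixel is shadowed if the height field rises above the ray."""
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    occlusion = torch.zeros(H, W)
    for t in range(1, n_steps):
        sx = (xs + t * light[0]).round().clamp(0, W - 1).long()
        sy = (ys + t * light[1]).round().clamp(0, H - 1).long()
        ray_h = depth + t * light[2]               # height of the light ray
        diff = depth[sy, sx] - ray_h               # > 0 means the ray is blocked
        occlusion = torch.maximum(occlusion, torch.sigmoid(sharpness * diff))
    return occlusion                               # ~1 shadowed, ~0 lit

for step in range(200):
    loss = sum(
        torch.nn.functional.binary_cross_entropy(
            soft_shadow(depth, light_dirs[i]).clamp(1e-4, 1 - 1e-4), shadow_obs[i])
        for i in range(n_lights))
    opt.zero_grad(); loss.backward(); opt.step()
```

Surface normals can then be read off the optimized height field by finite differences; the paper's actual shadow renderer and parameterization differ in detail.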
Related papers
- A Perceptual Shape Loss for Monocular 3D Face Reconstruction [13.527078921914985]
We propose a new loss function for monocular face capture inspired by how humans would perceive the quality of a 3D face reconstruction.
Our loss is implemented as a discriminator-style neural network that takes an input face image and a shaded render of the geometry estimate.
We show how our new perceptual shape loss can be combined with traditional energy terms for monocular 3D face optimization and deep neural network regression.
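A minimal sketch of the discriminator-style setup (the architecture and channel-stacking below are assumptions, not the paper's exact design): the critic sees the photo and the shaded render together and emits realism scores that can be maximized as one loss term alongside the traditional energies.

```python
# Discriminator-style perceptual shape loss (illustrative sketch, PyTorch).
import torch
import torch.nn as nn

class ShapeCritic(nn.Module):
    """Scores how plausible a shaded geometry render looks for a given photo."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),  # RGB + render
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),  # patch-wise realism scores
        )

    def forward(self, image, shaded_render):
        return self.net(torch.cat([image, shaded_render], dim=1))

critic = ShapeCritic()                                    # would be trained, then frozen
image = torch.rand(1, 3, 128, 128)                        # placeholder photo
render = torch.rand(1, 3, 128, 128, requires_grad=True)   # differentiable render

perceptual_loss = -critic(image, render).mean()           # maximize critic score
perceptual_loss.backward()                                # gradients flow to the geometry
```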
arXiv Detail & Related papers (2023-10-30T14:39:11Z) - S$^3$-NeRF: Neural Reflectance Field from Shading and Shadow under a
Single Viewpoint [22.42916940712357]
Our method learns a neural reflectance field to represent the 3D geometry and BRDFs of a scene.
Our method is capable of recovering 3D geometry, including both visible and invisible parts, of a scene from single-view images.
It supports applications like novel-view synthesis and relighting.
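A hedged sketch of the representation, under the usual coordinate-MLP convention (the layer sizes and BRDF parameterization are assumptions):

```python
# Neural reflectance field: point -> geometry + BRDF (illustrative sketch).
import torch
import torch.nn as nn

class ReflectanceField(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.sdf = nn.Linear(hidden, 1)    # geometry as a signed distance
        self.brdf = nn.Linear(hidden, 4)   # e.g. RGB albedo + roughness

    def forward(self, xyz):
        h = self.trunk(xyz)
        return self.sdf(h), self.brdf(h)

field = ReflectanceField()
pts = torch.rand(1024, 3)                  # samples along single-view camera rays
sdf, brdf = field(pts)                     # supervised via shading and shadow cues
```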
arXiv Detail & Related papers (2022-10-17T11:01:52Z) - Controllable Shadow Generation Using Pixel Height Maps [58.59256060452418]
Physics-based shadow rendering methods require 3D geometries, which are not always available.
Deep learning-based shadow synthesis methods learn a mapping from the light information to an object's shadow without explicitly modeling the shadow geometry.
We introduce pixel height, a novel geometry representation that encodes the correlations between objects, ground, and camera pose.
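A toy illustration of why a per-pixel height is convenient for shadow synthesis (the paper's renderer is learned and more sophisticated; the hard-projection rule below is a simplification): an object pixel at height h shadows the ground point offset by h times a light-dependent 2D vector.

```python
# Hard shadow cast from a pixel-height map (illustrative sketch, NumPy).
import numpy as np

H = W = 128
height = np.zeros((H, W))
height[40:80, 60:70] = np.linspace(40, 1, 40)[:, None]  # a simple leaning "pole"

light_dx, light_dy = 0.6, 0.8   # ground-plane offset per unit of height

shadow = np.zeros((H, W), dtype=bool)
for y, x in zip(*np.nonzero(height > 0)):
    sx = int(round(x + height[y, x] * light_dx))        # where this pixel's
    sy = int(round(y + height[y, x] * light_dy))        # shadow lands
    if 0 <= sx < W and 0 <= sy < H:
        shadow[sy, sx] = True
```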
arXiv Detail & Related papers (2022-07-12T08:29:51Z) - SNeS: Learning Probably Symmetric Neural Surfaces from Incomplete Data [77.53134858717728]
We build on the strengths of recent advances in neural reconstruction and rendering such as Neural Radiance Fields (NeRF).
We apply a soft symmetry constraint to the 3D geometry and material properties, having factored appearance into lighting, albedo colour and reflectivity.
We show that it can reconstruct unobserved regions with high fidelity and render high-quality novel view images.
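A minimal sketch of a soft symmetry constraint (the symmetry plane, field, and weighting are illustrative assumptions): evaluate the field at each sample and at its mirror image, and penalize the difference as an extra loss term rather than hard-tying weights, so image evidence can override the prior where the scene is genuinely asymmetric.

```python
# Soft symmetry loss for a coordinate field (illustrative sketch, PyTorch).
import torch
import torch.nn as nn

field = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 4))  # e.g. density + albedo

pts = torch.rand(2048, 3) * 2 - 1                 # samples in [-1, 1]^3
mirrored = pts * torch.tensor([-1.0, 1.0, 1.0])   # reflect across the x = 0 plane

sym_loss = ((field(pts) - field(mirrored)) ** 2).mean()  # added to the photometric loss
```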
arXiv Detail & Related papers (2022-06-13T17:37:50Z) - OutCast: Outdoor Single-image Relighting with Cast Shadows [19.354412901507175]
We propose a relighting method for outdoor images.
Our method mainly focuses on predicting cast shadows in arbitrary novel lighting directions from a single image.
Our proposed method achieves, for the first time, state-of-the-art relighting results with only a single image as input.
arXiv Detail & Related papers (2022-04-20T09:24:14Z) - Towards Learning Neural Representations from Shadows [11.60149896896201]
We present a method that learns neural scene representations from only shadows present in the scene.
Our framework is highly generalizable and can work alongside existing 3D reconstruction techniques.
arXiv Detail & Related papers (2022-03-29T23:13:41Z) - Neural Reflectance for Shape Recovery with Shadow Handling [88.67603644930466]
This paper aims at recovering the shape of a scene with unknown, non-Lambertian, and possibly spatially-varying surface materials.
We propose a coordinate-based deep MLP (multilayer perceptron) to parameterize both the unknown 3D shape and the unknown reflectance at every surface point.
This network is able to leverage the observed photometric variance and shadows on the surface, and recover both surface shape and general non-Lambertian reflectance.
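As a simplified illustration of the coordinate-based setup (a Lambertian stand-in; the paper handles general non-Lambertian reflectance and recovers shape rather than predicting normals directly), one can fit a per-pixel MLP to multi-light observations while excluding shadowed pixels from the photometric loss:

```python
# Coordinate MLP for per-pixel normals + albedo, fit to multi-light images
# with shadow masking (illustrative sketch, PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F

H = W = 32
coords = torch.stack(torch.meshgrid(
    torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij"), -1).reshape(-1, 2)

mlp = nn.Sequential(nn.Linear(2, 128), nn.ReLU(),
                    nn.Linear(128, 128), nn.ReLU(),
                    nn.Linear(128, 6))              # 3 normal + 3 albedo per pixel

lights = F.normalize(torch.randn(8, 3), dim=1)      # known light directions
images = torch.rand(8, H * W, 3)                    # observed intensities (placeholder)
lit = torch.rand(8, H * W) > 0.2                    # shadow masks: True where lit

opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)
for step in range(100):
    out = mlp(coords)
    n = F.normalize(out[:, :3], dim=1)              # unit surface normals
    albedo = out[:, 3:].sigmoid()                   # Lambertian reflectance
    shading = (n @ lights.T).clamp(min=0).T         # (8, H*W) n·l per light
    pred = shading.unsqueeze(-1) * albedo           # rendered intensities
    loss = ((pred - images) ** 2)[lit].mean()       # shadowed pixels excluded
    opt.zero_grad(); loss.backward(); opt.step()
```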
arXiv Detail & Related papers (2022-03-24T07:57:20Z) - Fast-GANFIT: Generative Adversarial Network for High Fidelity 3D Face
Reconstruction [76.1612334630256]
We harness the power of Generative Adversarial Networks (GANs) and Deep Convolutional Neural Networks (DCNNs) to reconstruct the facial texture and shape from single images.
We demonstrate excellent results in photorealistic and identity-preserving 3D face reconstruction and achieve, for the first time, facial texture reconstruction with high-frequency details.
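A toy sketch of the GAN-as-prior idea (the generator, target, and loss below are stand-ins; the actual system regresses and refines shape and texture parameters with identity and photometric terms): keep a pretrained texture generator fixed and optimize its latent code so the reconstruction matches the photo.

```python
# GAN-prior fitting for texture reconstruction (illustrative sketch, PyTorch).
import torch
import torch.nn as nn

texture_gan = nn.Sequential(nn.Linear(128, 3 * 32 * 32))  # stand-in for a pretrained GAN
target = torch.rand(3, 32, 32)                            # placeholder photo

z = torch.zeros(128, requires_grad=True)                  # latent texture code
opt = torch.optim.Adam([z], lr=1e-2)
for step in range(100):
    texture = texture_gan(z).view(3, 32, 32).sigmoid()    # stand-in for render(shape, texture)
    loss = ((texture - target) ** 2).mean()               # photometric term only, here
    opt.zero_grad(); loss.backward(); opt.step()
```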
arXiv Detail & Related papers (2021-05-16T16:35:44Z) - Learning to Recover 3D Scene Shape from a Single Image [98.20106822614392]
We propose a two-stage framework that first predicts depth up to an unknown scale and shift from a single monocular image.
We then use 3D point cloud encoders to predict the missing depth shift and focal length that allow us to recover a realistic 3D scene shape.
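A hedged sketch of the two-stage composition (both networks are stand-in modules with assumed shapes): stage one predicts depth up to scale and shift from the image; stage two unprojects it to a point cloud with a guessed focal length and regresses the shift and focal correction that make the 3D shape plausible.

```python
# Two-stage monocular scene shape recovery (illustrative sketch, PyTorch).
import torch
import torch.nn as nn

depth_net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(8, 1, 3, padding=1), nn.Softplus())  # stand-in

class PointNetHead(nn.Module):
    """Tiny PointNet-style regressor: per-point MLP + max pool -> (shift, focal corr.)."""
    def __init__(self):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 64))
        self.head = nn.Linear(64, 2)

    def forward(self, pts):                         # pts: (N, 3)
        return self.head(self.point_mlp(pts).max(dim=0).values)

def unproject(depth, focal):
    H, W = depth.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    x = (xs - W / 2) * depth / focal
    y = (ys - H / 2) * depth / focal
    return torch.stack([x, y, depth], dim=-1).reshape(-1, 3)

img = torch.rand(1, 3, 64, 64)
rel_depth = depth_net(img)[0, 0]                    # stage 1: depth up to scale/shift
pts = unproject(rel_depth, focal=60.0)              # unproject with an initial focal guess
shift, f_corr = PointNetHead()(pts)                 # stage 2: corrections from 3D shape
metric_depth = rel_depth + shift                    # recovered scene depth
focal = 60.0 * torch.exp(f_corr)                    # recovered focal length
```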
arXiv Detail & Related papers (2020-12-17T02:35:13Z) - Depth Completion Using a View-constrained Deep Prior [73.21559000917554]
Recent work has shown that the structure of convolutional neural networks (CNNs) induces a strong prior that favors natural images.
This prior, known as a deep image prior (DIP), is an effective regularizer in inverse problems such as image denoising and inpainting.
We extend the concept of the DIP to depth images. Given a color image and a noisy, incomplete target depth map, we reconstruct a restored depth map, using the CNN's structure as a prior.
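A minimal deep-image-prior sketch for depth (architecture and hyperparameters are illustrative; the paper's view constraint is omitted): fit a CNN, fed a fixed noise input, to the observed depth values only, and let the network structure fill in the holes.

```python
# Deep image prior for depth completion (illustrative sketch, PyTorch).
import torch
import torch.nn as nn

target = torch.rand(1, 1, 64, 64)                    # noisy depth (placeholder)
mask = (torch.rand(1, 1, 64, 64) > 0.7).float()      # 1 where depth was observed

net = nn.Sequential(
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 1, 3, padding=1))
z = torch.randn(1, 32, 64, 64)                       # fixed random input

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(500):
    pred = net(z)
    loss = ((pred - target) ** 2 * mask).sum() / mask.sum()  # loss only where observed
    opt.zero_grad(); loss.backward(); opt.step()

completed_depth = net(z).detach()                    # restored, hole-free depth map
```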
arXiv Detail & Related papers (2020-01-21T21:56:01Z)