Shadow Art Revisited: A Differentiable Rendering Based Approach
- URL: http://arxiv.org/abs/2107.14539v1
- Date: Fri, 30 Jul 2021 10:43:48 GMT
- Title: Shadow Art Revisited: A Differentiable Rendering Based Approach
- Authors: Kaustubh Sadekar, Ashish Tiwari, Shanmuganathan Raman
- Abstract summary: We revisit shadow art using differentiable rendering based optimization frameworks to obtain the 3D sculpture.
Our choice of using differentiable rendering for generating shadow art sculptures can be attributed to its ability to learn the underlying 3D geometry solely from image data.
We demonstrate the generation of 3D sculptures to cast shadows of faces, animated movie characters, and applicability of the framework to sketch-based 3D reconstruction of underlying shapes.
- Score: 26.910401398827123
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While recent learning based methods have been observed to be superior for
several vision-related applications, their potential in generating artistic
effects has not been explored much. One such interesting application is Shadow
Art - a unique form of sculptural art where 2D shadows cast by a 3D sculpture
produce artistic effects. In this work, we revisit shadow art using
differentiable rendering based optimization frameworks to obtain the 3D
sculpture from a set of shadow (binary) images and their corresponding
projection information. Specifically, we discuss shape optimization through
voxel as well as mesh-based differentiable renderers. Our choice of using
differentiable rendering for generating shadow art sculptures can be attributed
to its ability to learn the underlying 3D geometry solely from image data, thus
reducing the dependence on 3D ground truth. The qualitative and quantitative
results demonstrate the potential of the proposed framework in generating
complex 3D sculptures that go beyond those seen in contemporary art pieces
using just a set of shadow images as input. Further, we demonstrate the
generation of 3D sculptures that cast shadows of faces and animated movie
characters, and the applicability of the framework to sketch-based 3D
reconstruction of underlying shapes.
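The voxel-based shape optimization the abstract describes can be illustrated with a small sketch. The snippet below is not the authors' implementation; it is a minimal NumPy toy that captures the core idea: parameterize a voxel occupancy grid, render soft binary shadows along each viewing axis with a differentiable (transmittance-style) projection, and run gradient descent so the rendered shadows match the target silhouettes. The function names (`render_shadow`, `optimize`), the soft-or projection, the axis-aligned views, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

def softplus(x):
    # Keeps per-voxel densities nonnegative.
    return np.logaddexp(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def render_shadow(sigma, axis):
    # Differentiable "soft or" along the viewing axis, volume-rendering style:
    # a ray is shadowed with probability 1 - exp(-sum of densities along it).
    return 1.0 - np.exp(-sigma.sum(axis=axis))

def optimize(targets, n=16, lr=100.0, steps=2000):
    # targets: dict {axis: (n, n) binary shadow image}.
    # theta are unconstrained voxel logits; start mostly transparent so the
    # soft shadows are not saturated at initialization.
    theta = np.full((n, n, n), -4.0)
    for _ in range(steps):
        sigma = softplus(theta)
        grad = np.zeros_like(theta)
        for axis, t in targets.items():
            s = render_shadow(sigma, axis)       # (n, n) soft shadow
            dL_ds = 2.0 * (s - t) / t.size       # mean-squared-error loss
            # d s / d sigma_i = exp(-sum sigma) = 1 - s for each voxel on a ray
            dL_dsigma = np.expand_dims(dL_ds * (1.0 - s), axis)
            grad += dL_dsigma * sigmoid(theta)   # chain rule through softplus
        theta -= lr * grad                       # plain gradient descent
    return softplus(theta)
```

A usage example: give two mutually consistent silhouettes (say, a square for the view along axis 0 and an L-shape for the view along axis 1), call `optimize({0: A, 1: B})`, and threshold the rendered soft shadows at 0.5 to recover binary shadows that match the targets. The paper's actual frameworks additionally cover mesh-based differentiable renderers and arbitrary projection matrices, which this axis-aligned toy omits.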
Related papers
- Neural Shadow Art [10.23185004100584]
We introduce Neural Shadow Art, which leverages implicit function representations to expand the possibilities of shadow art.
Our method allows projections to match input binary images under various lighting directions and screen orientations.
Our approach proves valuable for industrial applications, demonstrating lower material usage and enhanced geometric smoothness.
arXiv Detail & Related papers (2024-11-28T14:03:30Z)
- ART3D: 3D Gaussian Splatting for Text-Guided Artistic Scenes Generation [18.699440994076003]
ART3D is a novel framework that combines diffusion models and 3D Gaussian splatting techniques.
By leveraging depth information and an initial artistic image, we generate a point cloud map.
We also propose a depth consistency module to enhance 3D scene consistency.
arXiv Detail & Related papers (2024-05-17T03:19:36Z)
- UltrAvatar: A Realistic Animatable 3D Avatar Diffusion Model with Authenticity Guided Textures [80.047065473698]
We propose a novel 3D avatar generation approach termed UltrAvatar with enhanced fidelity of geometry, and superior quality of physically based rendering (PBR) textures without unwanted lighting.
We demonstrate the effectiveness and robustness of the proposed method, outperforming the state-of-the-art methods by a large margin in the experiments.
arXiv Detail & Related papers (2024-01-20T01:55:17Z)
- En3D: An Enhanced Generative Model for Sculpting 3D Humans from 2D Synthetic Data [36.51674664590734]
We present En3D, an enhanced generative scheme for high-quality 3D human avatars.
Unlike previous works that rely on scarce 3D datasets or limited 2D collections with imbalanced viewing angles and pose priors, our approach aims to develop a zero-shot 3D generative scheme capable of producing 3D humans.
arXiv Detail & Related papers (2024-01-02T12:06:31Z)
- ARTIC3D: Learning Robust Articulated 3D Shapes from Noisy Web Image Collections [71.46546520120162]
Estimating 3D articulated shapes like animal bodies from monocular images is inherently challenging.
We propose ARTIC3D, a self-supervised framework to reconstruct per-instance 3D shapes from a sparse image collection in-the-wild.
We produce realistic animations by fine-tuning the rendered shape and texture under rigid part transformations.
arXiv Detail & Related papers (2023-06-07T17:47:50Z)
- Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization [91.52882218901627]
We propose a novel method for constructing implicit 3D morphable face models that are both generalizable and intuitive for editing.
Our method improves upon photo-realism, geometry, and expression accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-05-04T17:58:40Z)
- Evolving Three Dimension (3D) Abstract Art: Fitting Concepts by Language [2.7336516660166295]
We propose to explore computational creativity in making abstract 3D art by bridging evolution strategies (ES) and 3D rendering through customizable parameterization of scenes.
Our approach is capable of placing semi-transparent triangles in 3D scenes that, when viewed from specified angles, render into films that look like artists' specification expressed in natural language.
arXiv Detail & Related papers (2023-04-24T07:47:48Z)
- Cross-Modal 3D Shape Generation and Manipulation [62.50628361920725]
We propose a generic multi-modal generative model that couples the 2D modalities and implicit 3D representations through shared latent spaces.
We evaluate our framework on two representative 2D modalities of grayscale line sketches and rendered color images.
arXiv Detail & Related papers (2022-07-24T19:22:57Z)
- A Shading-Guided Generative Implicit Model for Shape-Accurate 3D-Aware Image Synthesis [163.96778522283967]
We propose a shading-guided generative implicit model that is able to learn a starkly improved shape representation.
An accurate 3D shape should also yield a realistic rendering under different lighting conditions.
Our experiments on multiple datasets show that the proposed approach achieves photorealistic 3D-aware image synthesis.
arXiv Detail & Related papers (2021-10-29T10:53:12Z)
- 3D Photography using Context-aware Layered Depth Inpainting [50.66235795163143]
We propose a method for converting a single RGB-D input image into a 3D photo.
A learning-based inpainting model synthesizes new local color-and-depth content into the occluded region.
The resulting 3D photos can be efficiently rendered with motion parallax.
arXiv Detail & Related papers (2020-04-09T17:59:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.