Photo-to-Shape Material Transfer for Diverse Structures
- URL: http://arxiv.org/abs/2205.04018v1
- Date: Mon, 9 May 2022 03:37:01 GMT
- Title: Photo-to-Shape Material Transfer for Diverse Structures
- Authors: Ruizhen Hu, Xiangyu Su, Xiangkai Chen, Oliver Van Kaick, Hui Huang
- Abstract summary: We introduce a method for assigning photorealistic relightable materials to 3D shapes in an automatic manner.
Our method combines an image translation neural network with a material assignment neural network.
We demonstrate that our method allows us to assign materials to shapes so that their appearances better resemble the input exemplars.
- Score: 15.816608726698986
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a method for assigning photorealistic relightable materials to
3D shapes in an automatic manner. Our method takes as input a photo exemplar of
a real object and a 3D object with segmentation, and uses the exemplar to guide
the assignment of materials to the parts of the shape, so that the appearance
of the resulting shape is as similar as possible to the exemplar. To accomplish
this goal, our method combines an image translation neural network with a
material assignment neural network. The image translation network translates
the color from the exemplar to a projection of the 3D shape and the part
segmentation from the projection to the exemplar. Then, the material prediction
network assigns materials from a collection of realistic materials to the
projected parts, based on the translated images and perceptual similarity of
the materials. One key idea of our method is to use the translation network to
establish a correspondence between the exemplar and shape projection, which
allows us to transfer materials between objects with diverse structures.
Another key idea of our method is to use the two pairs of (color, segmentation)
images provided by the image translation to guide the material assignment,
which enables us to ensure consistency in the assignment. We demonstrate
that our method allows us to assign materials to shapes so that their
appearances better resemble the input exemplars, improving the quality of the
results over the state-of-the-art method, and allowing us to automatically
create thousands of shapes with high-quality photorealistic materials. Code and
data for this paper are available at https://github.com/XiangyuSu611/TMT.
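
Below is a minimal PyTorch sketch of the two-stage pipeline the abstract describes: an image translation step that produces two (color, segmentation) pairs, followed by a material prediction step that picks an entry from a material collection. All module names, channel layouts, and the per-image (rather than per-part) prediction are illustrative assumptions, not the authors' released code (see the repository linked above for that).

```python
# Illustrative sketch only: the real TMT pipeline (see the linked repository)
# uses its own architectures, per-part prediction, and a perceptual-similarity
# term over the material collection, all of which are simplified away here.
import torch
import torch.nn as nn


class TranslationNet(nn.Module):
    """Stand-in for the image translation network that moves color from the
    exemplar onto the shape projection and the part segmentation from the
    projection onto the exemplar."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class MaterialPredictor(nn.Module):
    """Stand-in for the material assignment network: scores each entry of a
    fixed collection of K relightable materials given the two
    (color, segmentation) image pairs produced by the translation step."""

    def __init__(self, num_materials: int, in_ch: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_materials),
        )

    def forward(self, pairs: torch.Tensor) -> torch.Tensor:
        return self.encoder(pairs)  # (B, K) scores over the material set


def transfer_materials(exemplar_rgb, proj_rgb, proj_seg,
                       color_net, seg_net, mat_net):
    """Hypothetical end-to-end flow for one exemplar/projection pair:
    1. translate the exemplar's color onto the shape projection,
    2. translate the projection's part segmentation onto the exemplar,
    3. predict a material index from the two (color, segmentation) pairs
       (one index per image here for brevity; the paper assigns per part)."""
    colored_proj = color_net(torch.cat([exemplar_rgb, proj_rgb], dim=1))
    exemplar_seg = seg_net(torch.cat([exemplar_rgb, proj_seg], dim=1))
    pairs = torch.cat([exemplar_rgb, exemplar_seg,
                       colored_proj, proj_seg], dim=1)
    return mat_net(pairs).argmax(dim=-1)  # index into the material collection


# Example with random tensors and a hypothetical collection of 100 materials.
K = 100
color_net = TranslationNet(in_ch=6, out_ch=3)   # exemplar RGB + projection RGB
seg_net = TranslationNet(in_ch=4, out_ch=1)     # exemplar RGB + projection seg
mat_net = MaterialPredictor(num_materials=K, in_ch=8)
exemplar = torch.rand(1, 3, 256, 256)
proj_rgb = torch.rand(1, 3, 256, 256)
proj_seg = torch.randint(0, 5, (1, 1, 256, 256)).float()
print(transfer_materials(exemplar, proj_rgb, proj_seg,
                         color_net, seg_net, mat_net))
```

In the paper itself the candidate materials are also ranked by perceptual similarity to the translated colors, which the plain classifier head above does not model.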
Related papers
- MaPa: Text-driven Photorealistic Material Painting for 3D Shapes [80.66880375862628]
This paper aims to generate materials for 3D meshes from text descriptions.
Unlike existing methods that synthesize texture maps, we propose to generate segment-wise procedural material graphs.
Our framework supports high-quality rendering and provides substantial flexibility in editing.
arXiv Detail & Related papers (2024-04-26T17:54:38Z)
- MaterialSeg3D: Segmenting Dense Materials from 2D Priors for 3D Assets [63.284244910964475]
We propose a 3D asset material generation framework that infers the underlying material from 2D semantic priors.
Based on such a prior model, we devise a mechanism to parse materials in 3D space.
arXiv Detail & Related papers (2024-04-22T07:00:17Z)
- Materialistic: Selecting Similar Materials in Images [30.85562156542794]
We present a method capable of selecting the regions of a photograph exhibiting the same material as an artist-chosen area.
Our proposed approach is robust to shading, specular highlights, and cast shadows, enabling selection in real images.
We demonstrate our model on a set of applications, including material editing, in-video selection, and retrieval of object photographs with similar materials.
arXiv Detail & Related papers (2023-05-22T17:50:48Z)
- Neural Photometry-guided Visual Attribute Transfer [4.630419389180576]
We present a deep learning-based method for propagating visual material attributes to larger samples of the same or similar materials.
For training, we leverage images of the material taken under multiple illuminations and a dedicated data augmentation policy.
Our model relies on a supervised image-to-image translation framework and is agnostic to the transferred domain.
arXiv Detail & Related papers (2021-12-05T09:22:28Z)
- Object Wake-up: 3-D Object Reconstruction, Animation, and in-situ Rendering from a Single Image [58.69732754597448]
Given a picture of a chair, could we extract the 3-D shape of the chair, animate its plausible articulations and motions, and render it in-situ in its original image space?
We devise an automated approach to extract and manipulate articulated objects in single images.
arXiv Detail & Related papers (2021-08-05T16:20:12Z)
- ShaRF: Shape-conditioned Radiance Fields from a Single View [54.39347002226309]
We present a method for estimating neural scene representations of objects given only a single image.
The core of our method is the estimation of a geometric scaffold for the object.
We demonstrate in several experiments the effectiveness of our approach in both synthetic and real images.
arXiv Detail & Related papers (2021-02-17T16:40:28Z)
- Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition [67.9464567157846]
We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z)
- Self-Supervised 2D Image to 3D Shape Translation with Disentangled Representations [92.89846887298852]
We present a framework to translate between 2D image views and 3D object shapes.
We propose SIST, a Self-supervised Image to Shape Translation framework.
arXiv Detail & Related papers (2020-03-22T22:44:02Z)