Deep Geometric Texture Synthesis
- URL: http://arxiv.org/abs/2007.00074v1
- Date: Tue, 30 Jun 2020 19:36:38 GMT
- Title: Deep Geometric Texture Synthesis
- Authors: Amir Hertz, Rana Hanocka, Raja Giryes, Daniel Cohen-Or
- Abstract summary: We propose a novel framework for synthesizing geometric textures.
It learns texture statistics from local neighborhoods of a single reference 3D model.
Our network displaces mesh vertices in any direction, enabling synthesis of geometric textures.
- Score: 83.9404865744028
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, deep generative adversarial networks for image generation have
advanced rapidly; yet, only a small amount of research has focused on
generative models for irregular structures, particularly meshes. Nonetheless,
mesh generation and synthesis remains a fundamental topic in computer graphics.
In this work, we propose a novel framework for synthesizing geometric textures.
It learns geometric texture statistics from local neighborhoods (i.e., local
triangular patches) of a single reference 3D model. It learns deep features on
the faces of the input triangulation, which are used to subdivide the mesh and generate
offsets across multiple scales, without parameterization of the reference or
target mesh. Our network displaces mesh vertices in any direction (i.e., in the
normal and tangential direction), enabling synthesis of geometric textures,
which cannot be expressed by a simple 2D displacement map. Learning and
synthesizing on local geometric patches enables a genus-oblivious framework,
facilitating texture transfer between shapes of different genus.
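The abstract's central claim is that the network displaces each vertex in an arbitrary 3D direction, not just along the surface normal. A minimal sketch of why that matters: any displacement vector at a vertex splits into a normal component (the part a scalar 2D displacement map can encode) and a tangential component (which it cannot). The function and values below are illustrative only, not the paper's implementation.

```python
import numpy as np

def decompose_displacement(displacement, vertex_normal):
    """Split a 3D vertex displacement into its normal and tangential parts.

    The normal part is what a scalar displacement/height map could express;
    the tangential part requires full 3D offsets, as the paper argues.
    """
    n = vertex_normal / np.linalg.norm(vertex_normal)
    normal_part = np.dot(displacement, n) * n
    tangential_part = displacement - normal_part
    return normal_part, tangential_part

# Example: a vertex whose normal points along +z, displaced diagonally.
d = np.array([0.3, 0.0, 0.5])
n = np.array([0.0, 0.0, 1.0])
normal_part, tangential_part = decompose_displacement(d, n)
# normal_part     -> [0.0, 0.0, 0.5]
# tangential_part -> [0.3, 0.0, 0.0]
```

A texture whose tangential part is nonzero (e.g. overhangs or swirls on the surface) therefore cannot be reproduced by any height-map-style method, which is the motivation for predicting unconstrained per-vertex offsets.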
Related papers
- 3D-TexSeg: Unsupervised Segmentation of 3D Texture using Mutual Transformer Learning [11.510823733292519]
This paper presents an original framework for the unsupervised segmentation of the 3D texture on the mesh manifold.
We devise a mutual transformer-based system comprising a label generator and a cleaner.
Experiments on three publicly available datasets with diverse texture patterns demonstrate that the proposed framework outperforms standard and SOTA unsupervised techniques.
arXiv Detail & Related papers (2023-11-17T17:13:14Z)
- SAGA: Spectral Adversarial Geometric Attack on 3D Meshes [13.84270434088512]
A triangular mesh is one of the most popular 3D data representations.
We propose a novel framework for a geometric adversarial attack on a 3D mesh autoencoder.
arXiv Detail & Related papers (2022-11-24T19:29:04Z)
- Zero-shot point cloud segmentation by transferring geometric primitives [68.18710039217336]
We investigate zero-shot point cloud semantic segmentation, where the network is trained on seen objects and able to segment unseen objects.
We propose a novel framework to learn the geometric primitives shared in seen and unseen categories' objects and employ a fine-grained alignment between language and the learned geometric primitives.
arXiv Detail & Related papers (2022-10-18T15:06:54Z)
- Single-view 3D Mesh Reconstruction for Seen and Unseen Categories [69.29406107513621]
Single-view 3D Mesh Reconstruction is a fundamental computer vision task that aims at recovering 3D shapes from single-view RGB images.
This paper tackles Single-view 3D Mesh Reconstruction, to study the model generalization on unseen categories.
We propose an end-to-end two-stage network, GenMesh, to break the category boundaries in reconstruction.
arXiv Detail & Related papers (2022-08-04T14:13:35Z)
- Hybrid Approach for 3D Head Reconstruction: Using Neural Networks and Visual Geometry [3.970492757288025]
We present a novel method for reconstructing 3D heads from a single or multiple image(s) using a hybrid approach based on deep learning and geometric techniques.
We propose an encoder-decoder network based on the U-net architecture and trained on synthetic data only.
arXiv Detail & Related papers (2021-04-28T11:31:35Z)
- Hard Example Generation by Texture Synthesis for Cross-domain Shape Similarity Learning [97.56893524594703]
Image-based 3D shape retrieval (IBSR) aims to find the corresponding 3D shape of a given 2D image from a large 3D shape database.
Metric learning with some adaptation techniques seems to be a natural solution to shape similarity learning.
We develop a geometry-focused multi-view metric learning framework empowered by texture synthesis.
arXiv Detail & Related papers (2020-10-23T08:52:00Z)
- DSG-Net: Learning Disentangled Structure and Geometry for 3D Shape Generation [98.96086261213578]
We introduce DSG-Net, a deep neural network that learns a disentangled structured and geometric mesh representation for 3D shapes.
This supports a range of novel shape generation applications with disentangled control, such as varying the structure while keeping the geometry unchanged, and vice versa.
Our method not only supports controllable generation applications but also produces high-quality synthesized shapes, outperforming state-of-the-art methods.
arXiv Detail & Related papers (2020-08-12T17:06:51Z)
- Novel-View Human Action Synthesis [39.72702883597454]
We present a novel 3D reasoning approach to synthesize the target viewpoint.
We first estimate the 3D mesh of the target body and transfer the rough textures from the 2D images to the mesh.
We produce a semi-dense textured mesh by propagating the transferred textures both locally, within local geodesic neighborhoods, and globally.
arXiv Detail & Related papers (2020-07-06T15:11:51Z)
- PUGeo-Net: A Geometry-centric Network for 3D Point Cloud Upsampling [103.09504572409449]
We propose a novel deep neural network based method, called PUGeo-Net, to generate uniform dense point clouds.
Thanks to its geometry-centric nature, PUGeo-Net works well for both CAD models with sharp features and scanned models with rich geometric details.
arXiv Detail & Related papers (2020-02-24T14:13:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.