AutoPoly: Predicting a Polygonal Mesh Construction Sequence from a Silhouette Image
- URL: http://arxiv.org/abs/2203.15233v1
- Date: Tue, 29 Mar 2022 04:48:47 GMT
- Title: AutoPoly: Predicting a Polygonal Mesh Construction Sequence from a Silhouette Image
- Authors: I-Chao Shen, Yu Ju Chen, Oliver van Kaick, Takeo Igarashi
- Abstract summary: AutoPoly is a hybrid method that generates a polygonal mesh construction sequence from a silhouette image.
Our method can alter topology, whereas the recently proposed inverse shape estimation methods using differentiable rendering can only handle a fixed topology.
- Score: 17.915067368873018
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Polygonal modeling is a core task of content creation in Computer Graphics.
The complexity of modeling, in terms of the number and order of operations and the
time required to execute them, makes it challenging to learn and execute.
Our goal is to automatically derive a polygonal modeling sequence for a given
target. Then, one can learn polygonal modeling by observing the resulting
sequence and also expedite the modeling process by starting from the
auto-generated result. As a starting point for building a system for 3D
modeling in the future, we tackle the 2D shape modeling problem and present
AutoPoly, a hybrid method that generates a polygonal mesh construction sequence
from a silhouette image. The key idea of our method is the use of the Monte
Carlo tree search (MCTS) algorithm and differentiable rendering to separately
predict sequential topological actions and geometric actions. Our hybrid method
can alter topology, whereas the recently proposed inverse shape estimation
methods using differentiable rendering can only handle a fixed topology. Our
novel reward function encourages MCTS to select topological actions that lead
to a simpler shape without self-intersection. We further designed two deep
learning-based methods to improve the expansion and simulation steps in the
MCTS search process: an $n$-step "future action prediction" network (nFAP-Net)
to generate candidates for potential topological actions, and a shape warping
network (WarpNet) to predict polygonal shapes given the predicted rendered
images and topological actions. We demonstrate the efficiency of our method on
2D polygonal shapes of multiple man-made object categories.
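The search loop the abstract describes can be sketched as generic Monte Carlo tree search over topological polygon edits. This is a minimal illustrative sketch, not the paper's implementation: the action set (split an edge, delete a vertex), the reward (which here only penalizes self-intersection and complexity, standing in for the paper's silhouette-fitting reward), and the one-step rollout are all assumptions, and the nFAP-Net/WarpNet guidance is omitted.

```python
import math
import random

def _cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_intersect(p, q, r, s):
    """True if segments pq and rs properly cross."""
    d1, d2 = _cross(r, s, p), _cross(r, s, q)
    d3, d4 = _cross(p, q, r), _cross(p, q, s)
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

def self_intersects(poly):
    """Check every pair of non-adjacent polygon edges for a crossing."""
    n = len(poly)
    for i in range(n):
        for j in range(i + 1, n):
            if (i + 1) % n == j or (j + 1) % n == i:
                continue  # adjacent edges share an endpoint
            if segments_intersect(poly[i], poly[(i + 1) % n],
                                  poly[j], poly[(j + 1) % n]):
                return True
    return False

def reward(poly):
    """Illustrative stand-in for the paper's reward: prefer simple shapes
    without self-intersection (the real reward also scores silhouette fit)."""
    if self_intersects(poly):
        return 0.0
    return 1.0 / (1.0 + 0.1 * len(poly))

def actions(poly):
    """Toy topological actions: split an edge at its midpoint,
    or (when more than a triangle) delete a vertex."""
    out, n = [], len(poly)
    for i in range(n):
        j = (i + 1) % n
        mid = ((poly[i][0] + poly[j][0]) / 2, (poly[i][1] + poly[j][1]) / 2)
        out.append(poly[:i + 1] + [mid] + poly[i + 1:])
    if n > 3:
        for i in range(n):
            out.append(poly[:i] + poly[i + 1:])
    return out

class Node:
    def __init__(self, poly, parent=None):
        self.poly, self.parent = poly, parent
        self.children, self.visits, self.value = [], 0, 0.0

def mcts(root_poly, iters=200, c=1.4, seed=0):
    rng = random.Random(seed)
    root = Node(list(root_poly))
    for _ in range(iters):
        node = root
        # selection: descend by UCT while the node is fully expanded
        while node.children and len(node.children) == len(actions(node.poly)):
            node = max(node.children, key=lambda ch: ch.value / ch.visits
                       + c * math.sqrt(math.log(node.visits) / ch.visits))
        # expansion: try one action not explored yet
        seen = {tuple(map(tuple, ch.poly)) for ch in node.children}
        untried = [a for a in actions(node.poly)
                   if tuple(map(tuple, a)) not in seen]
        if untried:
            child = Node(rng.choice(untried), node)
            node.children.append(child)
            node = child
        # simulation: shallow rollout (one random step), then evaluate
        r = reward(rng.choice(actions(node.poly)))
        # backpropagation
        while node is not None:
            node.visits += 1
            node.value += r
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).poly
```

In this sketch the reward of zero for any self-intersecting polygon plays the role of the paper's constraint that selected topological actions lead to a simpler shape without self-intersection; a full system would replace the random one-step rollout with learned action proposals and shape prediction.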
Related papers
- Robust 3D Tracking with Quality-Aware Shape Completion [67.9748164949519]
We propose a synthetic target representation composed of dense and complete point clouds that precisely depict the target shape via shape completion, for robust 3D tracking.
Specifically, we design a voxelized 3D tracking framework with shape completion, in which we propose a quality-aware shape completion mechanism to alleviate the adverse effect of noisy historical predictions.
arXiv Detail & Related papers (2023-12-17T04:50:24Z)
- Automatic Parameterization for Aerodynamic Shape Optimization via Deep Geometric Learning [60.69217130006758]
We propose two deep learning models that fully automate shape parameterization for aerodynamic shape optimization.
Both models are optimized to parameterize via deep geometric learning to embed human prior knowledge into learned geometric patterns.
We perform shape optimization experiments on 2D airfoils and discuss the applicable scenarios for the two models.
arXiv Detail & Related papers (2023-05-03T13:45:40Z)
- Learning Versatile 3D Shape Generation with Improved AR Models [91.87115744375052]
Auto-regressive (AR) models have achieved impressive results in 2D image generation by modeling joint distributions in the grid space.
We propose the Improved Auto-regressive Model (ImAM) for 3D shape generation, which applies discrete representation learning based on a latent vector instead of volumetric grids.
arXiv Detail & Related papers (2023-03-26T12:03:18Z)
- Geo-SIC: Learning Deformable Geometric Shapes in Deep Image Classifiers [8.781861951759948]
This paper presents Geo-SIC, the first deep learning model to learn deformable shapes in a deformation space for an improved performance of image classification.
We introduce a newly designed framework that simultaneously derives features from both image and latent shape spaces with large intra-class variations.
We develop a boosted classification network, equipped with an unsupervised learning of geometric shape representations.
arXiv Detail & Related papers (2022-10-25T01:55:17Z)
- Neural Template: Topology-aware Reconstruction and Disentangled Generation of 3D Meshes [52.038346313823524]
This paper introduces a novel framework called DTNet for 3D mesh reconstruction and generation via Disentangled Topology.
Our method is able to produce high-quality meshes, particularly with diverse topologies, as compared with the state-of-the-art methods.
arXiv Detail & Related papers (2022-06-10T08:32:57Z)
- Pixel2Mesh++: 3D Mesh Generation and Refinement from Multi-View Images [82.32776379815712]
We study the problem of shape generation in 3D mesh representation from a small number of color images with or without camera poses.
We further improve the shape quality by leveraging cross-view information with a graph convolution network.
Our model is robust to the quality of the initial mesh and the error of camera pose, and can be combined with a differentiable function for test-time optimization.
arXiv Detail & Related papers (2022-04-21T03:42:31Z)
- Learnable Triangulation for Deep Learning-based 3D Reconstruction of Objects of Arbitrary Topology from Single RGB Images [12.693545159861857]
We propose a novel deep reinforcement learning-based approach for 3D object reconstruction from monocular images.
The proposed method outperforms the state-of-the-art in terms of visual quality, reconstruction accuracy, and computational time.
arXiv Detail & Related papers (2021-09-24T09:44:22Z)
- Automated LoD-2 Model Reconstruction from Very-High-Resolution Satellite-derived Digital Surface Model and Orthophoto [1.2691047660244335]
We propose a model-driven method that reconstructs LoD-2 building models following a "decomposition-optimization-fitting" paradigm.
Our proposed method addresses several technical caveats of existing methods, yielding practically high-quality results.
arXiv Detail & Related papers (2021-09-08T19:03:09Z)
- Primal-Dual Mesh Convolutional Neural Networks [62.165239866312334]
We propose a primal-dual framework, drawn from the graph-neural-network literature, for triangle meshes.
Our method takes features of both the edges and faces of a 3D mesh as input and dynamically aggregates them.
We provide theoretical insights into our approach using tools from the mesh-simplification literature.
arXiv Detail & Related papers (2020-10-23T14:49:02Z)
- Modeling 3D Shapes by Reinforcement Learning [33.343268605720176]
We propose a two-step neural framework based on RL to learn 3D modeling policies.
To effectively train the modeling agents, we introduce a novel training algorithm that combines policy, imitation learning and reinforcement learning.
Our experiments show that the agents can learn good policies to produce regular and structure-aware mesh models.
arXiv Detail & Related papers (2020-03-27T13:05:39Z)
- PolyGen: An Autoregressive Generative Model of 3D Meshes [22.860421649320287]
We present an approach which models the mesh directly using a Transformer-based architecture.
Our model can condition on a range of inputs, including object classes, voxels, and images.
We show that the model is capable of producing high-quality, usable meshes, and establish log-likelihood benchmarks for the mesh-modelling task.
arXiv Detail & Related papers (2020-02-23T17:16:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.