Modeling 3D Shapes by Reinforcement Learning
- URL: http://arxiv.org/abs/2003.12397v3
- Date: Thu, 17 Sep 2020 04:45:27 GMT
- Title: Modeling 3D Shapes by Reinforcement Learning
- Authors: Cheng Lin, Tingxiang Fan, Wenping Wang, Matthias Nießner
- Abstract summary: We propose a two-step neural framework based on RL to learn 3D modeling policies.
To effectively train the modeling agents, we introduce a novel training algorithm that combines heuristic policy, imitation learning and reinforcement learning.
Our experiments show that the agents can learn good policies to produce regular and structure-aware mesh models.
- Score: 33.343268605720176
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We explore how to enable machines to model 3D shapes like human modelers
using deep reinforcement learning (RL). In 3D modeling software like Maya, a
modeler usually creates a mesh model in two steps: (1) approximating the shape
using a set of primitives; (2) editing the meshes of the primitives to create
detailed geometry. Inspired by such artist-based modeling, we propose a
two-step neural framework based on RL to learn 3D modeling policies. By taking
actions and collecting rewards in an interactive environment, the agents first
learn to parse a target shape into primitives and then to edit the geometry. To
effectively train the modeling agents, we introduce a novel training algorithm
that combines heuristic policy, imitation learning and reinforcement learning.
Our experiments show that the agents can learn good policies to produce regular
and structure-aware mesh models, which demonstrates the feasibility and
effectiveness of the proposed RL framework.
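The abstract describes two coupled ideas: a two-step decomposition (parse the shape into primitives, then edit the mesh geometry) and a training recipe that warm-starts each agent by imitating a heuristic policy before fine-tuning it with reinforcement learning. The sketch below illustrates that training recipe only. It is not the authors' code: the names Agent, imitation_step, and reinforce_step are hypothetical, and plain REINFORCE stands in for whichever RL update the paper actually uses.

```python
# A minimal sketch of the training scheme described above, NOT the authors'
# implementation: behavior cloning against a heuristic policy, followed by
# policy-gradient fine-tuning. All names and sizes are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Agent(nn.Module):
    """Policy network mapping a shape observation to discrete action logits
    (e.g. primitive placements for one agent, mesh edits for the other)."""
    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

def imitation_step(agent, optimizer, obs_batch, expert_actions):
    """Warm-start: clone the actions produced by a heuristic policy."""
    loss = F.cross_entropy(agent(obs_batch), expert_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def reinforce_step(agent, optimizer, trajectory, gamma=0.99):
    """Fine-tune with REINFORCE on one episode of (obs, action, reward)."""
    returns, g = [], 0.0
    for _, _, reward in reversed(trajectory):   # discounted returns
        g = reward + gamma * g
        returns.append(g)
    returns.reverse()
    loss = torch.tensor(0.0)
    for (obs, action, _), g in zip(trajectory, returns):
        log_prob = F.log_softmax(agent(obs), dim=-1)[action]
        loss = loss - g * log_prob              # policy-gradient objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this reading, the same loop would be run twice, once for the primitive-parsing agent and once for the mesh-editing agent, with rewards measuring how well the current model matches the target shape.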
Related papers
- Make-A-Shape: a Ten-Million-scale 3D Shape Model [52.701745578415796]
This paper introduces Make-A-Shape, a new 3D generative model designed for efficient training on a vast scale.
We first introduce a wavelet-tree representation that compactly encodes shapes via a subband coefficient filtering scheme.
We then derive a subband-adaptive training strategy so the model effectively learns to generate coarse and detail wavelet coefficients.
arXiv Detail & Related papers (2024-01-20T00:21:58Z)
- Take-A-Photo: 3D-to-2D Generative Pre-training of Point Cloud Models [97.58685709663287]
Generative pre-training can boost the performance of fundamental models in 2D vision.
In 3D vision, the over-reliance on Transformer-based backbones and the unordered nature of point clouds have restricted the further development of generative pre-training.
We propose a novel 3D-to-2D generative pre-training method that is adaptable to any point cloud model.
arXiv Detail & Related papers (2023-07-27T16:07:03Z)
- MeshDiffusion: Score-based Generative 3D Mesh Modeling [68.40770889259143]
We consider the task of generating realistic 3D shapes for automatic scene generation and physical simulation.
We take advantage of the graph structure of meshes and use a simple yet very effective generative modeling method to generate 3D meshes.
Specifically, we represent meshes with deformable tetrahedral grids, and then train a diffusion model on this direct parametrization.
arXiv Detail & Related papers (2023-03-14T17:59:01Z)
- Pixel2Mesh++: 3D Mesh Generation and Refinement from Multi-View Images [82.32776379815712]
We study the problem of shape generation in 3D mesh representation from a small number of color images with or without camera poses.
We further improve the shape quality by leveraging cross-view information with a graph convolutional network.
Our model is robust to the quality of the initial mesh and the error of camera pose, and can be combined with a differentiable renderer for test-time optimization.
arXiv Detail & Related papers (2022-04-21T03:42:31Z)
- SNUG: Self-Supervised Neural Dynamic Garments [14.83072352654608]
We present a self-supervised method to learn dynamic 3D deformations of garments worn by parametric human bodies.
This allows us to learn models for interactive garments, including dynamic deformations and fine wrinkles, with a two-orders-of-magnitude speed-up in training time.
arXiv Detail & Related papers (2022-04-05T13:50:21Z)
- AutoPoly: Predicting a Polygonal Mesh Construction Sequence from a Silhouette Image [17.915067368873018]
AutoPoly is a hybrid method that generates a polygonal mesh construction sequence from a silhouette image.
Our method can alter topology, whereas the recently proposed inverse shape estimation methods using differentiable rendering can only handle a fixed topology.
arXiv Detail & Related papers (2022-03-29T04:48:47Z)
- Generative VoxelNet: Learning Energy-Based Models for 3D Shape Synthesis and Analysis [143.22192229456306]
This paper proposes a deep 3D energy-based model to represent volumetric shapes.
The benefits of the proposed model are six-fold.
Experiments demonstrate that the proposed model can generate high-quality 3D shape patterns.
arXiv Detail & Related papers (2020-12-25T06:09:36Z)
- Discrete Point Flow Networks for Efficient Point Cloud Generation [36.03093265136374]
Generative models have proven effective at modeling 3D shapes and their statistical variations.
We introduce a latent variable model that builds on normalizing flows to generate 3D point clouds of an arbitrary size.
For single-view shape reconstruction we also obtain results on par with state-of-the-art voxel, point cloud, and mesh-based methods.
arXiv Detail & Related papers (2020-07-20T14:48:00Z)
- Learning Predictive Representations for Deformable Objects Using Contrastive Estimation [83.16948429592621]
We propose a new learning framework that jointly optimizes both the visual representation model and the dynamics model.
We show substantial improvements over standard model-based learning techniques across our rope and cloth manipulation suite.
arXiv Detail & Related papers (2020-03-11T17:55:15Z)
- Inverse Graphics GAN: Learning to Generate 3D Shapes from Unstructured 2D Data [19.807173910379966]
We introduce the first scalable training technique for 3D generative models from 2D data.
We show that our model can consistently learn to generate better shapes than existing models when trained with exclusively unstructured 2D images.
arXiv Detail & Related papers (2020-02-28T12:28:12Z)
- PolyGen: An Autoregressive Generative Model of 3D Meshes [22.860421649320287]
We present an approach which models the mesh directly using a Transformer-based architecture.
Our model can condition on a range of inputs, including object classes, voxels, and images.
We show that the model is capable of producing high-quality, usable meshes, and establish log-likelihood benchmarks for the mesh-modelling task (a minimal sketch of this autoregressive setup appears after this list).
arXiv Detail & Related papers (2020-02-23T17:16:34Z)
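For the PolyGen entry above, the sketch below shows the general idea of autoregressive mesh modeling: vertex coordinates are quantized into discrete tokens and predicted one token at a time by a causal Transformer. This is not PolyGen's actual architecture; the class name VertexModel, the vocabulary size, sequence length, and layer sizes are all illustrative assumptions.

```python
# A minimal sketch of autoregressive mesh modeling in the spirit of PolyGen,
# NOT the PolyGen implementation. Quantized vertex coordinates become integer
# tokens; a causal Transformer predicts the next token at each position.

import torch
import torch.nn as nn

class VertexModel(nn.Module):
    def __init__(self, n_bins: int = 256, d_model: int = 256, max_len: int = 2400):
        super().__init__()
        self.token_emb = nn.Embedding(n_bins + 1, d_model)  # +1 for a stop token
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=6)
        self.head = nn.Linear(d_model, n_bins + 1)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len) int64, flattened quantized coordinates
        seq_len = tokens.shape[1]
        pos = torch.arange(seq_len, device=tokens.device)
        x = self.token_emb(tokens) + self.pos_emb(pos)
        # causal mask: -inf strictly above the diagonal blocks future tokens
        mask = torch.full((seq_len, seq_len), float("-inf"),
                          device=tokens.device).triu(1)
        return self.head(self.backbone(x, mask=mask))  # next-token logits
```

Training would then be standard next-token prediction: cross-entropy between the logits at positions 0..T-2 and the tokens at positions 1..T-1.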