Training Data Generating Networks: Shape Reconstruction via Bi-level
Optimization
- URL: http://arxiv.org/abs/2010.08276v2
- Date: Fri, 29 Apr 2022 05:37:30 GMT
- Title: Training Data Generating Networks: Shape Reconstruction via Bi-level
Optimization
- Authors: Biao Zhang, Peter Wonka
- Abstract summary: We propose a novel 3D shape representation for 3D shape reconstruction from a single image.
We train a network to generate a training set, which is then fed into another learning algorithm that defines the shape.
- Score: 52.17872739634213
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel 3D shape representation for 3D shape reconstruction from a
single image. Rather than predicting a shape directly, we train a network to
generate a training set, which is then fed into another learning algorithm that
defines the shape. This nested optimization problem can be modeled as bi-level
optimization. The same bi-level optimization algorithms are also used in
meta-learning approaches for few-shot learning, so our framework establishes a
link between 3D shape analysis and few-shot learning. We combine training data
generating networks with bi-level optimization algorithms to obtain a complete
framework in which all components can be trained jointly. We improve upon
recent work on standard benchmarks for 3D shape reconstruction.
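To make the bi-level structure concrete, below is a minimal, self-contained sketch of the idea in PyTorch. It is not the authors' architecture: the image encoder, the tiny occupancy MLP, the number of inner SGD steps, and all sizes and dummy data are illustrative assumptions. The sketch only shows the mechanism: the generated training set is consumed by an inner learner, the fitted inner model is evaluated by an outer loss on ground-truth samples, and gradients flow back to the generator through the unrolled inner optimization (as in MAML-style meta-learning).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrainingSetGenerator(nn.Module):
    """Maps an image to a synthetic labelled point set (the inner 'training set')."""
    def __init__(self, n_points=128, feat=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, feat), nn.ReLU())
        self.head = nn.Linear(feat, n_points * 4)           # x, y, z + occupancy label
        self.n_points = n_points

    def forward(self, image):
        out = self.head(self.encoder(image)).view(-1, self.n_points, 4)
        return out[..., :3], torch.sigmoid(out[..., 3])      # points, soft labels

def occupancy(xyz, params):
    # Tiny occupancy MLP with explicit parameter tensors.
    w1, b1, w2, b2 = params
    h = torch.relu(xyz @ w1 + b1)
    return torch.sigmoid(h @ w2 + b2).squeeze(-1)

def inner_fit(points, labels, steps=5, lr=0.5, hidden=64):
    """Fit the occupancy MLP to the generated set with differentiable SGD steps."""
    params = [torch.randn(3, hidden) * 0.1, torch.zeros(hidden),
              torch.randn(hidden, 1) * 0.1, torch.zeros(1)]
    params = [p.requires_grad_() for p in params]
    for _ in range(steps):
        loss = F.binary_cross_entropy(occupancy(points, params), labels)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        params = [p - lr * g for p, g in zip(params, grads)]
    return lambda xyz: occupancy(xyz, params)                # the reconstructed shape

# Outer problem: train the generator so that, after inner fitting, the fitted
# occupancy function agrees with ground-truth occupancies at query points.
generator = TrainingSetGenerator()
opt = torch.optim.Adam(generator.parameters(), lr=1e-4)

image = torch.rand(2, 1, 64, 64)                      # dummy image batch
query_xyz = torch.rand(2, 512, 3) * 2 - 1             # query points with known labels
query_occ = (query_xyz.norm(dim=-1) < 0.5).float()    # dummy ground truth (a sphere)

outer_loss = 0.0
for b in range(image.shape[0]):
    pts, lbl = generator(image[b:b + 1])
    shape_fn = inner_fit(pts[0], lbl[0])               # inner problem on generated data
    outer_loss = outer_loss + F.binary_cross_entropy(shape_fn(query_xyz[b]), query_occ[b])

opt.zero_grad()
outer_loss.backward()                                  # gradients flow through inner SGD
opt.step()
```

Because the inner updates are kept in the autograd graph (`create_graph=True`), the outer loss differentiates through them, which is what joint training of all components means in the bi-level setting.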
Related papers
- CNS-Edit: 3D Shape Editing via Coupled Neural Shape Optimization [56.47175002368553]
This paper introduces a new approach based on a coupled representation and a neural volume optimization to implicitly perform 3D shape editing in latent space.
First, we design the coupled neural shape representation for supporting 3D shape editing.
Second, we formulate the coupled neural shape optimization procedure to co-optimize the two coupled components in the representation subject to the editing operation.
arXiv Detail & Related papers (2024-02-04T01:52:56Z)
- Cut-and-Approximate: 3D Shape Reconstruction from Planar Cross-sections with Deep Reinforcement Learning [0.0]
We present, to the best of our knowledge, the first 3D shape reconstruction network to solve this task.
Our method is based on applying a Reinforcement Learning algorithm to learn how to effectively parse the shape.
arXiv Detail & Related papers (2022-10-22T17:48:12Z)
- Dual Octree Graph Networks for Learning Adaptive Volumetric Shape Representations [21.59311861556396]
Our method encodes the volumetric field of a 3D shape with an adaptive feature volume organized by an octree.
An encoder-decoder network is designed to learn the adaptive feature volume based on the graph convolutions over the dual graph of octree nodes.
Our method effectively encodes shape details, enables fast 3D shape reconstruction, and generalizes well to 3D shapes outside the training categories.
arXiv Detail & Related papers (2022-05-05T17:56:34Z)
- Multi-initialization Optimization Network for Accurate 3D Human Pose and Shape Estimation [75.44912541912252]
We propose a three-stage framework named Multi-Initialization Optimization Network (MION).
In the first stage, we strategically select different coarse 3D reconstruction candidates that are compatible with the 2D keypoints of the input sample.
In the second stage, we design a mesh refinement transformer (MRT) to respectively refine each coarse reconstruction result via a self-attention mechanism.
Finally, a Consistency Estimation Network (CEN) is proposed to find the best result from multiple candidates by evaluating whether the visual evidence in the RGB image matches a given 3D reconstruction.
arXiv Detail & Related papers (2021-12-24T02:43:58Z)
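As a rough structural illustration of such a multi-stage pipeline, the sketch below wires together the three stages described in the entry above: candidate generation from 2D keypoints, per-candidate refinement with self-attention, and consistency scoring. Every module, dimension, and the final selection rule is a placeholder assumption for illustration, not a component of MION itself.

```python
import torch
import torch.nn as nn

NUM_CANDIDATES, NUM_VERTS, FEAT = 4, 431, 128       # assumed sizes

class CandidateGenerator(nn.Module):
    """Stage 1: propose several coarse meshes compatible with the 2D keypoints."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(34, NUM_CANDIDATES * NUM_VERTS * 3)   # 17 keypoints x 2
    def forward(self, keypoints_2d):
        b = keypoints_2d.shape[0]
        return self.net(keypoints_2d).view(b, NUM_CANDIDATES, NUM_VERTS, 3)

class MeshRefiner(nn.Module):
    """Stage 2: refine one coarse mesh with self-attention over its vertices."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(3, FEAT)
        layer = nn.TransformerEncoderLayer(d_model=FEAT, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.out = nn.Linear(FEAT, 3)
    def forward(self, verts):                         # (B, V, 3)
        return verts + self.out(self.encoder(self.embed(verts)))

class ConsistencyScorer(nn.Module):
    """Stage 3: score how well a refined mesh matches the image evidence."""
    def __init__(self):
        super().__init__()
        self.img_enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, FEAT))
        self.mesh_enc = nn.Sequential(nn.Flatten(), nn.Linear(NUM_VERTS * 3, FEAT))
        self.score = nn.Linear(2 * FEAT, 1)
    def forward(self, image, verts):
        f = torch.cat([self.img_enc(image), self.mesh_enc(verts)], dim=-1)
        return self.score(f).squeeze(-1)

gen, refine, scorer = CandidateGenerator(), MeshRefiner(), ConsistencyScorer()
image = torch.rand(1, 3, 64, 64)                      # dummy RGB crop
kp2d = torch.rand(1, 34)                              # dummy 2D keypoints

candidates = gen(kp2d)                                                        # (1, K, V, 3)
refined = torch.stack([refine(candidates[:, k]) for k in range(NUM_CANDIDATES)], dim=1)
scores = torch.stack([scorer(image, refined[:, k]) for k in range(NUM_CANDIDATES)], dim=1)
best = refined[0, scores.argmax(dim=1)]               # keep the most consistent mesh
```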
- Learning Compositional Shape Priors for Few-Shot 3D Reconstruction [36.40776735291117]
We show that complex encoder-decoder architectures exploit large amounts of per-category data.
We propose three ways to learn a class-specific global shape prior, directly from data.
Experiments on the popular ShapeNet dataset show that our method outperforms a zero-shot baseline by over 40%.
arXiv Detail & Related papers (2021-06-11T14:55:49Z)
- Human Body Model Fitting by Learned Gradient Descent [48.79414884222403]
We propose a novel algorithm for the fitting of 3D human shape to images.
We show that this algorithm is fast (avg. 120ms convergence), robust to dataset, and achieves state-of-the-art results on public evaluation datasets.
arXiv Detail & Related papers (2020-08-19T14:26:47Z)
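The "learned gradient descent" idea from the preceding entry can be sketched as follows: instead of a hand-tuned optimizer, an update network predicts parameter increments from the current body-model estimate and the gradient of a fitting loss. The parameter dimensionality, the placeholder loss, and the dummy keypoint target below are assumptions for illustration, not the paper's actual body model or objective.

```python
import torch
import torch.nn as nn

PARAM_DIM = 85                       # assumed pose + shape + camera dimensionality

class UpdateNet(nn.Module):
    """Predicts a parameter increment from the current estimate and its gradient."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * PARAM_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, PARAM_DIM))
    def forward(self, theta, grad):
        return self.net(torch.cat([theta, grad], dim=-1))

def fitting_loss(theta, target_keypoints):
    # Placeholder for a differentiable reprojection / keypoint loss.
    return ((theta[..., :target_keypoints.shape[-1]] - target_keypoints) ** 2).mean()

update_net = UpdateNet()
theta = torch.zeros(1, PARAM_DIM)    # initial body-model estimate
target = torch.rand(1, 34)           # dummy 2D keypoint evidence (17 x 2)

for _ in range(4):                   # a few learned descent steps
    theta = theta.detach().requires_grad_()
    loss = fitting_loss(theta, target)
    grad, = torch.autograd.grad(loss, theta)
    theta = theta + update_net(theta, grad)   # network-predicted update, not -lr * grad

# Training update_net itself would add a meta-objective on the final estimate and
# backpropagate through the unrolled steps; that part is omitted in this sketch.
```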
- Neural Subdivision [58.97214948753937]
This paper introduces Neural Subdivision, a novel framework for data-driven coarse-to-fine geometry modeling.
We optimize for the same set of network weights across all local mesh patches, thus providing an architecture that is not constrained to a specific input mesh, fixed genus, or category.
We demonstrate that even when trained on a single high-resolution mesh our method generates reasonable subdivisions for novel shapes.
arXiv Detail & Related papers (2020-05-04T20:03:21Z)
- Learning Local Neighboring Structure for Robust 3D Shape Representation [143.15904669246697]
Representation learning for 3D meshes is important in many computer vision and graphics applications.
We propose a local structure-aware anisotropic convolutional operation (LSA-Conv).
Our model produces significant improvement in 3D shape reconstruction compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-04-21T13:40:03Z)
- Deep Manifold Prior [37.725563645899584]
We present a prior for manifold structured data, such as surfaces of 3D shapes, where deep neural networks are adopted to reconstruct a target shape using gradient descent.
We show that surfaces generated this way are smooth, with limiting behavior characterized by Gaussian processes, and we mathematically derive such properties for fully-connected as well as convolutional networks.
arXiv Detail & Related papers (2020-04-08T20:47:56Z)
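The preceding entry's "network as a prior fitted by gradient descent" idea can be read in the spirit of deep-image-prior-style optimization. Below is a toy sketch under that reading: a randomly initialized MLP maps 2D parameters to 3D points, and its weights (not the points themselves) are optimized so that the produced samples match a target point cloud. The architecture, Chamfer-style loss, and stand-in target are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

def chamfer(a, b):
    # Symmetric nearest-neighbour distance between two point sets (N,3) and (M,3).
    d = torch.cdist(a, b)
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

mlp = nn.Sequential(nn.Linear(2, 128), nn.ReLU(),
                    nn.Linear(128, 128), nn.ReLU(),
                    nn.Linear(128, 3))
uv = torch.rand(2048, 2)                              # fixed 2D parameterization samples
target = torch.randn(2048, 3)
target = target / target.norm(dim=1, keepdim=True)    # stand-in target: unit sphere

opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)
for step in range(200):
    opt.zero_grad()
    loss = chamfer(mlp(uv), target)                   # fit the network, not the points
    loss.backward()
    opt.step()
```

The smoothness of the resulting surface comes from the network parameterization acting as an implicit prior, which is the property the paper characterizes via Gaussian-process limiting behavior.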
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.