Hypernetwork approach to generating point clouds
- URL: http://arxiv.org/abs/2003.00802v2
- Date: Tue, 13 Oct 2020 19:18:59 GMT
- Title: Hypernetwork approach to generating point clouds
- Authors: Przemysław Spurek, Sebastian Winczowski, Jacek Tabor, Maciej Zamorski, Maciej Zięba, Tomasz Trzciński
- Abstract summary: We build a hypernetwork that returns the weights of a particular neural network trained to map points into a 3D shape.
A particular 3D shape can be generated using point-by-point sampling from the assumed prior distribution.
Since the hypernetwork is based on an auto-encoder architecture trained to reconstruct realistic 3D shapes, the target network weights can be considered a parametrization of the surface of a 3D shape.
- Score: 18.67883065951206
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we propose a novel method for generating 3D point clouds that
leverages properties of hypernetworks. Contrary to the existing methods that
learn only the representation of a 3D object, our approach simultaneously finds
a representation of the object and its 3D surface. The main idea of our
HyperCloud method is to build a hypernetwork that returns the weights of a
particular neural network (target network) trained to map points from a uniform
unit-ball distribution onto a 3D shape. As a consequence, a particular 3D shape
can be generated using point-by-point sampling from the assumed prior
distribution and transforming the sampled points with the target network. Since the
hypernetwork is based on an auto-encoder architecture trained to reconstruct
realistic 3D shapes, the target network weights can be considered a
parametrization of the surface of a 3D shape, rather than the standard
point-cloud representation usually returned by competing approaches. The proposed
architecture allows finding a mesh-based representation of 3D objects in a
generative manner while providing point clouds on par in quality with the
state-of-the-art methods.
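The generation pipeline the abstract describes (sample points from a uniform unit ball, then transform them with a target network whose weights are emitted by a hypernetwork) can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: the `hypernetwork` function here is a hypothetical random stub standing in for the trained auto-encoder, and the layer sizes are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_unit_ball(n, rng):
    # Uniform sampling in the 3D unit ball: a uniform direction on the
    # sphere, scaled by a radius drawn as U^(1/3) so volume is uniform.
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    r = rng.random(n) ** (1.0 / 3.0)
    return v * r[:, None]

def target_network(points, weights):
    # Tiny MLP (3 -> hidden -> 3) applied point by point; its weights are
    # not learned directly but produced by the hypernetwork.
    W1, b1, W2, b2 = weights
    h = np.tanh(points @ W1 + b1)
    return h @ W2 + b2

def hypernetwork(latent, hidden=16):
    # Hypothetical stand-in for the trained hypernetwork decoder: maps a
    # latent shape code to one full set of target-network weights.
    r = np.random.default_rng(abs(hash(latent.tobytes())) % (2**32))
    W1 = r.normal(scale=0.5, size=(3, hidden))
    b1 = r.normal(scale=0.1, size=hidden)
    W2 = r.normal(scale=0.5, size=(hidden, 3))
    b2 = r.normal(scale=0.1, size=3)
    return W1, b1, W2, b2

latent = rng.normal(size=8)           # shape code (would come from the encoder)
weights = hypernetwork(latent)        # one weight set parametrizes one shape
pts = sample_unit_ball(2048, rng)     # prior samples, at any resolution
cloud = target_network(pts, weights)  # the generated point cloud
print(cloud.shape)  # (2048, 3)
```

Note how the point count is chosen at sampling time, independent of the weights: this is what lets the method produce clouds of arbitrary resolution from a single shape parametrization.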
Related papers
- LAM3D: Large Image-Point-Cloud Alignment Model for 3D Reconstruction from Single Image [64.94932577552458]
Large Reconstruction Models have made significant strides in the realm of automated 3D content generation from single or multiple input images.
Despite their success, these models often produce 3D meshes with geometric inaccuracies, stemming from the inherent challenges of deducing 3D shapes solely from image data.
We introduce a novel framework, the Large Image and Point Cloud Alignment Model (LAM3D), which utilizes 3D point cloud data to enhance the fidelity of generated 3D meshes.
arXiv Detail & Related papers (2024-05-24T15:09:12Z) - Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud Analysis [66.49788145564004]
We present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology.
Our methods perform favorably against the current state-of-the-art competitors.
arXiv Detail & Related papers (2022-12-17T15:05:25Z) - Flow-based GAN for 3D Point Cloud Generation from a Single Image [16.04710129379503]
We introduce a hybrid explicit-implicit generative modeling scheme, which inherits the flow-based explicit generative models for sampling point clouds with arbitrary resolutions.
We evaluate on the large-scale synthetic dataset ShapeNet, with the experimental results demonstrating the superior performance of the proposed method.
arXiv Detail & Related papers (2022-10-08T17:58:20Z) - Neural Correspondence Field for Object Pose Estimation [67.96767010122633]
We propose a method for estimating the 6DoF pose of a rigid object with an available 3D model from a single RGB image.
Unlike classical correspondence-based methods which predict 3D object coordinates at pixels of the input image, the proposed method predicts 3D object coordinates at 3D query points sampled in the camera frustum.
arXiv Detail & Related papers (2022-07-30T01:48:23Z) - RBGNet: Ray-based Grouping for 3D Object Detection [104.98776095895641]
We propose the RBGNet framework, a voting-based 3D detector for accurate 3D object detection from point clouds.
We propose a ray-based feature grouping module, which aggregates the point-wise features on object surfaces using a group of determined rays.
Our model achieves state-of-the-art 3D detection performance on ScanNet V2 and SUN RGB-D with remarkable performance gains.
arXiv Detail & Related papers (2022-04-05T14:42:57Z) - HyperCube: Implicit Field Representations of Voxelized 3D Models [18.868266675878996]
We introduce a new HyperCube architecture that enables direct processing of 3D voxels.
Instead of processing individual 3D samples from within a voxel, our approach allows the entire voxel, represented with its convex hull coordinates, to be taken as input.
arXiv Detail & Related papers (2021-10-12T06:56:48Z) - DeformerNet: A Deep Learning Approach to 3D Deformable Object Manipulation [5.733365759103406]
We propose a novel approach to 3D deformable object manipulation leveraging a deep neural network called DeformerNet.
We explicitly use 3D point clouds as the state representation and apply a convolutional neural network to point clouds to learn 3D features.
Once trained in an end-to-end fashion, DeformerNet directly maps the current point cloud of a deformable object, as well as a target point cloud shape, to the desired displacement in robot gripper position.
arXiv Detail & Related papers (2021-07-16T18:20:58Z) - ParaNet: Deep Regular Representation for 3D Point Clouds [62.81379889095186]
ParaNet is a novel end-to-end deep learning framework for representing 3D point clouds.
It converts an irregular 3D point cloud into a regular 2D color image, named a point geometry image (PGI).
In contrast to conventional regular representation modalities based on multi-view projection and voxelization, the proposed representation is differentiable and reversible.
arXiv Detail & Related papers (2020-12-05T13:19:55Z) - HyperFlow: Representing 3D Objects as Surfaces [19.980044265074298]
We present a novel generative model that leverages hypernetworks to create continuous 3D object representations in a form of lightweight surfaces (meshes) directly out of point clouds.
We obtain continuous mesh-based object representations that yield better qualitative results than competing approaches.
arXiv Detail & Related papers (2020-06-15T19:18:02Z) - PUGeo-Net: A Geometry-centric Network for 3D Point Cloud Upsampling [103.09504572409449]
We propose a novel deep neural network based method, called PUGeo-Net, to generate uniform dense point clouds.
Thanks to its geometry-centric nature, PUGeo-Net works well for both CAD models with sharp features and scanned models with rich geometric details.
arXiv Detail & Related papers (2020-02-24T14:13:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.