HyperFlow: Representing 3D Objects as Surfaces
- URL: http://arxiv.org/abs/2006.08710v1
- Date: Mon, 15 Jun 2020 19:18:02 GMT
- Title: HyperFlow: Representing 3D Objects as Surfaces
- Authors: Przemysław Spurek, Maciej Zięba, Jacek Tabor, Tomasz Trzciński
- Abstract summary: We present a novel generative model that leverages hypernetworks to create continuous 3D object representations in the form of lightweight surfaces (meshes) directly out of point clouds.
We obtain continuous mesh-based object representations that yield better qualitative results than competing approaches.
- Score: 19.980044265074298
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we present HyperFlow - a novel generative model that leverages
hypernetworks to create continuous 3D object representations in the form of
lightweight surfaces (meshes) directly out of point clouds. Efficient object
representations are essential for many computer vision applications, including
robotic manipulation and autonomous driving. However, creating those
representations is often cumbersome, because it requires processing unordered
sets of point clouds. Therefore, it is either computationally expensive, due to
additional optimization constraints such as permutation invariance, or leads to
quantization losses introduced by binning point clouds into discrete voxels.
Inspired by mesh-based representations of objects used in computer graphics, we
postulate a fundamentally different approach and represent 3D objects as a
family of surfaces. To that end, we devise a generative model that uses a
hypernetwork to return the weights of a Continuous Normalizing Flows (CNF)
target network. The goal of this target network is to map points from a
probability distribution into a 3D mesh. To avoid numerical instability of the
CNF on compact support distributions, we propose a new Spherical Log-Normal
function which models the density of 3D points around object surfaces,
mimicking the noise introduced by 3D capturing devices. As a result, we obtain continuous
mesh-based object representations that yield better qualitative results than
competing approaches, while reducing training time by over an order of
magnitude.
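To make the pipeline above concrete, the following is a minimal sketch, not the authors' implementation: a hypernetwork maps a point-cloud embedding to the flat weight vector of a small target network that defines the CNF dynamics, and prior samples concentrated around a sphere are transported onto the object. The layer sizes, the fixed-step Euler integrator, and the exact parametrization of the spherical log-normal prior (uniform directions, log-normal radii) are illustrative assumptions; a real CNF would also track the log-determinant term needed for likelihood training.

```python
import torch
import torch.nn as nn

D, H, EMB = 3, 64, 128   # point dim; target-net width and embedding size (assumed)

# Flat weight count of a 2-layer target MLP f: R^3 -> R^3.
N_W = (D * H + H) + (H * D + D)

class HyperNet(nn.Module):
    """Maps a point-cloud embedding to the target network's weight vector."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(EMB, 256), nn.ReLU(),
                                 nn.Linear(256, N_W))

    def forward(self, emb):
        return self.mlp(emb)

def target_dynamics(x, w):
    """dx/dt = f(x; w), with weights unpacked from the flat vector w."""
    i = 0
    W1 = w[i:i + D * H].view(H, D); i += D * H
    b1 = w[i:i + H];                i += H
    W2 = w[i:i + H * D].view(D, H); i += H * D
    b2 = w[i:i + D]
    return torch.tanh(x @ W1.t() + b1) @ W2.t() + b2

def sample_spherical_lognormal(n, mu=0.0, sigma=0.1):
    """Assumed prior: uniform directions on the unit sphere with log-normal
    radii, so the mass forms a thin shell (mimicking 3D-scanner noise)."""
    d = torch.randn(n, D)
    d = d / d.norm(dim=1, keepdim=True)
    r = torch.exp(mu + sigma * torch.randn(n, 1))
    return d * r

def flow(x, w, steps=20, t1=1.0):
    """Fixed-step Euler integration of the CNF from t=0 to t=t1.
    (Training would use an ODE solver plus the log-det term; omitted here.)"""
    dt = t1 / steps
    for _ in range(steps):
        x = x + dt * target_dynamics(x, w)
    return x

# Usage: a placeholder encoder embedding decodes into one object's flow,
# and any number of prior samples can be pushed onto its surface.
hyper = HyperNet()
emb = torch.randn(EMB)                  # stand-in for a point-cloud encoder output
w = hyper(emb)                          # this object's target-network weights
pts = sample_spherical_lognormal(2048)  # samples in a shell around the unit sphere
surface = flow(pts, w)                  # transported toward the object surface
print(surface.shape)                    # torch.Size([2048, 3])
```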
Related papers
- StarNet: Style-Aware 3D Point Cloud Generation [82.30389817015877]
StarNet is able to reconstruct and generate high-fidelity 3D point clouds using a mapping network.
Our framework achieves performance comparable to the state of the art on various metrics in point cloud reconstruction and generation tasks.
arXiv Detail & Related papers (2023-03-28T08:21:44Z)
- Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud Analysis [66.49788145564004]
We present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology.
Our methods perform favorably against the current state-of-the-art competitors.
arXiv Detail & Related papers (2022-12-17T15:05:25Z)
- Flow-based GAN for 3D Point Cloud Generation from a Single Image [16.04710129379503]
We introduce a hybrid explicit-implicit generative modeling scheme that inherits from flow-based explicit generative models the ability to sample point clouds at arbitrary resolutions.
We evaluate on the large-scale synthetic dataset ShapeNet; the experimental results demonstrate the superior performance of the proposed method.
arXiv Detail & Related papers (2022-10-08T17:58:20Z)
- Points2NeRF: Generating Neural Radiance Fields from 3D point cloud [0.0]
We propose representing 3D objects as Neural Radiance Fields (NeRFs).
We leverage a hypernetwork paradigm and train the model to take a 3D point cloud with associated color values as input.
Our method provides an efficient 3D object representation and offers several advantages over existing approaches.
arXiv Detail & Related papers (2022-06-02T20:23:33Z)
- A Conditional Point Diffusion-Refinement Paradigm for 3D Point Cloud Completion [69.32451612060214]
Real-scanned 3D point clouds are often incomplete, and it is important to recover complete point clouds for downstream applications.
Most existing point cloud completion methods use the Chamfer Distance (CD) loss for training (see the sketch after this entry).
We propose a novel Point Diffusion-Refinement (PDR) paradigm for point cloud completion.
arXiv Detail & Related papers (2021-12-07T06:59:06Z)
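Since the entry above singles out the Chamfer Distance loss, here is a minimal sketch of its standard symmetric, squared, mean-reduced form; papers vary in the reduction and weighting they use.

```python
import torch

def chamfer_distance(p, q):
    """Symmetric Chamfer Distance between point sets p: (n, 3) and q: (m, 3):
    mean nearest-neighbor squared distance in both directions."""
    d = torch.cdist(p, q) ** 2            # pairwise squared distances, (n, m)
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

# Example: CD between a cloud and a slightly perturbed copy is near zero.
p = torch.randn(1024, 3)
q = p + 0.01 * torch.randn(1024, 3)
print(chamfer_distance(p, q).item())
```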
- HyperCube: Implicit Field Representations of Voxelized 3D Models [18.868266675878996]
We introduce a new HyperCube architecture that enables direct processing of 3D voxels.
Instead of processing individual 3D samples from within a voxel, our approach allows the entire voxel, represented by its convex hull coordinates, to be taken as input.
arXiv Detail & Related papers (2021-10-12T06:56:48Z)
- ParaNet: Deep Regular Representation for 3D Point Clouds [62.81379889095186]
ParaNet is a novel end-to-end deep learning framework for representing 3D point clouds.
It converts an irregular 3D point cloud into a regular 2D color image, named a point geometry image (PGI).
In contrast to conventional regular representation modalities based on multi-view projection and voxelization, the proposed representation is differentiable and reversible (a toy PGI construction is sketched after this entry).
arXiv Detail & Related papers (2020-12-05T13:19:55Z)
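As referenced above, a toy version of the PGI data structure: a regular 2D "color" image whose pixel values are 3D coordinates. The row-major packing below is only a stand-in; ParaNet learns the actual point-to-pixel arrangement.

```python
import torch

def to_pgi(points, side):
    """Pack a (side*side, 3) point cloud into a (side, side, 3) image."""
    assert points.shape == (side * side, 3)
    return points.reshape(side, side, 3)

def from_pgi(pgi):
    """The packing is trivially reversible: pixels back to points."""
    return pgi.reshape(-1, 3)

pts = torch.rand(32 * 32, 3)
pgi = to_pgi(pts, 32)                    # a regular grid a 2D CNN can consume
assert torch.equal(from_pgi(pgi), pts)   # lossless round trip
```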
- Reinforced Axial Refinement Network for Monocular 3D Object Detection [160.34246529816085]
Monocular 3D object detection aims to extract the 3D position and properties of objects from a 2D input image.
Conventional approaches sample 3D bounding boxes from the space and infer the relationship between the target object and each of them; however, the probability of effective samples is relatively small in 3D space.
We propose to start with an initial prediction and refine it gradually towards the ground truth, with only one 3D parameter changed in each step.
This requires designing a policy which receives a reward only after several steps, so we adopt reinforcement learning to optimize it (a generic sketch follows this entry).
arXiv Detail & Related papers (2020-08-31T17:10:48Z)
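To make the one-parameter-per-step refinement concrete, below is a generic REINFORCE-style loop in the spirit of this entry, not the paper's method: the 7-parameter box encoding (x, y, z, w, h, l, yaw), the fixed step size, and the distance-based stand-in reward (the paper derives its signal from 3D IoU) are all assumptions.

```python
import torch
import torch.nn as nn

N_PARAMS = 7                 # (x, y, z, w, h, l, yaw), a common box encoding
N_ACTIONS = 2 * N_PARAMS     # +delta or -delta on exactly one parameter
DELTA = 0.1

policy = nn.Sequential(nn.Linear(N_PARAMS, 64), nn.ReLU(),
                       nn.Linear(64, N_ACTIONS))

def refine(box, action):
    """Apply one axial refinement: adjust a single parameter by +/-DELTA."""
    idx, sign = action // 2, 1.0 if action % 2 == 0 else -1.0
    new_box = box.clone()
    new_box[idx] += sign * DELTA
    return new_box

def reward(box, gt):
    """Placeholder delayed reward: negative distance to the ground-truth box.
    The paper would use a 3D-IoU-based signal instead."""
    return -torch.norm(box - gt)

def episode(box, gt, steps=10):
    """Roll out a refinement episode; return per-step log-probs, final reward."""
    log_probs = []
    for _ in range(steps):
        dist = torch.distributions.Categorical(logits=policy(box))
        a = dist.sample()
        log_probs.append(dist.log_prob(a))
        box = refine(box, a)
    return torch.stack(log_probs), reward(box, gt)

# One REINFORCE update: the delayed reward weights every step's log-prob.
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
box0, gt = torch.zeros(N_PARAMS), torch.randn(N_PARAMS)
log_probs, R = episode(box0, gt)
loss = -(log_probs * R.detach()).sum()
opt.zero_grad(); loss.backward(); opt.step()
```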
- Local Grid Rendering Networks for 3D Object Detection in Point Clouds [98.02655863113154]
CNNs are powerful, but directly applying convolutions to point data after voxelizing an entire point cloud into a dense regular 3D grid is computationally costly.
We propose a novel and principled Local Grid Rendering (LGR) operation to render the small neighborhood of a subset of input points into a low-resolution 3D grid independently (see the sketch after this entry).
We validate LGR-Net for 3D object detection on the challenging ScanNet and SUN RGB-D datasets.
arXiv Detail & Related papers (2020-07-04T13:57:43Z)
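A toy rendition of the local-grid idea above, assuming an axis-aligned cubic neighborhood and binary occupancy features; the actual LGR operation renders learned features and stays differentiable.

```python
import torch

def render_local_grid(points, center, radius=0.5, res=8):
    """Binary occupancy grid (res, res, res) for points within the cube of
    half-width `radius` around `center`, normalized to local coordinates."""
    local = points - center
    mask = torch.all(local.abs() < radius, dim=1)
    grid = torch.zeros(res, res, res)
    idx = ((local[mask] + radius) / (2 * radius) * res).long()
    idx = idx.clamp(0, res - 1)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid

# Only tiny grids around sampled points are built, never the full scene.
cloud = torch.randn(10000, 3)
g = render_local_grid(cloud, center=cloud[0])
print(g.shape, int(g.sum()))     # small dense grid; 3D convs stay cheap
```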
- Hypernetwork approach to generating point clouds [18.67883065951206]
We build a hypernetwork that returns the weights of a particular neural network trained to map points into a 3D shape.
A particular 3D shape can be generated using point-by-point sampling from the assumed prior distribution.
Since the hypernetwork is based on an auto-encoder architecture trained to reconstruct realistic 3D shapes, the target network weights can be considered a parametrization of the surface of a 3D shape, as the sketch below illustrates.
arXiv Detail & Related papers (2020-02-10T11:09:58Z)
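For contrast with the CNF sketch after the main abstract, here is a minimal version of this earlier, direct formulation: the target network is a plain MLP mapping prior samples straight to surface points, with no ODE integration. The sizes and the uniform-ball prior are assumptions.

```python
import torch
import torch.nn as nn

class TargetNet(nn.Module):
    """In the paper these weights come from the hypernetwork; an ordinary
    module stands in here for brevity."""
    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 3))

    def forward(self, x):
        return self.mlp(x)

# Point-by-point generation: each prior sample maps to one surface point,
# so arbitrarily many points can be drawn for the same shape.
net = TargetNet()
z = torch.randn(4096, 3)
z = z / z.norm(dim=1, keepdim=True) * torch.rand(4096, 1) ** (1 / 3)  # uniform ball
points = net(z)              # (4096, 3) point samples of one 3D shape
print(points.shape)
```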