SketchSampler: Sketch-based 3D Reconstruction via View-dependent Depth Sampling
- URL: http://arxiv.org/abs/2208.06880v1
- Date: Sun, 14 Aug 2022 16:37:51 GMT
- Title: SketchSampler: Sketch-based 3D Reconstruction via View-dependent Depth Sampling
- Authors: Chenjian Gao, Qian Yu, Lu Sheng, Yi-Zhe Song, Dong Xu
- Abstract summary: Reconstructing a 3D shape based on a single sketch image is challenging due to the large domain gap between a sparse, irregular sketch and a regular, dense 3D shape.
Existing works try to employ the global feature extracted from the sketch to directly predict the 3D coordinates, but they usually lose fine details, yielding reconstructions that are not faithful to the input sketch.
- Score: 75.957103837167
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reconstructing a 3D shape based on a single sketch image is challenging due
to the large domain gap between a sparse, irregular sketch and a regular, dense
3D shape. Existing works try to employ the global feature extracted from the
sketch to directly predict the 3D coordinates, but they usually lose fine
details, yielding reconstructions that are not faithful to the input sketch.
By analyzing the
3D-to-2D projection process, we notice that the density map that characterizes
the distribution of 2D point clouds (i.e., the probability of points projected
at each location of the projection plane) can be used as a proxy to facilitate
the reconstruction process. To this end, we first translate a sketch via an
image translation network to a more informative 2D representation that can be
used to generate a density map. Next, a 3D point cloud is reconstructed via a
two-stage probabilistic sampling process: first recovering the 2D points (i.e.,
the x and y coordinates) by sampling the density map; and then predicting the
depth (i.e., the z coordinate) by sampling the depth values along the ray
determined by each 2D point. Extensive experiments are conducted, and both
quantitative and qualitative results show that our proposed approach
significantly outperforms other baseline methods.
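To make the two-stage sampling concrete, here is a minimal NumPy sketch of the pipeline described in the abstract. The density map and the per-ray depth distribution are stand-ins for the network outputs in the paper; the function names and the Gaussian depth model are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sample_point_cloud(density_map, depth_sampler, n_points=2048, rng=None):
    """Two-stage sampling: (1) draw (x, y) locations from the 2D density
    map, (2) draw a depth z along the ray through each sampled pixel."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = density_map.shape

    # Stage 1: treat the density map as a categorical distribution over pixels.
    probs = density_map.ravel() / density_map.sum()
    idx = rng.choice(h * w, size=n_points, p=probs)
    ys, xs = np.divmod(idx, w)

    # Stage 2: sample a depth value for the ray through each (x, y).
    zs = np.array([depth_sampler(x, y) for x, y in zip(xs, ys)])

    # Normalize pixel coordinates to [-1, 1] and stack into an (N, 3) cloud.
    return np.stack([xs / (w - 1) * 2 - 1,
                     ys / (h - 1) * 2 - 1,
                     zs], axis=1)

# Stand-ins for the learned density map and per-ray depth distribution.
density = np.random.rand(64, 64)
gaussian_depth = lambda x, y: np.random.normal(loc=0.5, scale=0.05)
cloud = sample_point_cloud(density, gaussian_depth, n_points=1024)
print(cloud.shape)  # (1024, 3)
```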
Related papers
- MV2Cyl: Reconstructing 3D Extrusion Cylinders from Multi-View Images [13.255044855902408]
We present MV2Cyl, a novel method for reconstructing 3D extrusion cylinders from 2D multi-view images.
We achieve the best reconstruction accuracy in both 2D sketch and extrusion parameter estimation.
arXiv Detail & Related papers (2024-06-16T08:54:38Z)
- EP2P-Loc: End-to-End 3D Point to 2D Pixel Localization for Large-Scale Visual Localization [44.05930316729542]
We propose EP2P-Loc, a novel large-scale visual localization method for 3D point clouds.
To increase the number of inliers, we propose a simple algorithm that removes 3D points invisible in the image (a generic visibility test is sketched below).
For the first time in this task, we employ a differentiable PnP solver for end-to-end training.
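The summary doesn't spell out the visibility test, so the following is a generic z-buffer-style sketch rather than the EP2P-Loc algorithm itself: project the 3D points into the image with pinhole intrinsics and keep only the nearest point per pixel. The intrinsics `K` and camera-frame points are assumed inputs.

```python
import numpy as np

def visible_points(pts_cam, K, img_size):
    """Keep 3D points (camera frame) that project inside the image and are
    nearest in depth at their pixel -- a simple z-buffer visibility test."""
    w, h = img_size
    in_front = pts_cam[:, 2] > 1e-6          # drop points behind the camera
    pts = pts_cam[in_front]
    orig = np.flatnonzero(in_front)

    uv = (K @ pts.T).T                       # project with intrinsics K
    uv = uv[:, :2] / uv[:, 2:3]              # perspective divide
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)

    zbuf = {}                                # pixel -> (depth, point index)
    for j in np.flatnonzero(ok):
        key = (u[j], v[j])
        if key not in zbuf or pts[j, 2] < zbuf[key][0]:
            zbuf[key] = (pts[j, 2], orig[j])
    keep = np.array(sorted(i for _, i in zbuf.values()), dtype=int)
    return pts_cam[keep]

K = np.array([[500., 0., 320.],
              [0., 500., 240.],
              [0., 0., 1.]])
pts = np.random.randn(5000, 3) + np.array([0., 0., 5.])
print(visible_points(pts, K, (640, 480)).shape)
```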
arXiv Detail & Related papers (2023-09-14T07:06:36Z)
- Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud Analysis [66.49788145564004]
We present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology.
Our method performs favorably against current state-of-the-art competitors.
arXiv Detail & Related papers (2022-12-17T15:05:25Z)
- TerrainMesh: Metric-Semantic Terrain Reconstruction from Aerial Images Using Joint 2D-3D Learning [20.81202315793742]
This paper develops a joint 2D-3D learning approach to reconstruct a local metric-semantic mesh at each camera pose maintained by a visual odometry algorithm.
The mesh can be assembled into a global environment model to capture the terrain topology and semantics during online operation.
arXiv Detail & Related papers (2022-04-23T05:18:39Z)
- Exploring Deep 3D Spatial Encodings for Large-Scale 3D Scene Understanding [19.134536179555102]
We propose an alternative approach that overcomes the limitations of CNN-based methods by encoding the spatial features of raw 3D point clouds into undirected graph models (see the sketch below).
The proposed method achieves accuracy on par with the state of the art, with improved training time and model stability, indicating strong potential for further research.
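The summary doesn't say how the undirected graph is built; a common choice is a k-nearest-neighbor graph. A minimal SciPy sketch, under that assumption:

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_graph(points, k=16):
    """Build an undirected k-NN graph over a raw point cloud.
    Returns edges as a set of (i, j) pairs with i < j."""
    tree = cKDTree(points)
    # Query k+1 neighbors because each point is its own nearest neighbor.
    _, nbrs = tree.query(points, k=k + 1)
    edges = set()
    for i, row in enumerate(nbrs):
        for j in row[1:]:
            edges.add((min(i, j), max(i, j)))  # undirected: store each edge once
    return edges

pts = np.random.rand(1000, 3)
print(len(knn_graph(pts, k=8)))
```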
arXiv Detail & Related papers (2020-11-29T12:56:19Z)
- Cylinder3D: An Effective 3D Framework for Driving-scene LiDAR Semantic Segmentation [87.54570024320354]
State-of-the-art methods for large-scale driving-scene LiDAR semantic segmentation often project and process the point clouds in 2D space.
A straightforward way to avoid the lossy 3D-to-2D projection is to keep the 3D representation and process the points in 3D space.
We develop a framework based on a 3D cylinder partition and 3D cylinder convolutions, termed Cylinder3D, which exploits the 3D topology relations and structures of driving-scene point clouds (the partition step is sketched below).
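A minimal sketch of the cylindrical partition idea: convert Cartesian LiDAR points to cylindrical coordinates and bin them into a (rho, phi, z) voxel grid. The bin counts and ranges below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def cylinder_partition(points, grid=(480, 360, 32),
                       rho_max=50.0, z_range=(-4.0, 2.0)):
    """Assign each LiDAR point (x, y, z) to a cylindrical voxel
    index (rho_bin, phi_bin, z_bin)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rho = np.sqrt(x**2 + y**2)            # radial distance from the sensor
    phi = np.arctan2(y, x)                # azimuth angle in [-pi, pi]

    r_bins, p_bins, z_bins = grid
    r_idx = np.clip((rho / rho_max * r_bins).astype(int), 0, r_bins - 1)
    p_idx = np.clip(((phi + np.pi) / (2 * np.pi) * p_bins).astype(int),
                    0, p_bins - 1)
    z_lo, z_hi = z_range
    z_idx = np.clip(((z - z_lo) / (z_hi - z_lo) * z_bins).astype(int),
                    0, z_bins - 1)
    return np.stack([r_idx, p_idx, z_idx], axis=1)

pts = np.random.randn(10000, 3) * np.array([20., 20., 1.])
print(cylinder_partition(pts).shape)  # (10000, 3) voxel indices
```

Note that radial bins cover larger areas farther from the sensor, matching the uneven density of LiDAR sweeps; this is the motivation for the cylindrical rather than Cartesian partition.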
arXiv Detail & Related papers (2020-08-04T13:56:19Z)
- KAPLAN: A 3D Point Descriptor for Shape Completion [80.15764700137383]
KAPLAN is a 3D point descriptor that aggregates local shape information via a series of 2D convolutions over multiple projection planes.
In each plane, point properties such as normals or point-to-plane distances are aggregated into a 2D grid and abstracted into a feature representation by an efficient 2D convolutional encoder.
Experiments on public datasets show that KAPLAN achieves state-of-the-art performance for 3D shape completion.
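A rough sketch of the grid-aggregation step described above: scatter one point property (here, absolute point-to-plane distance, taking the local tangent plane to be the XY plane) into a regular 2D grid that a 2D convolutional encoder could then consume. Grid size, extent, and the choice of plane are assumptions, not KAPLAN's actual configuration.

```python
import numpy as np

def aggregate_to_grid(local_pts, grid_res=16, extent=1.0):
    """Project neighborhood points onto the XY plane and average their
    point-to-plane (|z|) distances into a grid_res x grid_res 2D grid."""
    u = ((local_pts[:, 0] + extent) / (2 * extent) * grid_res).astype(int)
    v = ((local_pts[:, 1] + extent) / (2 * extent) * grid_res).astype(int)
    ok = (u >= 0) & (u < grid_res) & (v >= 0) & (v < grid_res)

    grid = np.zeros((grid_res, grid_res))
    count = np.zeros((grid_res, grid_res))
    # Accumulate property values and hit counts per cell, then average.
    np.add.at(grid, (v[ok], u[ok]), np.abs(local_pts[ok, 2]))
    np.add.at(count, (v[ok], u[ok]), 1.0)
    return grid / np.maximum(count, 1.0)

neighborhood = np.random.randn(256, 3) * 0.3
print(aggregate_to_grid(neighborhood).shape)  # (16, 16) feature grid
```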
arXiv Detail & Related papers (2020-07-31T21:56:08Z)
- SeqXY2SeqZ: Structure Learning for 3D Shapes by Sequentially Predicting 1D Occupancy Segments From 2D Coordinates [61.04823927283092]
We propose to represent 3D shapes using 2D functions, where the output of the function at each 2D location is a sequence of line segments inside the shape.
We implement this approach using a Seq2Seq model with attention, called SeqXY2SeqZ, which learns the mapping from a sequence of 2D coordinates along two arbitrary axes to a sequence of 1D locations along the third axis.
Our experiments show that SeqXY2SeqZ outperforms state-of-the-art methods on widely used benchmarks.
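To make the representation concrete: for a voxelized shape, the "function" at each (x, y) location returns the runs of occupied z intervals along the third axis. A tiny run-length sketch of that idea, not the paper's Seq2Seq model:

```python
import numpy as np

def occupancy_segments(voxels, x, y):
    """Return the occupied 1D segments [(z_start, z_end), ...] along z
    for a given (x, y) column of a boolean voxel grid."""
    col = voxels[x, y, :]
    segs, start = [], None
    for z, occ in enumerate(col):
        if occ and start is None:
            start = z                      # segment begins
        elif not occ and start is not None:
            segs.append((start, z - 1))    # segment ends
            start = None
    if start is not None:
        segs.append((start, len(col) - 1))
    return segs

vox = np.zeros((4, 4, 8), dtype=bool)
vox[1, 2, 1:3] = vox[1, 2, 5:7] = True
print(occupancy_segments(vox, 1, 2))  # [(1, 2), (5, 6)]
```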
arXiv Detail & Related papers (2020-03-12T00:24:36Z)
- PUGeo-Net: A Geometry-centric Network for 3D Point Cloud Upsampling [103.09504572409449]
We propose a novel deep neural network based method, called PUGeo-Net, to generate uniform dense point clouds.
Thanks to its geometry-centric nature, PUGeo-Net works well for both CAD models with sharp features and scanned models with rich geometric details.
arXiv Detail & Related papers (2020-02-24T14:13:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.