Point Set Self-Embedding
- URL: http://arxiv.org/abs/2202.13577v1
- Date: Mon, 28 Feb 2022 07:03:33 GMT
- Title: Point Set Self-Embedding
- Authors: Ruihui Li, Xianzhi Li, Tien-Tsin Wong, and Chi-Wing Fu
- Abstract summary: This work presents an innovative method for point set self-embedding, which encodes the structural information of a dense point set into its sparser version in a visual but imperceptible form.
The self-embedded point set can function as the ordinary downsampled one and be visualized efficiently on mobile devices.
We can leverage the self-embedded information to fully restore the original point set for detailed analysis on remote servers.
- Score: 63.23565826873297
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This work presents an innovative method for point set self-embedding, which
encodes the structural information of a dense point set into its sparser
version in a visual but imperceptible form. The self-embedded point set can
function as the ordinary downsampled one and be visualized efficiently on
mobile devices. Particularly, we can leverage the self-embedded information to
fully restore the original point set for detailed analysis on remote servers.
This task is challenging since both the self-embedded point set and the
restored point set should resemble the original one. To achieve a learnable
self-embedding scheme, we design a novel framework with two jointly-trained
networks: one to encode the input point set into its self-embedded sparse point
set and the other to leverage the embedded information for inverting the
original point set back. Further, we develop a pair of up-shuffle and
down-shuffle units in the two networks, and formulate loss terms to encourage
the shape similarity and point distribution in the results. Extensive
qualitative and quantitative results demonstrate the effectiveness of our
method on both synthetic and real-scanned datasets.
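The paired up-shuffle and down-shuffle units mentioned in the abstract rearrange point data between a dense set and a channel-rich sparse set. A minimal numpy sketch of that rearrangement idea (analogous to pixel shuffle in images) is shown below; the shuffle factor `r` and the plain `reshape`-based folding are illustrative assumptions, since the paper's actual units operate on learned features inside the two networks:

```python
import numpy as np

def down_shuffle(points, r):
    """Fold groups of r consecutive points (N, C) into (N//r, r*C) features.

    Sketch of the "down-shuffle" rearrangement: fewer points, more channels.
    """
    n, c = points.shape
    assert n % r == 0, "point count must be divisible by the shuffle factor"
    return points.reshape(n // r, r * c)

def up_shuffle(features, r):
    """Inverse rearrangement: (M, r*C) features back to (M*r, C) points."""
    m, rc = features.shape
    assert rc % r == 0, "channel count must be divisible by the shuffle factor"
    return features.reshape(m * r, rc // r)

dense = np.random.rand(1024, 3)          # a dense point set
sparse_feats = down_shuffle(dense, 4)    # (256, 12): sparser, channel-rich
restored = up_shuffle(sparse_feats, 4)   # (1024, 3): exact inverse here
assert np.allclose(restored, dense)
```

In the paper the restoration is learned rather than an exact reshape inverse, but the sketch shows why the two units form a natural encode/decode pair: no information is lost in the rearrangement itself.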
Related papers
- Decoupled Sparse Priors Guided Diffusion Compression Model for Point Clouds [26.32608616696905]
Lossy compression methods rely on an autoencoder to transform a point cloud into latent points for storage.
We propose a sparse priors guided method that achieves high reconstruction quality, especially at high compression ratios.
arXiv Detail & Related papers (2024-11-21T05:41:35Z)
- SVDFormer: Complementing Point Cloud via Self-view Augmentation and Self-structure Dual-generator [30.483163963846206]
We propose a novel network, SVDFormer, to tackle two specific challenges in point cloud completion.
We first design a Self-view Fusion Network that leverages multiple-view depth image information to observe incomplete self-shape.
We then introduce a refinement module, called Self-structure Dual-generator, in which we incorporate learned shape priors and geometric self-similarities for producing new points.
arXiv Detail & Related papers (2023-07-17T13:55:31Z)
- Self-positioning Point-based Transformer for Point Cloud Understanding [18.394318824968263]
Self-Positioning point-based Transformer (SPoTr) is designed to capture both local and global shape contexts with reduced complexity.
SPoTr achieves an accuracy gain of 2.6% over the previous best models on shape classification with ScanObjectNN.
arXiv Detail & Related papers (2023-03-29T04:27:11Z)
- SASA: Semantics-Augmented Set Abstraction for Point-based 3D Object Detection [78.90102636266276]
We propose a novel set abstraction method named Semantics-Augmented Set Abstraction (SASA)
Based on the estimated point-wise foreground scores, we then propose a semantics-guided point sampling algorithm to help retain more important foreground points during down-sampling.
In practice, SASA shows to be effective in identifying valuable points related to foreground objects and improving feature learning for point-based 3D detection.
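SASA's semantics-guided sampling biases down-sampling toward points with high predicted foreground scores. A hedged numpy sketch of that bias follows; the top-k selection is a simplifying assumption, since the paper actually blends point-wise scores into farthest-point sampling rather than taking a plain top-k:

```python
import numpy as np

def semantics_guided_sample(points, fg_scores, k):
    """Keep the k points with the highest foreground scores.

    Illustrative only: SASA combines such scores with farthest-point
    sampling, but the bias toward foreground points is the same.
    """
    idx = np.argsort(-fg_scores)[:k]     # indices of the k largest scores
    return points[idx], idx

rng = np.random.default_rng(0)
pts = rng.random((100, 3))
scores = rng.random(100)                 # stand-in for predicted scores
kept, idx = semantics_guided_sample(pts, scores, 16)
assert kept.shape == (16, 3)
assert scores[idx].min() >= np.sort(scores)[-16]
```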
arXiv Detail & Related papers (2022-01-06T08:54:47Z)
- SPU-Net: Self-Supervised Point Cloud Upsampling by Coarse-to-Fine Reconstruction with Self-Projection Optimization [52.20602782690776]
It is expensive and tedious to obtain large-scale paired sparse-dense point sets for training from real scanned sparse data.
We propose a self-supervised point cloud upsampling network, named SPU-Net, to capture the inherent upsampling patterns of points lying on the underlying object surface.
We conduct various experiments on both synthetic and real-scanned datasets, and the results demonstrate that we achieve comparable performance to the state-of-the-art supervised methods.
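Several of the papers listed here, like the main work, judge results by how closely an output point set resembles a reference set. A common metric for such comparisons is the symmetric Chamfer distance; the abstracts do not name their exact metric, so the following plain numpy version is only a representative sketch:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3).

    Computes all pairwise squared distances (O(N*M)) and averages each
    set's nearest-neighbor distance to the other set.
    """
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # (N, M) squared dists
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

pts = np.random.rand(64, 3)
assert chamfer_distance(pts, pts) == 0.0   # identical sets score zero
```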
arXiv Detail & Related papers (2020-12-08T14:14:09Z)
- Cascaded Refinement Network for Point Cloud Completion with Self-supervision [74.80746431691938]
We introduce a two-branch network for shape completion.
The first branch is a cascaded shape completion sub-network to synthesize complete objects.
The second branch is an auto-encoder to reconstruct the original partial input.
arXiv Detail & Related papers (2020-10-17T04:56:22Z)
- Point-Set Anchors for Object Detection, Instance Segmentation and Pose Estimation [85.96410825961966]
We argue that the image features extracted at a central point contain limited information for predicting distant keypoints or bounding box boundaries.
To facilitate inference, we propose to instead perform regression from a set of points placed at more advantageous positions.
We apply this proposed framework, called Point-Set Anchors, to object detection, instance segmentation, and human pose estimation.
arXiv Detail & Related papers (2020-07-06T15:59:56Z)
- Cascaded Refinement Network for Point Cloud Completion [74.80746431691938]
We propose a cascaded refinement network together with a coarse-to-fine strategy to synthesize the detailed object shapes.
Considering the local details of partial input with the global shape information together, we can preserve the existing details in the incomplete point set.
We also design a patch discriminator that guarantees every local area has the same pattern with the ground truth to learn the complicated point distribution.
arXiv Detail & Related papers (2020-04-07T13:03:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.