APSNet: Attention Based Point Cloud Sampling
- URL: http://arxiv.org/abs/2210.05638v1
- Date: Tue, 11 Oct 2022 17:30:46 GMT
- Title: APSNet: Attention Based Point Cloud Sampling
- Authors: Yang Ye and Xiulong Yang and Shihao Ji
- Abstract summary: We develop an attention-based point cloud sampling network (APSNet) to tackle this problem.
Both supervised learning and knowledge distillation-based self-supervised learning of APSNet are proposed.
Experiments demonstrate the superior performance of APSNet against state-of-the-art methods in various downstream tasks.
- Score: 0.7734726150561088
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Processing large point clouds is a challenging task. Therefore, the data is
often downsampled to a smaller size such that it can be stored, transmitted and
processed more efficiently without incurring significant performance
degradation. Traditional task-agnostic sampling methods, such as farthest point
sampling (FPS), do not consider downstream tasks when sampling point clouds,
and thus often sample points that are uninformative for those tasks. This paper
explores a task-oriented sampling for 3D point clouds, and aims to sample a
subset of points that are tailored specifically to a downstream task of
interest. Similar to FPS, we assume that the point to be sampled next should depend
heavily on the points that have already been sampled. We thus formulate point
cloud sampling as a sequential generation process, and develop an
attention-based point cloud sampling network (APSNet) to tackle this problem.
At each time step, APSNet attends to all the points in a cloud by utilizing the
history of previously sampled points, and samples the most informative one.
Both supervised learning and knowledge distillation-based self-supervised
learning of APSNet are proposed. Moreover, joint training of APSNet over
multiple sample sizes is investigated, leading to a single APSNet that can
generate samples of arbitrary length with strong performance. Extensive
experiments demonstrate the superior performance of APSNet against
state-of-the-art methods in various downstream tasks, including 3D point cloud
classification, reconstruction, and registration.
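The sequential selection described in the abstract — attend to all points using the history of previously sampled points, then pick the most informative one — can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation: the learned attention network is replaced by fixed random key/query projections, and building the query from the mean of the sampled history is an assumption for illustration.

```python
import numpy as np

def attention_sample(points, k, dim=16, seed=0):
    """Greedy sequential sampling: at each step, score every point by
    attending with a query built from the previously sampled points,
    then pick the highest-scoring unsampled point.

    points: (N, 3) array; returns the indices of k sampled points.
    Sketch only: random projections stand in for APSNet's learned
    attention weights."""
    rng = np.random.default_rng(seed)
    Wk = rng.normal(size=(3, dim))          # stand-in key projection
    Wq = rng.normal(size=(3, dim))          # stand-in query projection
    keys = points @ Wk                      # (N, dim) keys, one per point

    # Seed the history with the point farthest from the origin.
    sampled = [int(np.argmax(np.linalg.norm(points, axis=1)))]
    for _ in range(k - 1):
        history = points[sampled]           # previously sampled points
        query = history.mean(axis=0) @ Wq   # (dim,) query from the history
        scores = keys @ query / np.sqrt(dim)
        scores[sampled] = -np.inf           # never resample a point
        sampled.append(int(np.argmax(scores)))
    return np.asarray(sampled)
```

For example, `attention_sample(cloud, 32)` returns 32 distinct indices into `cloud`; in APSNet proper, the scoring network is trained end-to-end against the downstream task loss rather than fixed.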
Related papers
- Point Cloud Pre-training with Diffusion Models [62.12279263217138]
We propose a novel pre-training method called Point cloud Diffusion pre-training (PointDif)
PointDif achieves substantial improvement across various real-world datasets for diverse downstream tasks such as classification, segmentation and detection.
arXiv Detail & Related papers (2023-11-25T08:10:05Z) - Test-Time Augmentation for 3D Point Cloud Classification and
Segmentation [40.62640761825697]
Data augmentation is a powerful technique to enhance the performance of a deep learning task.
This work explores test-time augmentation (TTA) for 3D point clouds.
arXiv Detail & Related papers (2023-11-22T04:31:09Z) - Attention-based Point Cloud Edge Sampling [0.0]
Point cloud sampling is a relatively underexplored research topic for this data representation.
This paper proposes a non-generative Attention-based Point cloud Edge Sampling method (APES)
Both qualitative and quantitative experimental results show the superior performance of our sampling method on common benchmark tasks.
arXiv Detail & Related papers (2023-02-28T15:36:17Z) - AU-PD: An Arbitrary-size and Uniform Downsampling Framework for Point
Clouds [6.786701761788659]
We introduce the AU-PD, a novel task-aware sampling framework that directly downsamples point cloud to any smaller size.
We refine the pre-sampled set to make it task-aware, driven by downstream task losses.
With the attention mechanism and proper training scheme, the framework learns to adaptively refine the pre-sampled set of different sizes.
arXiv Detail & Related papers (2022-11-02T13:37:16Z) - Self-Supervised Arbitrary-Scale Point Clouds Upsampling via Implicit
Neural Representation [79.60988242843437]
We propose a novel approach that achieves self-supervised and magnification-flexible point cloud upsampling simultaneously.
Experimental results demonstrate that our self-supervised learning based scheme achieves competitive or even better performance than supervised learning based state-of-the-art methods.
arXiv Detail & Related papers (2022-04-18T07:18:25Z) - Meta-Sampler: Almost-Universal yet Task-Oriented Sampling for Point
Clouds [46.33828400918886]
We show how we can train an almost-universal meta-sampler across multiple tasks.
This meta-sampler can then be rapidly fine-tuned when applied to different datasets, networks, or even different tasks.
arXiv Detail & Related papers (2022-03-30T02:21:34Z) - A Conditional Point Diffusion-Refinement Paradigm for 3D Point Cloud
Completion [69.32451612060214]
Real-scanned 3D point clouds are often incomplete, and it is important to recover complete point clouds for downstream applications.
Most existing point cloud completion methods use Chamfer Distance (CD) loss for training.
We propose a novel Point Diffusion-Refinement (PDR) paradigm for point cloud completion.
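The Chamfer Distance (CD) loss mentioned above is a standard symmetric point-set distance: each point is matched to its nearest neighbor in the other set and the distances are averaged. A minimal NumPy version of one common (squared-distance) convention:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer Distance between point sets a (N, 3) and b (M, 3).
    For each point, take the squared distance to its nearest neighbor in
    the other set; average both directions and sum them. One common CD
    convention; some papers average the two terms or use unsquared
    distances instead."""
    # Pairwise squared distances via broadcasting, shape (N, M).
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```

CD is cheap and differentiable, but, as the PDR paper notes, it only constrains nearest-neighbor matches, which is one motivation for refinement-based alternatives.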
arXiv Detail & Related papers (2021-12-07T06:59:06Z) - Beyond Farthest Point Sampling in Point-Wise Analysis [52.218037492342546]
We propose a novel data-driven sampler learning strategy for point-wise analysis tasks.
We learn sampling and downstream applications jointly.
Our experiments show that joint learning of the sampler and the task brings remarkable improvements over previous baseline methods.
arXiv Detail & Related papers (2021-07-09T08:08:44Z) - Learning Semantic Segmentation of Large-Scale Point Clouds with Random
Sampling [52.464516118826765]
We introduce RandLA-Net, an efficient and lightweight neural architecture to infer per-point semantics for large-scale point clouds.
The key to our approach is to use random point sampling instead of more complex point selection approaches.
Our RandLA-Net can process 1 million points in a single pass up to 200x faster than existing approaches.
arXiv Detail & Related papers (2021-07-06T05:08:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.