Meta-Sampler: Almost-Universal yet Task-Oriented Sampling for Point
Clouds
- URL: http://arxiv.org/abs/2203.16001v1
- Date: Wed, 30 Mar 2022 02:21:34 GMT
- Title: Meta-Sampler: Almost-Universal yet Task-Oriented Sampling for Point
Clouds
- Authors: Ta-Ying Cheng, Qingyong Hu, Qian Xie, Niki Trigoni, Andrew Markham
- Abstract summary: We show how we can train an almost-universal meta-sampler across multiple tasks.
This meta-sampler can then be rapidly fine-tuned when applied to different datasets, networks, or even different tasks.
- Score: 46.33828400918886
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Sampling is a key operation in point-cloud tasks and acts to increase
computational efficiency and tractability by discarding redundant points.
Universal sampling algorithms (e.g., Farthest Point Sampling) work without
modification across different tasks, models, and datasets, but by their very
nature are agnostic about the downstream task/model. As such, they have no
implicit knowledge about which points would be best to keep and which to
reject. Recent work has shown how task-specific point cloud sampling (e.g.,
SampleNet) can be used to outperform traditional sampling approaches by
learning which points are more informative. However, these learnable samplers
face two inherent issues: i) overfitting to a model rather than a task, and
ii) requiring training of the sampling network from scratch, in addition to
the task network, somewhat countering the original objective of down-sampling
to increase efficiency. In this work, we propose an almost-universal sampler,
in our quest for a sampler that can learn to preserve the most useful points
for a particular task, yet be inexpensive to adapt to different tasks, models,
or datasets. We first demonstrate how training over multiple models for the
same task (e.g., shape reconstruction) significantly outperforms the vanilla
SampleNet in terms of accuracy by not overfitting the sample network to a
particular task network. Second, we show how we can train an almost-universal
meta-sampler across multiple tasks. This meta-sampler can then be rapidly
fine-tuned when applied to different datasets, networks, or even different
tasks, thus amortizing the initial cost of training.
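To make the contrast concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' released code) of the two ingredients the abstract discusses: a task-agnostic farthest point sampling baseline, and a small learnable sampler meta-trained across several frozen, pretrained task networks with a first-order MAML-style update. All names (SamplerNet, task_nets, loss_fns) are illustrative assumptions, and the soft attention-based sampling head is a simplification; SampleNet-style methods instead generate points and softly project them onto the input cloud.

```python
# Hypothetical sketch of the meta-sampler setting; names are assumptions.
import copy
import torch
import torch.nn as nn

def farthest_point_sample(pts, k):
    """Task-agnostic FPS baseline: greedily keep the point farthest
    from everything already selected."""
    B, N, _ = pts.shape
    idx = torch.zeros(B, k, dtype=torch.long)
    min_dist = torch.full((B, N), float("inf"))
    farthest = torch.zeros(B, dtype=torch.long)   # seed with point 0
    batch = torch.arange(B)
    for i in range(k):
        idx[:, i] = farthest
        d = ((pts - pts[batch, farthest].unsqueeze(1)) ** 2).sum(-1)
        min_dist = torch.minimum(min_dist, d)     # distance to nearest pick
        farthest = min_dist.argmax(-1)
    return torch.gather(pts, 1, idx.unsqueeze(-1).expand(-1, -1, 3))

class SamplerNet(nn.Module):
    """Toy differentiable sampler: k learned queries attend over the
    input points and emit k soft-sampled points."""
    def __init__(self, k, dim=64):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(k, dim))
        self.keys = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, pts):                                   # pts: (B, N, 3)
        attn = torch.softmax(self.queries @ self.keys(pts).transpose(1, 2), -1)
        return attn @ pts                                     # (B, k, 3)

def meta_train_step(sampler, meta_opt, task_nets, loss_fns, batches, inner_lr=1e-3):
    """One first-order MAML-style step: adapt a clone of the sampler to
    each frozen, pretrained task network, then accumulate the
    post-adaptation gradients back into the shared sampler.
    (The same batch is reused for the inner and outer loss for brevity;
    MAML-style training would use separate support/query splits.)"""
    meta_opt.zero_grad()
    for net, loss_fn, pts in zip(task_nets, loss_fns, batches):
        fast = copy.deepcopy(sampler)                         # inner-loop clone
        inner = loss_fn(net(fast(pts)), pts)                  # adapt to this task
        grads = torch.autograd.grad(inner, list(fast.parameters()))
        with torch.no_grad():                                 # one inner SGD step
            for p, g in zip(fast.parameters(), grads):
                p -= inner_lr * g
        outer = loss_fn(net(fast(pts)), pts)                  # post-adaptation loss
        ograds = torch.autograd.grad(outer, list(fast.parameters()))
        with torch.no_grad():                                 # first-order outer grad
            for p, g in zip(sampler.parameters(), ograds):
                p.grad = g.clone() if p.grad is None else p.grad + g
    meta_opt.step()
```

In the paper's framing, once such a sampler is meta-trained, only a few fine-tuning steps should adapt it to a new task, network, or dataset, amortizing the training cost that a from-scratch SampleNet pays every time.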
Related papers
- AU-PD: An Arbitrary-size and Uniform Downsampling Framework for Point
Clouds [6.786701761788659]
We introduce AU-PD, a novel task-aware sampling framework that directly downsamples a point cloud to any smaller size.
We refine the pre-sampled set to make it task-aware, driven by downstream task losses.
With the attention mechanism and proper training scheme, the framework learns to adaptively refine the pre-sampled set of different sizes.
arXiv Detail & Related papers (2022-11-02T13:37:16Z)
- APSNet: Attention Based Point Cloud Sampling [0.7734726150561088]
We develop an attention-based point cloud sampling network (APSNet) to tackle task-oriented point cloud sampling.
Both supervised learning and knowledge-distillation-based self-supervised learning schemes are proposed for training APSNet.
Experiments demonstrate the superior performance of APSNet against state-of-the-art methods in various downstream tasks.
arXiv Detail & Related papers (2022-10-11T17:30:46Z)
- Self-Supervised Arbitrary-Scale Point Clouds Upsampling via Implicit
Neural Representation [79.60988242843437]
We propose a novel approach that achieves self-supervised and magnification-flexible point cloud upsampling simultaneously.
Experimental results demonstrate that our self-supervised learning based scheme achieves competitive or even better performance than supervised learning based state-of-the-art methods.
arXiv Detail & Related papers (2022-04-18T07:18:25Z)
- The Effect of Diversity in Meta-Learning [79.56118674435844]
Few-shot learning aims to learn representations that can tackle novel tasks given a small number of examples.
Recent studies show that task distribution plays a vital role in the model's performance.
We study different task distributions on a myriad of models and datasets to evaluate the effect of task diversity on meta-learning algorithms.
arXiv Detail & Related papers (2022-01-27T19:39:07Z)
- Beyond Farthest Point Sampling in Point-Wise Analysis [52.218037492342546]
We propose a novel data-driven sampler learning strategy for point-wise analysis tasks.
We learn sampling and downstream applications jointly.
Our experiments show that jointly learning the sampler and the task brings remarkable improvements over previous baseline methods.
arXiv Detail & Related papers (2021-07-09T08:08:44Z)
- Adaptive Task Sampling for Meta-Learning [79.61146834134459]
The key idea of meta-learning for few-shot classification is to mimic the few-shot situations faced at test time.
We propose an adaptive task sampling method to improve the generalization performance.
arXiv Detail & Related papers (2020-07-17T03:15:53Z)
- Learning a Unified Sample Weighting Network for Object Detection [113.98404690619982]
Region sampling or weighting is critically important to the success of modern region-based object detectors.
We argue that sample weighting should be data-dependent and task-dependent.
We propose a unified sample weighting network to predict a sample's task weights.
arXiv Detail & Related papers (2020-06-11T16:19:16Z)