Sparse Convolutions on Continuous Domains for Point Cloud and Event
Stream Networks
- URL: http://arxiv.org/abs/2012.01170v1
- Date: Wed, 2 Dec 2020 13:05:02 GMT
- Authors: Dominic Jack, Frederic Maire, Simon Denman, Anders Eriksson
- Abstract summary: We present an elegant sparse matrix-based interpretation of the convolution operator for unstructured continuous data like point clouds and event streams.
We demonstrate networks built with these operations can train an order of magnitude or more faster than top existing methods.
We also apply our operator to event stream processing, achieving state-of-the-art results on multiple tasks with streams of hundreds of thousands of events.
- Score: 14.664758777845572
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image convolutions have been a cornerstone of a great number of deep learning
advances in computer vision. However, the research community has yet to settle
on an equivalent operator for sparse, unstructured continuous data such as
point clouds and event streams. We present an elegant sparse matrix-based
interpretation of the convolution operator for these cases, which is consistent
with the mathematical definition of convolution and efficient during training.
On benchmark point cloud classification problems we demonstrate networks built
with these operations can train an order of magnitude or more faster than top
existing methods, whilst maintaining comparable accuracy and requiring a tiny
fraction of the memory. We also apply our operator to event stream processing,
achieving state-of-the-art results on multiple tasks with streams of hundreds
of thousands of events.
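The central construction, in which each point's radius neighbourhood is frozen into a sparse matrix so that the convolution becomes an ordinary sparse-dense product, can be sketched as follows. This is a minimal illustration of the general idea, not the authors' implementation; the Gaussian kernel, the radius, and the brute-force O(n^2) neighbour search are arbitrary choices made for brevity.

```python
import numpy as np
from scipy.sparse import coo_matrix

def sparse_continuous_conv(points, feats, radius, kernel_fn):
    """One 'convolution as sparse matrix product' layer (illustrative sketch).

    out[i] = sum_{j : |x_j - x_i| < radius} kernel_fn(x_j - x_i) * feats[j]

    The neighbourhood structure is assembled once into a sparse matrix, so
    applying (and differentiating through) the layer is a single product.
    """
    n = len(points)
    rows, cols, vals = [], [], []
    for i in range(n):
        for j in range(n):
            offset = points[j] - points[i]
            if np.linalg.norm(offset) < radius:
                rows.append(i)
                cols.append(j)
                vals.append(kernel_fn(offset))
    A = coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsr()
    return A @ feats

# toy example: 2-D points, scalar features, Gaussian kernel
rng = np.random.default_rng(0)
pts = rng.random((50, 2))
f = rng.random((50, 1))
out = sparse_continuous_conv(pts, f, radius=0.2,
                             kernel_fn=lambda d: np.exp(-(d @ d)))
```

Because the neighbourhood structure is fixed per cloud, the sparse matrix can be built once and reused across evaluations, which is consistent with the training-efficiency claim in the abstract.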
Related papers
- PARTIME: Scalable and Parallel Processing Over Time with Deep Neural
Networks [68.96484488899901]
We present PARTIME, a library designed to speed up neural networks whenever data is continuously streamed over time.
PARTIME starts processing each data sample at the time in which it becomes available from the stream.
Experiments are performed in order to empirically compare PARTIME with classic non-parallel neural computations in online learning.
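The sample-at-a-time idea the summary describes can be illustrated generically as follows. This is a hedged sketch of the concept only, not PARTIME's actual API: each sample is handed to a worker pool the moment it arrives from the stream, rather than waiting for a full batch to assemble.

```python
from concurrent.futures import ThreadPoolExecutor

def process_stream(stream, step, workers=4):
    """Submit each stream sample for processing as soon as it arrives
    (generic illustration of pipeline-style stream processing)."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # work starts immediately on submission; results are gathered in
        # arrival order once each worker finishes
        futures = [pool.submit(step, x) for x in stream]
        return [f.result() for f in futures]
```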
arXiv Detail & Related papers (2022-10-17T14:49:14Z)
- Sampling Streaming Data with Parallel Vector Quantization -- PVQ [0.0]
We present a vector quantization-based sampling method, which substantially reduces the class imbalance in data streams.
We built models using parallel processing, batch processing, and randomly selecting samples.
We show that the accuracy of classification models improves when the data streams are pre-processed with our method.
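The gist of vector-quantization-based downsampling can be sketched as below. This is an illustration of the general technique, not the paper's parallel PVQ pipeline: each class is summarised by the points nearest to its k-means centroids, capping every class at k samples and thereby shrinking the imbalance.

```python
import numpy as np

def _kmeans_centroids(X, k, iters=20, seed=0):
    """Plain k-means (illustrative helper, not the paper's parallel version)."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), size=min(k, len(X)), replace=False)].copy()
    for _ in range(iters):
        labels = ((X[:, None, :] - C[None]) ** 2).sum(-1).argmin(1)
        for c in range(len(C)):
            if (labels == c).any():
                C[c] = X[labels == c].mean(0)
    return C

def vq_sample(X, y, k):
    """Keep, for every class, the points nearest to that class's k-means
    centroids, so each class contributes at most k representative samples."""
    keep = []
    for cls in np.unique(y):
        idx = np.flatnonzero(y == cls)
        C = _kmeans_centroids(X[idx], k)
        # for each centroid, the index of its nearest real point
        nearest = ((X[idx][:, None, :] - C[None]) ** 2).sum(-1).argmin(0)
        keep.extend(idx[np.unique(nearest)])
    keep = np.array(sorted(keep))
    return X[keep], y[keep]

# toy imbalanced chunk: 200 majority vs 20 minority samples
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (20, 2))])
y = np.array([0] * 200 + [1] * 20)
Xs, ys = vq_sample(X, y, k=20)
```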
arXiv Detail & Related papers (2022-10-04T17:59:44Z)
- Centroids Matching: an efficient Continual Learning approach operating in the embedding space [15.705568893476947]
Catastrophic forgetting (CF) occurs when a neural network loses the information previously learned while training on a set of samples from a different distribution.
We propose a novel regularization method called Centroids Matching, that fights CF by operating in the feature space produced by the neural network.
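A centroids-matching style regulariser can be sketched as follows. This is a simplified reading of the summary, not the paper's exact loss: class centroids stored after an earlier task anchor the feature space, and the penalty grows as new-task embeddings drift away from them.

```python
import numpy as np

def class_centroids(emb, labels, n_classes):
    """Mean embedding per class (the 'centroids' stored after a task)."""
    return np.stack([emb[labels == c].mean(0) for c in range(n_classes)])

def centroids_matching_penalty(emb, labels, centroids):
    """Mean squared distance between each sample's embedding and the stored
    centroid of its class; added to the task loss, it discourages drift."""
    diffs = emb - centroids[labels]
    return float((diffs ** 2).sum(1).mean())

# toy setup: centroids computed from an earlier task's embeddings
rng = np.random.default_rng(2)
old_emb = rng.normal(size=(30, 4))
old_lab = np.arange(30) % 3
cent = class_centroids(old_emb, old_lab, 3)
```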
arXiv Detail & Related papers (2022-08-03T13:17:16Z)
- CloudAttention: Efficient Multi-Scale Attention Scheme For 3D Point Cloud Learning [81.85951026033787]
We employ transformers in this work and incorporate them into a hierarchical framework for shape classification and part and scene segmentation.
We also compute efficient and dynamic global cross attentions by leveraging sampling and grouping at each iteration.
The proposed hierarchical model achieves state-of-the-art shape classification in mean accuracy and yields results on par with previous segmentation methods.
arXiv Detail & Related papers (2022-07-31T21:39:15Z)
- Self-Supervised Arbitrary-Scale Point Clouds Upsampling via Implicit Neural Representation [79.60988242843437]
We propose a novel approach that achieves self-supervised and magnification-flexible point clouds upsampling simultaneously.
Experimental results demonstrate that our self-supervised learning based scheme achieves competitive or even better performance than supervised learning based state-of-the-art methods.
arXiv Detail & Related papers (2022-04-18T07:18:25Z)
- Improving the performance of bagging ensembles for data streams through mini-batching [9.418151228755834]
Machine learning applications have to cope with dynamic environments where data are collected in the form of continuous data streams.
Stream processing algorithms have additional requirements regarding computational resources and adaptability to data evolution.
This paper proposes a mini-batching strategy that can improve memory access locality and performance of several ensemble algorithms for stream mining in multi-core environments.
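The mini-batching idea can be sketched as below: buffer a batch of stream items and let each ensemble member scan the whole buffer in turn (member-major order), which improves memory-access locality versus every member touching each item as it arrives. The `partial_fit` interface here is a hypothetical stand-in, not a specific library's API.

```python
def minibatch_train(stream, ensemble, batch_size):
    """Buffer `batch_size` items, then pass the whole buffer to each
    ensemble member in turn (member-major order for cache locality)."""
    buf = []
    for item in stream:
        buf.append(item)
        if len(buf) == batch_size:
            for member in ensemble:
                member.partial_fit(buf)
            buf = []
    if buf:  # flush the final, possibly short, batch
        for member in ensemble:
            member.partial_fit(buf)

class CountingMember:
    """Stand-in learner that just records the batch sizes it receives."""
    def __init__(self):
        self.calls = []
    def partial_fit(self, batch):
        self.calls.append(len(batch))

members = [CountingMember(), CountingMember()]
minibatch_train(range(10), members, batch_size=4)
```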
arXiv Detail & Related papers (2021-12-18T03:44:07Z)
- How Well Do Sparse Imagenet Models Transfer? [75.98123173154605]
Transfer learning is a classic paradigm by which models pretrained on large "upstream" datasets are adapted to yield good results on "downstream" datasets.
In this work, we perform an in-depth investigation of this phenomenon in the context of convolutional neural networks (CNNs) trained on the ImageNet dataset.
We show that sparse models can match or even outperform the transfer performance of dense models, even at high sparsities.
arXiv Detail & Related papers (2021-11-26T11:58:51Z)
- Differentiable Convolution Search for Point Cloud Processing [114.66038862207118]
We propose a novel differentiable convolution search paradigm on point clouds.
It can work in a purely data-driven manner and thus is capable of auto-creating a group of suitable convolutions for geometric shape modeling.
We also propose a joint optimization framework for simultaneous search of internal convolution and external architecture, and introduce an epsilon-greedy algorithm to alleviate the effect of discretization error.
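The epsilon-greedy rule referred to here is the generic one; a minimal sketch (not the paper's search procedure) is:

```python
import random

def epsilon_greedy(scores, eps, rng=random):
    """Exploit the best-scoring candidate with probability 1 - eps,
    explore a uniformly random candidate otherwise."""
    if rng.random() < eps:
        return rng.randrange(len(scores))
    return max(range(len(scores)), key=scores.__getitem__)
```

Mixing in occasional random picks keeps the search from committing too early to candidates favoured only because of discretization error in their scores.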
arXiv Detail & Related papers (2021-08-29T14:42:03Z)
- Deep Parametric Continuous Convolutional Neural Networks [92.87547731907176]
Parametric Continuous Convolution is a new learnable operator that operates over non-grid structured data.
Our experiments show significant improvement over the state-of-the-art in point cloud segmentation of indoor and outdoor scenes.
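The operator can be sketched as follows: instead of a fixed grid kernel, a small MLP maps each continuous offset x_j - x_i to a kernel weight. This is a bare-bones illustration with untrained, randomly initialised weights, not the paper's architecture.

```python
import numpy as np

def param_cont_conv(points, feats, W1, b1, W2, b2, radius):
    """Parametric continuous convolution sketch: the kernel weight for each
    point pair is produced by a one-hidden-layer MLP on the offset."""
    def mlp(d):
        h = np.tanh(d @ W1 + b1)
        return h @ W2 + b2  # scalar weight for this offset

    out = np.zeros_like(feats)
    for i, xi in enumerate(points):
        for j, xj in enumerate(points):
            d = xj - xi
            if np.linalg.norm(d) < radius:
                out[i] += mlp(d) * feats[j]
    return out

# toy example: 3-D points with 4-channel features, random MLP weights
rng = np.random.default_rng(3)
pts = rng.random((20, 3))
f = rng.random((20, 4))
W1, b1 = rng.normal(size=(3, 8)), np.zeros(8)
W2, b2 = rng.normal(size=8), 0.0
out = param_cont_conv(pts, f, W1, b1, W2, b2, radius=0.3)
```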
arXiv Detail & Related papers (2021-01-17T18:28:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.