Efficient Long-Range Convolutions for Point Clouds
- URL: http://arxiv.org/abs/2010.05295v1
- Date: Sun, 11 Oct 2020 17:42:54 GMT
- Title: Efficient Long-Range Convolutions for Point Clouds
- Authors: Yifan Peng, Lin Lin, Lexing Ying and Leonardo Zepeda-Núñez
- Abstract summary: We present a novel neural network layer that directly incorporates long-range information for a point cloud.
The LRC-layer is a particularly powerful tool when combined with local convolution.
We showcase this framework by introducing a neural network architecture that combines LRC-layers with short-range convolutional layers.
- Score: 16.433511770049336
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The efficient treatment of long-range interactions for point clouds is a
challenging problem in many scientific machine learning applications. To
extract global information, one usually needs a large window size, a large
number of layers, and/or a large number of channels. This can often
significantly increase the computational cost. In this work, we present a novel
neural network layer that directly incorporates long-range information for a
point cloud. This layer, dubbed the long-range convolutional (LRC)-layer,
leverages the convolutional theorem coupled with the non-uniform Fourier
transform. In a nutshell, the LRC-layer mollifies the point cloud to an
adequately sized regular grid, computes its Fourier transform, multiplies the
result by a set of trainable Fourier multipliers, computes the inverse Fourier
transform, and finally interpolates the result back to the point cloud. The
resulting global all-to-all convolution operation can be performed in
nearly-linear time asymptotically with respect to the number of input points.
The LRC-layer is a particularly powerful tool when combined with local
convolution, as together they offer efficient and seamless treatment of both
short- and long-range interactions. We showcase this framework by introducing a
neural network architecture that combines LRC-layers with short-range
convolutional layers to accurately learn the energy and force associated with
an $N$-body potential. We also exploit the induced two-level decomposition and
propose an efficient strategy to train the combined architecture with a reduced
number of samples.
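The recipe described above (mollify the point cloud onto a regular grid, take the Fourier transform, multiply by trainable Fourier multipliers, invert the transform, and interpolate back to the points) can be illustrated with a short sketch. The NumPy code below is a minimal, naive-summation version of the forward pass for scalar per-point features; the Gaussian mollifier, the kernel width, the grid size, and all names are illustrative assumptions rather than the authors' exact construction, and the paper's actual layer relies on the non-uniform FFT to reach nearly-linear complexity in the number of points.

    import numpy as np

    def lrc_layer_forward(points, features, multipliers, grid_size=32, sigma=0.05):
        """Hypothetical sketch of an LRC-layer forward pass.

        points:      (N, d) coordinates, assumed rescaled to the unit box [0, 1)^d
        features:    (N,) per-point scalar features
        multipliers: complex array of shape (grid_size,)*d, the trainable Fourier multipliers
        """
        d = points.shape[1]
        axes = [np.linspace(0.0, 1.0, grid_size, endpoint=False)] * d
        grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)       # (g, ..., g, d)

        # 1) Mollify: spread each point's feature onto the grid with a Gaussian kernel.
        #    (A naive O(N * grid_size^d) stand-in for the paper's non-uniform FFT.)
        diff = grid[..., None, :] - points                                 # (g, ..., g, N, d)
        kernel = np.exp(-np.sum(diff ** 2, axis=-1) / (2.0 * sigma ** 2))  # (g, ..., g, N)
        grid_values = kernel @ features                                    # (g, ..., g)

        # 2) Convolution theorem: pointwise multiplication in Fourier space.
        convolved = np.real(np.fft.ifftn(np.fft.fftn(grid_values) * multipliers))

        # 3) Interpolate the convolved field back to the original points.
        weights = kernel / kernel.sum(axis=tuple(range(d)), keepdims=True)
        return np.tensordot(convolved, weights, axes=(tuple(range(d)), tuple(range(d))))  # (N,)

As a sanity check, setting multipliers = np.ones((grid_size,) * d) makes the Fourier multiplication a no-op, so the output is simply the mollified point cloud re-sampled at the input points; in a trained network the multipliers encode the learned long-range interaction kernel and the output feeds the short-range convolutional layers.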
Related papers
- Fourier Controller Networks for Real-Time Decision-Making in Embodied Learning [42.862705980039784]
Transformers have shown promise in reinforcement learning for modeling time-varying features.
They still suffer from low data efficiency and high inference latency.
In this paper, we propose to investigate the task from a new perspective of the frequency domain.
arXiv Detail & Related papers (2024-05-30T09:43:59Z) - Learning Neural Volumetric Field for Point Cloud Geometry Compression [13.691147541041804]
We propose to code the geometry of a given point cloud by learning a neural field.
We divide the entire space into small cubes and represent each non-empty cube by a neural network and an input latent code.
The network is shared among all the cubes in a single frame or multiple frames, to exploit the spatial and temporal redundancy.
arXiv Detail & Related papers (2022-12-11T19:55:24Z) - Neural Fourier Filter Bank [18.52741992605852]
We present a novel method to provide efficient and highly detailed reconstructions.
Inspired by wavelets, we learn a neural field that decomposes the signal both spatially and frequency-wise.
arXiv Detail & Related papers (2022-12-04T03:45:08Z) - Transform Once: Efficient Operator Learning in Frequency Domain [69.74509540521397]
We study deep neural networks designed to harness the structure in frequency domain for efficient learning of long-range correlations in space or time.
This work introduces a blueprint for frequency-domain learning through a single transform: transform once (T1).
arXiv Detail & Related papers (2022-11-26T01:56:05Z) - NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction [79.13750275141139]
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder entailing hash coding is adopted to help the network capture high-frequency details.
arXiv Detail & Related papers (2022-09-29T04:06:00Z) - CloudAttention: Efficient Multi-Scale Attention Scheme For 3D Point Cloud Learning [81.85951026033787]
We adopt transformers in this work and incorporate them into a hierarchical framework for shape classification as well as part and scene segmentation.
We also compute efficient and dynamic global cross-attentions by leveraging sampling and grouping at each iteration.
The proposed hierarchical model achieves state-of-the-art shape classification in mean accuracy and yields results on par with previous segmentation methods.
arXiv Detail & Related papers (2022-07-31T21:39:15Z) - Global Filter Networks for Image Classification [90.81352483076323]
We present a conceptually simple yet computationally efficient architecture that learns long-term spatial dependencies in the frequency domain with log-linear complexity.
Our results demonstrate that GFNet can be a very competitive alternative to transformer-style models and CNNs in efficiency, generalization ability and robustness.
arXiv Detail & Related papers (2021-07-01T17:58:16Z) - Towards Efficient Graph Convolutional Networks for Point Cloud Handling [181.59146413326056]
We aim at improving the computational efficiency of graph convolutional networks (GCNs) for learning on point clouds.
A series of experiments show that optimized networks have reduced computational complexity, decreased memory consumption, and accelerated inference speed.
arXiv Detail & Related papers (2021-04-12T17:59:16Z) - Region adaptive graph fourier transform for 3d point clouds [51.193111325231165]
We introduce the Region Adaptive Graph Fourier Transform (RA-GFT) for compression of 3D point cloud attributes.
The RA-GFT achieves better complexity-performance trade-offs than previous approaches.
arXiv Detail & Related papers (2020-03-04T02:47:44Z)