FG-Net: Fast Large-Scale LiDAR Point Clouds Understanding Network Leveraging Correlated Feature Mining and Geometric-Aware Modelling
- URL: http://arxiv.org/abs/2012.09439v1
- Date: Thu, 17 Dec 2020 08:20:09 GMT
- Title: FG-Net: Fast Large-Scale LiDAR Point Clouds Understanding Network Leveraging Correlated Feature Mining and Geometric-Aware Modelling
- Authors: Kangcheng Liu, Zhi Gao, Feng Lin, and Ben M. Chen
- Abstract summary: FG-Net is a general deep learning framework for large-scale point cloud understanding without voxelization.
We propose a deep convolutional neural network leveraging correlated feature mining and deformable-convolution-based geometric-aware modelling.
Our approach outperforms state-of-the-art methods in both accuracy and efficiency.
- Score: 15.059508985699575
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: This work presents FG-Net, a general deep learning framework for
large-scale point cloud understanding without voxelization, which achieves
accurate and real-time performance on a single NVIDIA GTX 1080 GPU. First, a
novel noise and outlier filtering method is designed to facilitate subsequent
high-level tasks. For effective understanding, we propose a deep convolutional
neural network leveraging correlated feature mining and
deformable-convolution-based geometric-aware modelling, in which local feature
relationships and geometric patterns can be fully exploited. To address
efficiency, we put forward an inverse density sampling operation and a feature
pyramid based residual learning strategy to reduce computational cost and
memory consumption, respectively. Extensive experiments on challenging
real-world datasets demonstrate that our approach outperforms state-of-the-art
methods in both accuracy and efficiency. Moreover, weakly supervised transfer
learning is conducted to demonstrate the generalization capacity of our
method.
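Two of the components named above, outlier filtering and inverse density sampling, lend themselves to a compact illustration. The NumPy sketch below is a generic rendering of those ideas, not the authors' implementation; the neighbour counts, the threshold, and the density proxy (mean kNN distance) are all illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def filter_outliers(points, k=16, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours is far
    above the global average (a standard statistical outlier filter)."""
    tree = cKDTree(points)
    # k + 1 because the nearest neighbour of each point is itself.
    dists, _ = tree.query(points, k=k + 1)
    mean_d = dists[:, 1:].mean(axis=1)
    keep = mean_d < mean_d.mean() + std_ratio * mean_d.std()
    return points[keep]

def inverse_density_sample(points, n_samples, k=16, seed=None):
    """Keep points with probability inversely proportional to local density,
    so sparse (often geometrically informative) regions are preserved."""
    rng = np.random.default_rng(seed)
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)
    # Mean kNN distance serves as a proxy for inverse local density.
    inv_density = dists[:, 1:].mean(axis=1)
    prob = inv_density / inv_density.sum()
    idx = rng.choice(len(points), size=n_samples, replace=False, p=prob)
    return points[idx]

# Example: denoise a raw scan, then downsample it for the network.
cloud = np.random.rand(100_000, 3).astype(np.float32)
clean = filter_outliers(cloud)
sampled = inverse_density_sample(clean, n_samples=10_000, seed=0)
```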
Related papers
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses the demands of real-time visual inference by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework that jointly optimizes the neural network architecture and its edge deployment.
arXiv Detail & Related papers (2024-10-29T19:02:54Z)
- How Feature Learning Can Improve Neural Scaling Laws [86.9540615081759]
We develop a solvable model of neural scaling laws beyond the kernel limit.
We show how performance scales with model size, training time, and the total amount of available data.
arXiv Detail & Related papers (2024-09-26T14:05:32Z)
- Ada-HGNN: Adaptive Sampling for Scalable Hypergraph Neural Networks [19.003370580994936]
We introduce a new adaptive sampling strategy specifically designed for hypergraphs, which efficiently tackles their unique complexity.
We also present a Random Hyperedge Augmentation (RHA) technique and an additional Multilayer Perceptron (MLP) module to improve the robustness and capabilities of our approach.
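The summary does not spell out what RHA does; a plausible reading, taken purely from the name, is to inject random hyperedges at training time as a structural regularizer. The sketch below implements that reading on an incidence-matrix representation; treat the whole design (hyperedge sizes, binary incidence) as an assumption rather than the paper's algorithm.

```python
import numpy as np

def random_hyperedge_augmentation(H, num_new, max_size=5, seed=None):
    """Append num_new random hyperedges to an incidence matrix H
    (num_nodes x num_hyperedges), each joining a small random node subset."""
    rng = np.random.default_rng(seed)
    n_nodes = H.shape[0]
    new_cols = np.zeros((n_nodes, num_new), dtype=H.dtype)
    for j in range(num_new):
        size = rng.integers(2, max_size + 1)  # a hyperedge joins >= 2 nodes
        members = rng.choice(n_nodes, size=size, replace=False)
        new_cols[members, j] = 1
    return np.concatenate([H, new_cols], axis=1)

# Example: a toy hypergraph with 6 nodes and 3 hyperedges.
H = np.zeros((6, 3), dtype=np.float32)
H[[0, 1, 2], 0] = 1
H[[2, 3], 1] = 1
H[[4, 5], 2] = 1
H_aug = random_hyperedge_augmentation(H, num_new=2, seed=0)  # shape (6, 5)
```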
arXiv Detail & Related papers (2024-05-22T06:15:50Z)
- An effective and efficient green federated learning method for one-layer neural networks [0.22499166814992436]
Federated learning (FL) is one of the most active research lines in machine learning.
We present an FL method, based on a neural network without hidden layers, capable of generating a global collaborative model in a single training round.
We show that the method performs equally well in both identically and non-identically distributed scenarios.
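A single training round is natural when the model admits a closed-form fit. The sketch below shows one way a one-layer (linear) model can be trained federatedly in one round: each client ships the sufficient statistics of a regularized least-squares problem and the server solves once. This is a generic illustration of the idea, not the paper's exact method, and the ridge term is an assumption.

```python
import numpy as np

def client_statistics(X, y):
    """Each client summarizes its local data as the Gram matrix and moment
    vector of a least-squares problem; raw data never leaves the client."""
    return X.T @ X, X.T @ y

def server_solve(stats, dim, ridge=1e-3):
    """One communication round: sum the client statistics and solve the
    global regularized least-squares problem in closed form."""
    A = sum(s[0] for s in stats) + ridge * np.eye(dim)
    b = sum(s[1] for s in stats)
    return np.linalg.solve(A, b)

# Example: three clients with non-identically distributed local data.
rng = np.random.default_rng(0)
w_true = rng.normal(size=4)
stats = []
for shift in (-1.0, 0.0, 1.0):  # per-client feature shift (non-IID)
    X = rng.normal(loc=shift, size=(100, 4))
    y = X @ w_true + 0.1 * rng.normal(size=100)
    stats.append(client_statistics(X, y))
w_global = server_solve(stats, dim=4)  # close to w_true
```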
arXiv Detail & Related papers (2023-12-22T08:52:08Z)
- Heterogenous Memory Augmented Neural Networks [84.29338268789684]
We introduce a novel heterogeneous memory augmentation approach for neural networks.
By introducing learnable memory tokens with an attention mechanism, we can effectively boost performance without huge computational overhead.
We evaluate our approach on various image and graph-based tasks under both in-distribution (ID) and out-of-distribution (OOD) conditions.
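The mechanism described, learnable memory tokens attended to by the input features, can be sketched generically in PyTorch. Everything below (token count, cross-attention layout, the residual) is an illustrative guess at the general pattern, not the paper's architecture.

```python
import torch
from torch import nn

class MemoryAugmentedBlock(nn.Module):
    """Input features cross-attend to a small bank of learnable memory
    tokens, with a residual so the block stays cheap to add."""
    def __init__(self, dim, num_memory_tokens=16, num_heads=4):
        super().__init__()
        self.memory = nn.Parameter(torch.randn(num_memory_tokens, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):  # x: (batch, seq, dim)
        mem = self.memory.unsqueeze(0).expand(x.size(0), -1, -1)
        out, _ = self.attn(query=x, key=mem, value=mem)
        return self.norm(x + out)

# Example: augment 128-dim token features.
block = MemoryAugmentedBlock(dim=128)
y = block(torch.randn(2, 50, 128))  # same shape: (2, 50, 128)
```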
arXiv Detail & Related papers (2023-10-17T01:05:28Z)
- Online Network Source Optimization with Graph-Kernel MAB [62.6067511147939]
We propose Grab-UCB, a graph-kernel multi-armed bandit algorithm that learns the optimal source placement in large-scale networks online.
We describe the network processes with an adaptive graph dictionary model, which typically leads to sparse spectral representations.
We derive the performance guarantees that depend on network parameters, which further influence the learning curve of the sequential decision strategy.
arXiv Detail & Related papers (2023-07-07T15:03:42Z)
- Learning k-Level Structured Sparse Neural Networks Using Group Envelope Regularization [4.0554893636822]
We introduce a novel approach to deploying large-scale deep neural networks on constrained resources.
The method speeds up inference and aims to reduce memory demand and power consumption.
arXiv Detail & Related papers (2022-12-25T15:40:05Z)
- Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
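The core idea, inducing points learned directly in a jointly learned feature space, can be sketched as a small PyTorch module. This is a minimal rendering under simplifying assumptions (RBF kernel, predictive mean only, free-form inducing targets), not the authors' formulation.

```python
import torch

class InducingGP(torch.nn.Module):
    """Sparse GP whose inducing points live directly in a learned feature
    space, trained jointly with the feature map."""
    def __init__(self, in_dim, feat_dim, num_inducing):
        super().__init__()
        self.feature_map = torch.nn.Sequential(
            torch.nn.Linear(in_dim, feat_dim), torch.nn.Tanh())
        self.Z = torch.nn.Parameter(torch.randn(num_inducing, feat_dim))
        self.u = torch.nn.Parameter(torch.zeros(num_inducing))  # inducing targets
        self.log_ls = torch.nn.Parameter(torch.zeros(()))       # kernel length-scale

    def kernel(self, a, b):
        # RBF kernel evaluated in the learned feature space.
        return torch.exp(-0.5 * torch.cdist(a, b) ** 2
                         / torch.exp(self.log_ls) ** 2)

    def forward(self, x):
        phi = self.feature_map(x)
        Kzz = self.kernel(self.Z, self.Z) + 1e-4 * torch.eye(len(self.Z))
        Kxz = self.kernel(phi, self.Z)
        # Predictive mean conditioned on the learned inducing values u.
        return Kxz @ torch.linalg.solve(Kzz, self.u)

# Example: regression from 8-dim inputs with 32 inducing points.
gp = InducingGP(in_dim=8, feat_dim=16, num_inducing=32)
pred = gp(torch.randn(100, 8))  # (100,) predictive mean
```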
arXiv Detail & Related papers (2022-04-21T05:27:09Z)
- CONetV2: Efficient Auto-Channel Size Optimization for CNNs [35.951376988552695]
This work introduces a method that is efficient in computationally constrained environments by examining the micro-search space of channel size.
In tackling channel-size optimization, we design an automated algorithm to extract the dependencies within different connected layers of the network.
We also introduce a novel metric that highly correlates with test accuracy and enables analysis of individual network layers.
arXiv Detail & Related papers (2021-10-13T16:17:19Z)
- Geometrically Principled Connections in Graph Neural Networks [66.51286736506658]
We argue geometry should remain the primary driving force behind innovation in the emerging field of geometric deep learning.
We relate graph neural networks to widely successful computer graphics and data approximation models: radial basis functions (RBFs).
We introduce affine skip connections, a novel building block formed by combining a fully connected layer with any graph convolution operator.
arXiv Detail & Related papers (2020-04-06T13:25:46Z)
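The affine skip connection described above is straightforward to sketch: run any graph convolution and add a fully connected (affine) transform of the input. The PyTorch sketch below pairs it with a plain normalized-adjacency convolution as a stand-in operator; the inner convolution choice is an assumption.

```python
import torch
from torch import nn

class AffineSkipGraphConv(nn.Module):
    """A graph convolution wrapped with an affine (fully connected) skip
    path: output = GraphConv(x) + Affine(x)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.graph_weight = nn.Linear(in_dim, out_dim, bias=False)
        self.affine_skip = nn.Linear(in_dim, out_dim)  # the affine branch

    def forward(self, x, adj):
        # x: (num_nodes, in_dim); adj: row-normalized (num_nodes, num_nodes)
        conv = adj @ self.graph_weight(x)  # neighbourhood aggregation
        return conv + self.affine_skip(x)  # affine skip connection

# Example on a 4-node graph with a trivial adjacency for illustration.
layer = AffineSkipGraphConv(8, 16)
out = layer(torch.randn(4, 8), torch.eye(4))  # (4, 16)
```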
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.