GraNet: Global Relation-aware Attentional Network for ALS Point Cloud
Classification
- URL: http://arxiv.org/abs/2012.13466v1
- Date: Thu, 24 Dec 2020 23:54:45 GMT
- Title: GraNet: Global Relation-aware Attentional Network for ALS Point Cloud
Classification
- Authors: Rong Huang, Yusheng Xu, Uwe Stilla
- Abstract summary: We propose a novel neural network focusing on semantic labeling of ALS point clouds.
GraNet learns local geometric description and local dependencies.
Experiments were conducted on two ALS point cloud datasets.
- Score: 7.734726150561088
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we propose a novel neural network focusing on semantic labeling
of ALS point clouds, which investigates the importance of long-range spatial
and channel-wise relations, termed the global relation-aware attentional
network (GraNet). GraNet first learns local geometric descriptions and local
dependencies using a local spatial discrepancy attention convolution module
(LoSDA). In LoSDA, the orientation information, spatial distribution, and
elevation differences are fully considered by stacking several local spatial
geometric learning modules, and the local dependencies are embedded by using an
attention pooling module. Then, a global relation-aware attention module (GRA),
consisting of a spatial relation-aware attention module (SRA) and a channel
relation-aware attention module (CRA), is investigated to further learn the
global spatial and channel-wise relationships between spatial positions and
feature vectors. The aforementioned two important modules are embedded in the
multi-scale network architecture to further consider scale changes in large
urban areas. We conducted comprehensive experiments on two ALS point cloud
datasets to evaluate the performance of our proposed framework. The results
show that our method can achieve higher classification accuracy compared with
other commonly used advanced classification methods. The overall accuracy (OA)
of our method on the ISPRS benchmark dataset reaches 84.5% when classifying
nine semantic classes, with an average F1 measure (AvgF1) of 73.5%. The
per-class F1 scores are: powerlines: 66.3%,
low vegetation: 82.8%, impervious surface: 91.8%, car: 80.7%, fence: 51.2%,
roof: 94.6%, facades: 62.1%, shrub: 49.9%, trees: 82.1%. In addition, experiments
were conducted using a new ALS point cloud dataset covering highly dense urban
areas.
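The SRA and CRA branches described above follow the general pattern of non-local self-attention over point features. As a rough illustration only, not the authors' implementation, a minimal NumPy sketch of spatial and channel relation attention on a point feature matrix might look like this (the function names and the simple sum fusion are assumptions):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_relation_attention(F):
    """Hypothetical SRA sketch: affinity between every pair of point
    positions, used to re-weight the point features.
    F: (N, C) feature matrix for N points with C channels."""
    attn = softmax(F @ F.T)          # (N, N) position-to-position affinity
    return attn @ F                  # (N, C) re-weighted features

def channel_relation_attention(F):
    """Hypothetical CRA sketch: (C, C) channel-to-channel affinity,
    applied along the feature dimension."""
    attn = softmax(F.T @ F)          # (C, C) channel affinity
    return F @ attn                  # (N, C)

def global_relation_attention(F):
    # GRA combines the two branches; a simple sum fusion is assumed here
    return spatial_relation_attention(F) + channel_relation_attention(F)

rng = np.random.default_rng(0)
F = rng.normal(size=(6, 4))          # 6 points, 4 feature channels
out = global_relation_attention(F)
print(out.shape)                     # → (6, 4)
```

In a real network the affinities would typically be computed from learned query/key projections rather than the raw features, and the attended output added back through a residual connection; this sketch only shows the relation-aware re-weighting idea.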
Related papers
- On-the-fly Point Feature Representation for Point Clouds Analysis [7.074010861305738]
We propose On-the-fly Point Feature Representation (OPFR), which captures abundant geometric information explicitly through Curve Feature Generator module.
We also introduce the Local Reference Constructor module, which approximates the local coordinate systems based on triangle sets.
OPFR only requires extra 1.56ms for inference (65x faster than vanilla PFH) and 0.012M more parameters, and it can serve as a versatile plug-and-play module for various backbones.
arXiv Detail & Related papers (2024-07-31T04:57:06Z)
- Salient Object Detection in Optical Remote Sensing Images Driven by Transformer [69.22039680783124]
We propose a novel Global Extraction Local Exploration Network (GeleNet) for salient object detection in optical remote sensing images (ORSI-SOD).
Specifically, GeleNet first adopts a transformer backbone to generate four-level feature embeddings with global long-range dependencies.
Extensive experiments on three public datasets demonstrate that the proposed GeleNet outperforms relevant state-of-the-art methods.
arXiv Detail & Related papers (2023-09-15T07:14:43Z)
- Spatial Layout Consistency for 3D Semantic Segmentation [0.7614628596146599]
We introduce a novel deep convolutional neural network (DCNN) technique for voxel-based semantic segmentation of ALTM point clouds.
The suggested deep learning method, Semantic Utility Network (SUNet), is a multi-dimensional and multi-resolution network.
Our experiments demonstrated that SUNet's spatial layout consistency and multi-resolution feature aggregation can significantly improve performance.
arXiv Detail & Related papers (2023-03-02T03:24:21Z)
- Adaptive Edge-to-Edge Interaction Learning for Point Cloud Analysis [118.30840667784206]
A key issue in point cloud processing is extracting useful information from local regions.
Previous works ignore the relations between edges in local regions, which encode the local shape information.
This paper proposes a novel Adaptive Edge-to-Edge Interaction Learning module.
arXiv Detail & Related papers (2022-11-20T07:10:14Z)
- Global Hierarchical Attention for 3D Point Cloud Analysis [88.56041763189162]
We propose a new attention mechanism, called Global Hierarchical Attention (GHA), for 3D point cloud analysis.
For the task of semantic segmentation, GHA gives a +1.7% mIoU increase to the MinkowskiEngine baseline on ScanNet.
For the 3D object detection task, GHA improves the CenterPoint baseline by +0.5% mAP on the nuScenes dataset.
arXiv Detail & Related papers (2022-08-07T19:16:30Z)
- Beyond single receptive field: A receptive field fusion-and-stratification network for airborne laser scanning point cloud classification [14.706139194001773]
We propose a novel receptive field fusion-and-stratification network (RFFS-Net).
RFFS-Net is more adaptable to the classification of regions with complex structures and extreme scale variations in large-scale ALS point clouds.
Experiments on the LASDU dataset and the 2019 IEEE-GRSS Data Fusion Contest dataset show that RFFS-Net achieves a new state-of-the-art classification performance.
arXiv Detail & Related papers (2022-07-21T03:10:35Z)
- L2G: A Simple Local-to-Global Knowledge Transfer Framework for Weakly Supervised Semantic Segmentation [67.26984058377435]
We present L2G, a simple online local-to-global knowledge transfer framework for high-quality object attention mining.
Our framework guides the global network to learn the captured rich object detail knowledge from a global view.
Experiments show that our method attains 72.1% and 44.2% mIoU on the validation sets of PASCAL VOC 2012 and MS COCO 2014, respectively.
arXiv Detail & Related papers (2022-04-07T04:31:32Z)
- An Entropy-guided Reinforced Partial Convolutional Network for Zero-Shot Learning [77.72330187258498]
We propose a novel Entropy-guided Reinforced Partial Convolutional Network (ERPCNet).
ERPCNet extracts and aggregates localities based on semantic relevance and visual correlations without human-annotated regions.
It not only discovers global-cooperative localities dynamically but also converges faster for policy gradient optimization.
arXiv Detail & Related papers (2021-11-03T11:13:13Z)
- Two Heads are Better than One: Geometric-Latent Attention for Point Cloud Classification and Segmentation [10.2254921311882]
We present an innovative two-headed attention layer that combines geometric and latent features to segment a 3D scene into meaningful subsets.
Each head combines the local and global information of a neighborhood of points, using either the geometric or latent features, and uses this information to learn better local relationships.
arXiv Detail & Related papers (2021-10-30T11:20:56Z)
- LGENet: Local and Global Encoder Network for Semantic Segmentation of Airborne Laser Scanning Point Clouds [17.840158282335874]
We present a local and global encoder network (LGENet) for semantic segmentation of ALS point clouds.
For the ISPRS benchmark dataset, our model achieves state-of-the-art results with an overall accuracy of 0.845 and an average F1 score of 0.737.
arXiv Detail & Related papers (2020-12-18T12:26:53Z)
- Global Context-Aware Progressive Aggregation Network for Salient Object Detection [117.943116761278]
We propose a novel network named GCPANet to integrate low-level appearance features, high-level semantic features, and global context features.
We show that the proposed approach outperforms the state-of-the-art methods both quantitatively and qualitatively.
arXiv Detail & Related papers (2020-03-02T04:26:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.