Benchmarking Deep Learning Architectures for Urban Vegetation Point Cloud Semantic Segmentation from MLS
- URL: http://arxiv.org/abs/2306.10274v3
- Date: Wed, 1 May 2024 13:38:59 GMT
- Title: Benchmarking Deep Learning Architectures for Urban Vegetation Point Cloud Semantic Segmentation from MLS
- Authors: Aditya Aditya, Bharat Lohani, Jagannath Aryal, Stephan Winter
- Abstract summary: Vegetation is crucial for sustainable and resilient cities, providing ecosystem services and supporting human well-being.
Recently, deep learning for point cloud semantic segmentation has shown significant progress.
We provide an assessment of point-based deep learning models for semantic segmentation of the vegetation class.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Vegetation is crucial for sustainable and resilient cities, providing various ecosystem services and supporting human well-being. However, vegetation is under critical stress from rapid urbanization and expanding infrastructure footprints. Consequently, mapping this vegetation is essential in the urban environment. Recently, deep learning for point cloud semantic segmentation has shown significant progress. Advanced models attempt to obtain state-of-the-art performance on benchmark datasets comprising multiple classes and representing real-world scenarios. However, class-specific segmentation with respect to vegetation points has not been explored; consequently, the selection of a deep learning model for vegetation point segmentation is ambiguous. To address this problem, we provide a comprehensive assessment of point-based deep learning models for semantic segmentation of the vegetation class. We have selected seven representative point-based models, namely PointCNN, KPConv (omni-supervised), RandLANet, SCFNet, PointNeXt, SPoTr and PointMetaBase. These models are investigated on three different datasets, specifically Chandigarh, Toronto3D and Kerala, which are characterized by diverse vegetation, varying scene complexity, and differing per-point features and class-wise composition. PointMetaBase and KPConv (omni-supervised) achieve the highest mIoU on the Chandigarh (95.24%) and Toronto3D (91.26%) datasets, respectively, while PointCNN provides the highest mIoU on the Kerala dataset (85.68%). The paper develops a deeper insight, hitherto not reported, into the working of these models for vegetation segmentation and outlines the ingredients that should be included in a model designed specifically for vegetation segmentation. This paper is a step towards the development of a novel architecture for vegetation point segmentation.
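The mIoU figures reported in the abstract are a standard segmentation metric. As a minimal sketch (not the authors' actual evaluation code), per-class IoU and mIoU can be computed from a confusion matrix as follows:

```python
import numpy as np

def per_class_iou(conf):
    """Per-class IoU from a confusion matrix (rows = ground truth, cols = prediction)."""
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp   # points predicted as class c but belonging elsewhere
    fn = conf.sum(axis=1) - tp   # points of class c missed by the prediction
    return tp / (tp + fp + fn)

def miou(conf):
    """Mean IoU across classes."""
    return per_class_iou(conf).mean()

# Toy 2-class example: vegetation vs. everything else
conf = np.array([[5, 1],
                 [2, 4]])
print(per_class_iou(conf))  # [0.625      0.57142857]
print(miou(conf))           # 0.5982142857142857
```

A per-class IoU like this is what makes class-specific comparisons (vegetation only) possible, as opposed to the dataset-wide mIoU usually reported on benchmarks.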
Related papers
- SegmentAnyTree: A sensor and platform agnostic deep learning model for tree segmentation using laser scanning data
This research advances individual tree crown (ITC) segmentation in lidar data, using a deep learning model applicable to various laser scanning types.
It addresses the challenge of transferability across different data characteristics in 3D forest scene analysis.
The model, based on PointGroup architecture, is a 3D CNN with separate heads for semantic and instance segmentation.
arXiv Detail & Related papers (2024-01-28T19:47:17Z) - Point Cloud Segmentation Using Transfer Learning with RandLA-Net: A Case Study on Urban Areas
This paper presents the application of RandLA-Net, a state-of-the-art neural network architecture, for the 3D segmentation of large-scale point cloud data in urban areas.
The study focuses on three major Chinese cities, namely Chengdu, Jiaoda, and Shenzhen, leveraging their unique characteristics to enhance segmentation performance.
arXiv Detail & Related papers (2023-12-19T06:13:58Z) - Adaptive Edge-to-Edge Interaction Learning for Point Cloud Analysis [118.30840667784206]
A key issue in point cloud data processing is extracting useful information from local regions.
Previous works ignore the relation between edges in local regions, which encodes the local shape information.
This paper proposes a novel Adaptive Edge-to-Edge Interaction Learning module.
arXiv Detail & Related papers (2022-11-20T07:10:14Z) - PST: Plant segmentation transformer for 3D point clouds of rapeseed plants at the podding stage
This paper presents a deep learning network, the plant segmentation transformer (PST).
PST is composed of: (i) a dynamic voxel feature encoder (DVFE) to aggregate point features at the raw spatial resolution; (ii) dual window-set attention blocks to capture contextual information; and (iii) a dense feature propagation module to obtain the final dense point feature map.
Results: PST and PST-PointGroup (PG) achieved superior performance in semantic and instance segmentation tasks.
arXiv Detail & Related papers (2022-06-27T06:56:48Z) - SemAffiNet: Semantic-Affine Transformation for Point Cloud Segmentation [94.11915008006483]
We propose SemAffiNet for point cloud semantic segmentation.
We conduct extensive experiments on the ScanNetV2 and NYUv2 datasets.
arXiv Detail & Related papers (2022-05-26T17:00:23Z) - SensatUrban: Learning Semantics from Urban-Scale Photogrammetric Point Clouds
We introduce SensatUrban, an urban-scale UAV photogrammetry point cloud dataset consisting of nearly three billion points collected from three UK cities, covering 7.6 km².
Each point in the dataset has been labelled with fine-grained semantic annotations, resulting in a dataset three times the size of the previously largest existing photogrammetric point cloud dataset.
arXiv Detail & Related papers (2022-01-12T14:48:11Z) - Semantic Segmentation on Swiss3DCities: A Benchmark Study on Aerial Photogrammetric 3D Pointcloud Dataset
We introduce a new outdoor urban 3D pointcloud dataset, covering a total area of 2.7 km², sampled from three Swiss cities.
The dataset is manually annotated for semantic segmentation with per-point labels, and is built using photogrammetry from images acquired by multirotors equipped with high-resolution cameras.
arXiv Detail & Related papers (2020-12-23T21:48:47Z) - Towards Semantic Segmentation of Urban-Scale 3D Point Clouds: A Dataset, Benchmarks and Challenges
We present an urban-scale photogrammetric point cloud dataset with nearly three billion richly annotated points.
Our dataset consists of large areas from three UK cities, covering about 7.6 km² of the city landscape.
We evaluate the performance of state-of-the-art algorithms on our dataset and provide a comprehensive analysis of the results.
arXiv Detail & Related papers (2020-09-07T14:47:07Z) - Campus3D: A Photogrammetry Point Cloud Benchmark for Hierarchical Understanding of Outdoor Scene
We present a richly-annotated 3D point cloud dataset for multiple outdoor scene understanding tasks.
The dataset has been point-wisely annotated with both hierarchical and instance-based labels.
We formulate a hierarchical learning problem for 3D point cloud segmentation and propose a measure for evaluating consistency across the various hierarchies.
arXiv Detail & Related papers (2020-08-11T19:10:32Z) - A Nearest Neighbor Network to Extract Digital Terrain Models from 3D Point Clouds
We present an algorithm that operates on 3D-point clouds and estimates the underlying DTM for the scene using an end-to-end approach.
Our model learns neighborhood information and seamlessly integrates this with point-wise and block-wise global features.
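The combination of neighborhood information with point-wise and block-wise global features described above can be illustrated with a minimal sketch; the brute-force k-NN, the mean aggregations, and the function names here are illustrative assumptions, not the paper's actual operators:

```python
import numpy as np

def knn_indices(points, k):
    """Indices of the k nearest neighbors of each point (brute force, excluding self)."""
    d = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    return np.argsort(d, axis=1)[:, 1:k + 1]

def fuse_features(points, feats, k=3):
    """Concatenate point-wise, neighborhood-mean, and block-wise global features."""
    idx = knn_indices(points, k)
    local = feats[idx].mean(axis=1)                          # neighborhood information
    glob = np.broadcast_to(feats.mean(axis=0), feats.shape)  # block-wise global feature
    return np.concatenate([feats, local, glob], axis=1)

pts = np.random.rand(10, 3)   # 10 points in 3D
f = np.random.rand(10, 4)     # 4 features per point
out = fuse_features(pts, f, k=3)
print(out.shape)  # (10, 12)
```

Real networks would replace the mean aggregations with learned layers, but the fusion pattern, per-point features concatenated with local and global context, is the same.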
arXiv Detail & Related papers (2020-05-21T15:54:55Z) - Toronto-3D: A Large-scale Mobile LiDAR Dataset for Semantic Segmentation of Urban Roadways
Toronto-3D is a large-scale urban outdoor point cloud dataset acquired by an MLS system in Toronto, Canada, for semantic segmentation.
This dataset covers approximately 1 km of point clouds and consists of about 78.3 million points with 8 labeled object classes.
Baseline experiments for semantic segmentation were conducted and the results confirmed the capability of this dataset to train deep learning models effectively.
arXiv Detail & Related papers (2020-03-18T15:45:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.