Monitoring Urban Forests from Auto-Generated Segmentation Maps
- URL: http://arxiv.org/abs/2206.06948v1
- Date: Tue, 14 Jun 2022 16:06:58 GMT
- Title: Monitoring Urban Forests from Auto-Generated Segmentation Maps
- Authors: Conrad M Albrecht, Chenying Liu, Yi Wang, Levente Klein, Xiao Xiang Zhu
- Abstract summary: We present a weakly-supervised methodology to quantify the distribution of urban forests based on remotely sensed data with close-to-zero human interaction.
As proof of concept we sense Hurricane Sandy's impact on urban forests in Coney Island, New York City (NYC) and reference it to less impacted urban space in Brooklyn, NYC.
- Score: 16.520025438843433
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present and evaluate a weakly-supervised methodology to quantify the
spatio-temporal distribution of urban forests based on remotely sensed data
with close-to-zero human interaction. Successfully training machine learning
models for semantic segmentation typically depends on the availability of
high-quality labels. We evaluate the benefit of high-resolution,
three-dimensional point cloud data (LiDAR) as a source of noisy labels in order
to train models for the localization of trees in orthophotos. As proof of
concept we sense Hurricane Sandy's impact on urban forests in Coney Island, New
York City (NYC) and reference it to less impacted urban space in Brooklyn, NYC.
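The label-generation step described in the abstract (LiDAR as a source of noisy labels for tree localization) can be sketched as a simple canopy-height threshold. The function name, the gridded inputs, and the 2 m height threshold below are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def noisy_tree_labels(first_return, ground, height_thresh=2.0):
    """Derive a binary tree mask from gridded LiDAR elevations.

    The canopy height model (CHM) is the first-return surface minus the
    ground surface; cells taller than `height_thresh` metres are treated
    as (noisy) positive labels for training a segmentation model on the
    co-registered orthophoto.
    """
    chm = first_return - ground              # canopy height model
    return (chm > height_thresh).astype(np.uint8)

# Toy 4x4 tile: a 10 m "tree" in one corner, flat ground elsewhere.
surface = np.zeros((4, 4))
surface[0, 0] = 10.0
ground = np.zeros((4, 4))
mask = noisy_tree_labels(surface, ground)
```

The resulting mask is noisy by construction (buildings and other tall objects also exceed the threshold), which is exactly why the paper frames the setup as weakly supervised.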
Related papers
- NeRF-Accelerated Ecological Monitoring in Mixed-Evergreen Redwood Forest [0.0]
We present a comparison of MLS and NeRF forest reconstructions for the purpose of trunk diameter estimation in a mixed-evergreen Redwood forest.
We propose an improved DBH-estimation method using convex-hull modeling.
arXiv Detail & Related papers (2024-10-09T20:32:15Z)
- Towards general deep-learning-based tree instance segmentation models [0.0]
Deep-learning methods have been proposed which show the potential of learning to segment trees.
We use seven diverse datasets found in literature to gain insights into the generalization capabilities under domain-shift.
Our results suggest that a generalization from coniferous dominated sparse point clouds to deciduous dominated high-resolution point clouds is possible.
arXiv Detail & Related papers (2024-05-03T12:42:43Z)
- Predicting urban tree cover from incomplete point labels and limited background information [8.540501469749993]
Trees inside cities are important for the urban microclimate, contributing positively to the physical and mental health of the urban dwellers.
Despite their importance, often only limited information about city trees is available.
We propose a method for mapping urban trees in high-resolution aerial imagery using limited datasets and deep learning.
arXiv Detail & Related papers (2023-11-20T08:09:54Z)
- Semi-supervised Learning from Street-View Images and OpenStreetMap for Automatic Building Height Estimation [59.6553058160943]
We propose a semi-supervised learning (SSL) method of automatically estimating building height from Mapillary SVI and OpenStreetMap data.
The proposed method leads to a clear performance boost in estimating building heights, with a Mean Absolute Error (MAE) of around 2.1 meters.
The preliminary result is promising and motivates our future work in scaling up the proposed method based on low-cost VGI data.
arXiv Detail & Related papers (2023-07-05T18:16:30Z)
- Semantic segmentation of sparse irregular point clouds for leaf/wood discrimination [1.4499463058550683]
We introduce a neural network model based on the PointNet++ architecture which makes use of point geometry only.
We show that our model outperforms state-of-the-art alternatives on UAV point clouds.
arXiv Detail & Related papers (2023-05-26T14:19:17Z)
- SensatUrban: Learning Semantics from Urban-Scale Photogrammetric Point Clouds [52.624157840253204]
We introduce SensatUrban, an urban-scale UAV photogrammetry point cloud dataset consisting of nearly three billion points collected from three UK cities, covering 7.6 km².
Each point in the dataset has been labelled with fine-grained semantic annotations, resulting in a dataset three times the size of the previously largest photogrammetric point cloud dataset.
arXiv Detail & Related papers (2022-01-12T14:48:11Z)
- PANet: Perspective-Aware Network with Dynamic Receptive Fields and Self-Distilling Supervision for Crowd Counting [63.84828478688975]
We propose a novel perspective-aware approach called PANet to address the perspective problem.
Based on the observation that the size of the objects varies greatly in one image due to the perspective effect, we propose the dynamic receptive fields (DRF) framework.
The framework adjusts the receptive field via the dilation parameters of its convolutions according to the input image, which helps the model extract more discriminative features for each local region.
arXiv Detail & Related papers (2021-10-31T04:43:05Z)
- Semantic Segmentation on Swiss3DCities: A Benchmark Study on Aerial Photogrammetric 3D Pointcloud Dataset [67.44497676652173]
We introduce a new outdoor urban 3D pointcloud dataset, covering a total area of 2.7 km², sampled from three Swiss cities.
The dataset is manually annotated for semantic segmentation with per-point labels, and is built using photogrammetry from images acquired by multirotors equipped with high-resolution cameras.
arXiv Detail & Related papers (2020-12-23T21:48:47Z)
- Towards Semantic Segmentation of Urban-Scale 3D Point Clouds: A Dataset, Benchmarks and Challenges [52.624157840253204]
We present an urban-scale photogrammetric point cloud dataset with nearly three billion richly annotated points.
Our dataset consists of large areas from three UK cities, covering about 7.6 km² of the city landscape.
We evaluate the performance of state-of-the-art algorithms on our dataset and provide a comprehensive analysis of the results.
arXiv Detail & Related papers (2020-09-07T14:47:07Z)
- Hidden Footprints: Learning Contextual Walkability from 3D Human Trails [70.01257397390361]
Current datasets only tell you where people are, not where they could be.
We first augment the set of valid, labeled walkable regions by propagating person observations between images, utilizing 3D information to create what we call hidden footprints.
We devise a training strategy designed for such sparse labels, combining a class-balanced classification loss with a contextual adversarial loss.
arXiv Detail & Related papers (2020-08-19T23:19:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all generated content) and is not responsible for any consequences arising from its use.