OAM-TCD: A globally diverse dataset of high-resolution tree cover maps
- URL: http://arxiv.org/abs/2407.11743v1
- Date: Tue, 16 Jul 2024 14:11:29 GMT
- Title: OAM-TCD: A globally diverse dataset of high-resolution tree cover maps
- Authors: Josh Veitch-Michaelis, Andrew Cottam, Daniella Schweizer, Eben N. Broadbent, David Dao, Ce Zhang, Angelica Almeyda Zambrano, Simeon Max,
- Abstract summary: We present a novel open-access dataset for individual tree crown delineation (TCD) in high-resolution aerial imagery sourced from OpenAerialMap (OAM).
Our dataset, OAM-TCD, comprises 5072 2048x2048 px images at 10 cm/px resolution with associated human-labeled instance masks for over 280k individual trees and 56k groups of trees.
Using our dataset, we train reference instance and semantic segmentation models that compare favorably to existing state-of-the-art models.
- Score: 8.336960607169175
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurately quantifying tree cover is an important metric for ecosystem monitoring and for assessing progress in restored sites. Recent works have shown that deep learning-based segmentation algorithms are capable of accurately mapping trees at country and continental scales using high-resolution aerial and satellite imagery. Mapping at high (ideally sub-meter) resolution is necessary to identify individual trees, however there are few open-access datasets containing instance level annotations and those that exist are small or not geographically diverse. We present a novel open-access dataset for individual tree crown delineation (TCD) in high-resolution aerial imagery sourced from OpenAerialMap (OAM). Our dataset, OAM-TCD, comprises 5072 2048x2048 px images at 10 cm/px resolution with associated human-labeled instance masks for over 280k individual and 56k groups of trees. By sampling imagery from around the world, we are able to better capture the diversity and morphology of trees in different terrestrial biomes and in both urban and natural environments. Using our dataset, we train reference instance and semantic segmentation models that compare favorably to existing state-of-the-art models. We assess performance through k-fold cross-validation and comparison with existing datasets; additionally we demonstrate compelling results on independent aerial imagery captured over Switzerland and compare to municipal tree inventories and LIDAR-derived canopy maps in the city of Zurich. Our dataset, models and training/benchmark code are publicly released under permissive open-source licenses: Creative Commons (majority CC BY 4.0), and Apache 2.0 respectively.
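The segmentation benchmarks described in the abstract rest on the intersection-over-union (IoU) metric. As a minimal illustration (this is not the authors' released benchmark code, just the standard metric computed on toy masks):

```python
import numpy as np

def mask_iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection-over-union between two boolean segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    inter = np.logical_and(pred, target).sum()
    return inter / union

# Toy 4x4 "canopy" masks: the prediction covers 2 of the 3 target pixels.
pred = np.zeros((4, 4), dtype=bool)
target = np.zeros((4, 4), dtype=bool)
pred[0, 0:2] = True      # 2 predicted canopy pixels
target[0, 0:3] = True    # 3 ground-truth canopy pixels
print(mask_iou(pred, target))  # 2 / 3 ≈ 0.667
```

The same per-image computation, averaged over held-out folds, is the usual basis for the k-fold cross-validation comparison the abstract mentions.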
Related papers
- Evaluation of Deep Learning Semantic Segmentation for Land Cover Mapping on Multispectral, Hyperspectral and High Spatial Aerial Imagery [0.0]
With the rise of climate change, land cover mapping has become an urgent need in environmental monitoring.
This research implemented semantic segmentation methods, including U-Net, LinkNet, FPN, and PSPNet, for categorizing vegetation, water, and other classes.
The LinkNet model achieved a high IoU of 0.92 across all datasets, comparable with the other techniques mentioned.
arXiv Detail & Related papers (2024-06-20T11:40:12Z) - Classifying geospatial objects from multiview aerial imagery using semantic meshes [2.116528763953217]
We propose a new method to predict tree species based on aerial images of forests in the U.S.
We show that our proposed multiview method improves classification accuracy from 53% to 75% relative to an orthomosaic baseline on a challenging cross-site tree classification task.
arXiv Detail & Related papers (2024-05-15T17:56:49Z) - PureForest: A Large-Scale Aerial Lidar and Aerial Imagery Dataset for Tree Species Classification in Monospecific Forests [0.0]
We present the PureForest dataset: a large-scale, open, multimodal dataset designed for tree species classification.
Most current public Lidar datasets for tree species classification have low diversity as they only span a small area of a few dozen annotated hectares at most.
In contrast, PureForest has 18 tree species grouped into 13 semantic classes, and spans 339 km$^2$ across 449 distinct monospecific forests.
arXiv Detail & Related papers (2024-04-18T10:23:10Z) - Multiview Aerial Visual Recognition (MAVREC): Can Multi-view Improve Aerial Visual Perception? [57.77643186237265]
We present Multiview Aerial Visual RECognition or MAVREC, a video dataset where we record synchronized scenes from different perspectives.
MAVREC consists of around 2.5 hours of industry-standard 2.7K resolution video sequences, more than 0.5 million frames, and 1.1 million annotated bounding boxes.
This makes MAVREC the largest ground and aerial-view dataset, and the fourth largest among all drone-based datasets.
arXiv Detail & Related papers (2023-12-07T18:59:14Z) - Lidar-based Norwegian tree species detection using deep learning [0.36651088217486427]
We present a deep learning based tree species classification model utilizing only lidar data.
The model is trained with focal loss over partial weak labels.
Our model achieves a macro-averaged F1 score of 0.70 on an independent validation set.
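The summary above notes training with focal loss over partial weak labels. As a hedged sketch, the standard binary focal loss this builds on (shown here without the paper's partial-label handling) can be written as:

```python
import numpy as np

def focal_loss(probs: np.ndarray, targets: np.ndarray, gamma: float = 2.0) -> float:
    """Binary focal loss: cross-entropy scaled by (1 - p_t)^gamma, which
    down-weights easy, confidently-classified examples so training
    focuses on hard ones. gamma = 0 recovers plain cross-entropy."""
    probs = np.clip(probs, 1e-7, 1 - 1e-7)
    # p_t is the probability assigned to the true class
    p_t = np.where(targets == 1, probs, 1 - probs)
    return float(np.mean(-((1 - p_t) ** gamma) * np.log(p_t)))

# A confident correct prediction contributes far less loss than an
# uncertain one, which is the point of the focusing term.
easy = focal_loss(np.array([0.9]), np.array([1]))
hard = focal_loss(np.array([0.6]), np.array([1]))
print(easy, hard)
```

With gamma set to 0 the function reduces to ordinary binary cross-entropy, so the focusing strength is a single tunable knob.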
arXiv Detail & Related papers (2023-11-10T14:01:05Z) - HarvestNet: A Dataset for Detecting Smallholder Farming Activity Using Harvest Piles and Remote Sensing [50.4506590177605]
HarvestNet is a dataset for mapping the presence of farms in the Ethiopian regions of Tigray and Amhara during 2020-2023.
We introduce a new approach based on the detection of harvest piles characteristic of many smallholder systems.
We conclude that remote sensing of harvest piles can contribute to more timely and accurate cropland assessments in food insecure regions.
arXiv Detail & Related papers (2023-08-23T11:03:28Z) - Hierarchical clustering with dot products recovers hidden tree structure [53.68551192799585]
In this paper we offer a new perspective on the well established agglomerative clustering algorithm, focusing on recovery of hierarchical structure.
We recommend a simple variant of the standard algorithm, in which clusters are merged by maximum average dot product and not, for example, by minimum distance or within-cluster variance.
We demonstrate that the tree output by this algorithm provides a bona fide estimate of generative hierarchical structure in data, under a generic probabilistic graphical model.
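The merge rule described above can be illustrated with a short greedy sketch (the function name and the naive O(n^3) loop are mine for clarity, not the authors' implementation): clusters are repeatedly merged by maximum average dot product rather than minimum distance.

```python
import numpy as np

def dot_product_agglomerative(X: np.ndarray) -> list:
    """Greedy agglomerative clustering that repeatedly merges the pair
    of clusters with the highest average pairwise dot product.
    Returns the merge history as (cluster_a, cluster_b) tuples."""
    clusters = [[i] for i in range(len(X))]
    history = []
    while len(clusters) > 1:
        best_score, best_pair = -np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # average dot product over all cross-cluster point pairs
                score = np.mean(X[clusters[a]] @ X[clusters[b]].T)
                if score > best_score:
                    best_score, best_pair = score, (a, b)
        a, b = best_pair
        history.append((tuple(clusters[a]), tuple(clusters[b])))
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return history

# Two tight groups of 2-D points: within-group pairs merge first,
# recovering the two-branch tree structure.
X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
print(dot_product_agglomerative(X))
```

Swapping the dot-product score for negative Euclidean distance would recover the more familiar single-linkage behavior, which is exactly the substitution the paper argues against.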
arXiv Detail & Related papers (2023-05-24T11:05:12Z) - Very high resolution canopy height maps from RGB imagery using self-supervised vision transformer and convolutional decoder trained on Aerial Lidar [14.07306593230776]
This paper presents the first high-resolution canopy height map concurrently produced for multiple sub-national jurisdictions.
The maps are generated by the extraction of features from a self-supervised model trained on Maxar imagery from 2017 to 2020.
We also introduce a post-processing step using a convolutional network trained on GEDI observations.
arXiv Detail & Related papers (2023-04-14T15:52:57Z) - Bamboo: Building Mega-Scale Vision Dataset Continually with Human-Machine Synergy [69.07918114341298]
Large-scale datasets play a vital role in computer vision.
Existing datasets are collected either according to fixed label systems or indiscriminately, without differentiating between samples, making them inefficient and unscalable.
We advocate building a high-quality vision dataset that is actively and continually annotated on a comprehensive label system.
arXiv Detail & Related papers (2022-03-15T13:01:00Z) - Country-wide Retrieval of Forest Structure From Optical and SAR Satellite Imagery With Bayesian Deep Learning [74.94436509364554]
We propose a Bayesian deep learning approach to densely estimate forest structure variables at country-scale with 10-meter resolution.
Our method jointly transforms Sentinel-2 optical images and Sentinel-1 synthetic aperture radar images into maps of five different forest structure variables.
We train and test our model on reference data from 41 airborne laser scanning missions across Norway.
arXiv Detail & Related papers (2021-11-25T16:21:28Z) - Rethinking Localization Map: Towards Accurate Object Perception with Self-Enhancement Maps [78.2581910688094]
This work introduces a novel self-enhancement method to harvest accurate object localization maps and object boundaries with only category labels as supervision.
In particular, the proposed Self-Enhancement Maps achieve the state-of-the-art localization accuracy of 54.88% on ILSVRC.
arXiv Detail & Related papers (2020-06-09T12:35:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.