FinnWoodlands Dataset
- URL: http://arxiv.org/abs/2304.00793v1
- Date: Mon, 3 Apr 2023 08:28:13 GMT
- Title: FinnWoodlands Dataset
- Authors: Juan Lagos, Urho Lempiö and Esa Rahtu
- Abstract summary: \textit{FinnWoodlands} comprises a total of 4226 manually annotated objects, of which 2562 correspond to tree trunks.
Our dataset can be used in forestry applications where a holistic representation of the environment is relevant.
- Score: 12.386304516106852
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While the availability of large and diverse datasets has contributed to
significant breakthroughs in autonomous driving and indoor applications,
forestry applications are still lagging behind and new forest datasets would
most certainly contribute to achieving significant progress in the development
of data-driven methods for forest-like scenarios. This paper introduces a
forest dataset called \textit{FinnWoodlands}, which consists of RGB stereo
images, point clouds, and sparse depth maps, as well as ground truth manual
annotations for semantic, instance, and panoptic segmentation.
\textit{FinnWoodlands} comprises a total of 4226 manually annotated objects,
out of which 2562 (60.6\%) correspond to tree trunks classified into three
instance categories, namely "Spruce Tree", "Birch Tree", and "Pine Tree".
Besides tree trunks, we also annotated "Obstacles" objects as instances, as
well as the semantic stuff classes "Lake", "Ground", and "Track".
Our dataset can be used in forestry applications where a holistic
representation of the environment is relevant. We provide an initial benchmark
using three models for instance segmentation, panoptic segmentation, and depth
completion, and illustrate the challenges that such unstructured scenarios
introduce.
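Since \textit{FinnWoodlands} targets panoptic segmentation, the split between countable "thing" classes (the three trunk categories plus "Obstacles") and amorphous "stuff" classes ("Lake", "Ground", "Track") can be made concrete with a short sketch. The sketch below is a hypothetical illustration, not the dataset's actual encoding: the class IDs and the LabelSpec helper are assumptions.

```python
# Hypothetical encoding of the FinnWoodlands label taxonomy described in
# the abstract. Class IDs are illustrative assumptions, not official ones.
from dataclasses import dataclass

@dataclass(frozen=True)
class LabelSpec:
    class_id: int
    name: str
    is_thing: bool  # True: countable instance ("thing"); False: "stuff"

FINNWOODLANDS_LABELS = [
    LabelSpec(1, "Spruce Tree", True),
    LabelSpec(2, "Birch Tree", True),
    LabelSpec(3, "Pine Tree", True),
    LabelSpec(4, "Obstacles", True),
    LabelSpec(5, "Lake", False),
    LabelSpec(6, "Ground", False),
    LabelSpec(7, "Track", False),
]

THING_IDS = {spec.class_id for spec in FINNWOODLANDS_LABELS if spec.is_thing}
STUFF_IDS = {spec.class_id for spec in FINNWOODLANDS_LABELS if not spec.is_thing}

# Sanity check against the abstract's statistics: 2562 of the 4226
# annotated objects are tree trunks, i.e. roughly 60.6%.
trunk_share = 2562 / 4226
assert abs(trunk_share - 0.606) < 0.001
print(f"Tree trunks: {trunk_share:.1%} of all annotated objects")
```

Partitioning labels into thing and stuff IDs like this is what panoptic evaluation code typically needs, since instance masks are only resolved for the thing classes.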
Related papers
- ForestFormer3D: A Unified Framework for End-to-End Segmentation of Forest LiDAR 3D Point Clouds [0.06282171844772422]
We present ForestFormer3D, a new unified and end-to-end framework for precise individual tree and semantic segmentation.
ForestFormer3D incorporates ISA-guided query point selection, a score-based block merging strategy during inference, and a one-to-many association mechanism for effective training.
Our model achieves state-of-the-art performance for individual tree segmentation on the newly introduced FOR-instanceV2 dataset.
arXiv Detail & Related papers (2025-06-20T13:39:27Z)
- Not Every Tree Is a Forest: Benchmarking Forest Types from Satellite Remote Sensing [1.2266381182650026]
This work introduces ForTy, a benchmark for global-scale FORest TYpes mapping using multi-temporal satellite data.
The benchmark comprises 200,000 time series of image patches, each consisting of Sentinel-2, Sentinel-1, climate, and elevation data.
We evaluate the forest types dataset using several baseline models, including convolutional neural networks and transformer-based models.
arXiv Detail & Related papers (2025-05-03T12:20:50Z)
- Open-Vocabulary Octree-Graph for 3D Scene Understanding [54.11828083068082]
Octree-Graph is a novel scene representation for open-vocabulary 3D scene understanding.
An adaptive octree structure is developed that stores semantics and represents an object's occupancy adaptively according to its shape.
arXiv Detail & Related papers (2024-11-25T10:14:10Z)
- Forest Inspection Dataset for Aerial Semantic Segmentation and Depth Estimation [6.635604919499181]
We introduce a new large aerial dataset for forest inspection.
It contains both real-world and virtual recordings of natural environments.
We develop a framework to assess the deforestation degree of an area.
arXiv Detail & Related papers (2024-03-11T11:26:44Z)
- AdaTreeFormer: Few Shot Domain Adaptation for Tree Counting from a Single High-Resolution Image [11.649568595318307]
This paper proposes a framework that is learnt from the source domain with sufficient labeled trees.
It is adapted to the target domain with only a limited number of labeled trees.
Experimental results show that AdaTreeFormer significantly surpasses the state of the art.
arXiv Detail & Related papers (2024-02-05T12:34:03Z)
- Automated forest inventory: analysis of high-density airborne LiDAR point clouds with 3D deep learning [16.071397465972893]
ForAINet is able to perform segmentation across diverse forest types and geographic regions.
The system has been tested on FOR-Instance, a dataset of point clouds acquired in five different countries using surveying drones.
arXiv Detail & Related papers (2023-12-22T21:54:35Z)
- TreeLearn: A Comprehensive Deep Learning Method for Segmenting Individual Trees from Ground-Based LiDAR Forest Point Clouds [42.87502453001109]
We propose TreeLearn, a deep learning-based approach for tree instance segmentation of forest point clouds.
TreeLearn is trained on already segmented point clouds in a data-driven manner, making it less reliant on predefined features and algorithms.
We trained TreeLearn on forest point clouds of 6665 trees, labeled using the Lidar360 software.
arXiv Detail & Related papers (2023-09-15T15:20:16Z)
- FOR-instance: a UAV laser scanning benchmark dataset for semantic and instance segmentation of individual trees [0.06597195879147556]
The FOR-instance dataset comprises five curated and ML-ready UAV-based laser scanning data collections.
The dataset is divided into development and test subsets, enabling method advancement and evaluation.
The inclusion of diameter at breast height data expands its utility to the measurement of a classic tree variable.
arXiv Detail & Related papers (2023-09-03T22:08:29Z)
- Navya3DSeg -- Navya 3D Semantic Segmentation Dataset & split generation for autonomous vehicles [63.20765930558542]
3D semantic data are useful for core perception tasks such as obstacle detection and ego-vehicle localization.
We propose a new dataset, Navya 3D Semantic Segmentation (Navya3DSeg), with a diverse label space corresponding to a large-scale, production-grade operational domain.
It contains 23 labeled sequences and 25 supplementary sequences without labels, designed to explore self-supervised and semi-supervised semantic segmentation benchmarks on point clouds.
arXiv Detail & Related papers (2023-02-16T13:41:19Z)
- Argoverse 2: Next Generation Datasets for Self-Driving Perception and Forecasting [64.7364925689825]
Argoverse 2 (AV2) is a collection of three datasets for perception and forecasting research in the self-driving domain.
The Lidar dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose.
The Motion Forecasting dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene.
arXiv Detail & Related papers (2023-01-02T00:36:22Z)
- SegmentMeIfYouCan: A Benchmark for Anomaly Segmentation [111.61261419566908]
Deep neural networks (DNNs) are usually trained on a closed set of semantic classes.
They are therefore ill-equipped to handle previously unseen objects.
Detecting and localizing such objects is crucial for safety-critical applications such as perception for automated driving.
arXiv Detail & Related papers (2021-04-30T07:58:19Z)
- Campus3D: A Photogrammetry Point Cloud Benchmark for Hierarchical Understanding of Outdoor Scene [76.4183572058063]
We present a richly-annotated 3D point cloud dataset for multiple outdoor scene understanding tasks.
The dataset has been annotated point-wise with both hierarchical and instance-based labels.
We formulate a hierarchical learning problem for 3D point cloud segmentation and propose a measurement evaluating consistency across various hierarchies.
arXiv Detail & Related papers (2020-08-11T19:10:32Z)
- PhraseCut: Language-based Image Segmentation in the Wild [62.643450401286]
We consider the problem of segmenting image regions given a natural language phrase.
Our dataset is collected on top of the Visual Genome dataset.
Our experiments show that the scale and diversity of concepts in our dataset pose significant challenges to the existing state of the art.
arXiv Detail & Related papers (2020-08-03T20:58:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.