Tree Counting by Bridging 3D Point Clouds with Imagery
- URL: http://arxiv.org/abs/2403.01932v3
- Date: Tue, 12 Mar 2024 01:12:56 GMT
- Title: Tree Counting by Bridging 3D Point Clouds with Imagery
- Authors: Lei Li, Tianfang Zhang, Zhongyu Jiang, Cheng-Yen Yang, Jenq-Neng
Hwang, Stefan Oehmcke, Dimitri Pierre Johannes Gominski, Fabian Gieseke,
Christian Igel
- Abstract summary: Two-dimensional remote sensing imagery primarily shows the overstory canopy and makes it difficult to differentiate individual trees where the canopy is dense.
We leverage the fusion of three-dimensional LiDAR measurements and 2D imagery to enable accurate tree counting.
We present a deep learning approach to counting trees in forests that uses 3D airborne LiDAR data and 2D imagery and compare it with state-of-the-art methods.
- Score: 31.02816235514385
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate and consistent methods for counting trees based on remote sensing
data are needed to support sustainable forest management, assess climate change
mitigation strategies, and build trust in tree carbon credits. Two-dimensional
remote sensing imagery primarily shows the overstory canopy and does not
allow individual trees to be easily differentiated or separated where the
canopy is dense. We leverage the fusion of three-dimensional LiDAR
measurements and 2D imagery to enable accurate tree counting. We present a
deep learning approach to counting trees in forests that uses 3D airborne
LiDAR data and 2D imagery and compare it with state-of-the-art algorithms
that operate on 3D point clouds or on 2D imagery alone. We empirically
evaluate the
different methods on the NeonTreeCount data set, which we use to define a
tree-counting benchmark. The experiments show that FuseCountNet yields more
accurate tree counts.
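The abstract describes fusing 2D imagery with 3D LiDAR for counting, but no code accompanies this summary. Below is a minimal, hypothetical sketch (not the published FuseCountNet) of a two-branch network that encodes RGB imagery and a rasterized LiDAR channel such as a canopy height model, fuses the features, and regresses a density map whose spatial sum is the predicted tree count; the layer sizes, the fusion by concatenation, and the name TwoBranchFusionCounter are illustrative assumptions.

```python
# Hypothetical sketch (not the published FuseCountNet): fuse 2D RGB imagery
# with a rasterized LiDAR channel (e.g., a canopy height model, CHM) and
# regress a density map whose spatial sum is the predicted tree count.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Two strided stages like this give a density map at 1/4 resolution.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )


class TwoBranchFusionCounter(nn.Module):
    def __init__(self):
        super().__init__()
        self.rgb_branch = nn.Sequential(conv_block(3, 32), conv_block(32, 64))
        self.lidar_branch = nn.Sequential(conv_block(1, 32), conv_block(32, 64))
        # Fuse by channel concatenation, then decode one density channel.
        self.head = nn.Sequential(
            nn.Conv2d(128, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, kernel_size=1),
            nn.ReLU(inplace=True),  # densities must be non-negative
        )

    def forward(self, rgb, chm):
        fused = torch.cat([self.rgb_branch(rgb), self.lidar_branch(chm)], dim=1)
        return self.head(fused)  # (B, 1, H/4, W/4) density map


# Usage: the predicted tree count per image is the sum over its density map.
model = TwoBranchFusionCounter()
rgb = torch.randn(2, 3, 256, 256)  # aerial RGB patches
chm = torch.randn(2, 1, 256, 256)  # rasterized LiDAR (canopy height model)
counts = model(rgb, chm).sum(dim=(1, 2, 3))  # tensor of 2 predicted counts
```

During training, such a model would typically be supervised with a regression loss against ground-truth density maps built from annotated tree locations.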
Related papers
- Tree-D Fusion: Simulation-Ready Tree Dataset from Single Images with Diffusion Priors [20.607290376199813]
We introduce Tree-D Fusion, featuring the first collection of 600,000 environmentally aware, 3D simulation-ready tree models.
Each reconstructed 3D tree model corresponds to an image from Google's Auto Arborist dataset.
Our method distills the scores of two tree-adapted diffusion models by utilizing text prompts to specify a tree genus.
arXiv Detail & Related papers (2024-07-14T20:56:07Z)
- AdaTreeFormer: Few Shot Domain Adaptation for Tree Counting from a Single High-Resolution Image [11.649568595318307]
This paper proposes a framework that is learned on a source domain with sufficient labeled trees and adapted to the target domain with only a limited number of labeled trees.
Experimental results show that AdaTreeFormer significantly surpasses the state of the art.
arXiv Detail & Related papers (2024-02-05T12:34:03Z)
- TreeFormer: a Semi-Supervised Transformer-based Framework for Tree Counting from a Single High Resolution Image [6.789370732159176]
Tree density estimation and counting using single aerial and satellite images is a challenging task in photogrammetry and remote sensing.
We propose the first semi-supervised transformer-based framework for tree counting, which reduces the need for expensive tree annotations in remote sensing images.
Our model was evaluated on two benchmark tree counting datasets, Jiangsu and Yosemite, as well as KCL-London, a new dataset that we created.
arXiv Detail & Related papers (2023-07-12T12:19:36Z)
- Automatic Quantification and Visualization of Street Trees [29.343663350855522]
This work first explains a data collection setup carefully designed for counting roadside trees.
We then describe a unique annotation procedure aimed at robustly detecting and quantifying trees.
We propose a street tree detection, counting, and visualization framework using current object detectors and a novel yet simple counting algorithm.
arXiv Detail & Related papers (2022-01-17T18:44:46Z)
- Cylindrical and Asymmetrical 3D Convolution Networks for LiDAR-based Perception [122.53774221136193]
State-of-the-art methods for driving-scene LiDAR-based perception often project the point clouds to 2D space and then process them via 2D convolution.
A natural remedy is to utilize 3D voxelization and 3D convolution networks.
We propose a new framework for outdoor LiDAR segmentation, in which cylindrical partitioning and asymmetrical 3D convolution networks are designed to exploit the 3D geometric pattern.
arXiv Detail & Related papers (2021-09-12T06:25:11Z)
- Individual Tree Detection and Crown Delineation with 3D Information from Multi-view Satellite Images [5.185018253122575]
Individual tree detection and crown delineation (ITDD) are critical in forest inventory management.
We propose an ITDD method using the orthophoto and digital surface model (DSM) derived from multi-view satellite data.
Experiments against manually marked tree plots on three representative regions have demonstrated promising results.
arXiv Detail & Related papers (2021-07-01T16:28:43Z)
- Visualizing hierarchies in scRNA-seq data using a density tree-biased autoencoder [50.591267188664666]
We propose an approach for identifying a meaningful tree structure from high-dimensional scRNA-seq data.
We then introduce DTAE, a tree-biased autoencoder that emphasizes the tree structure of the data in low dimensional space.
arXiv Detail & Related papers (2021-02-11T08:48:48Z)
- Growing Deep Forests Efficiently with Soft Routing and Learned Connectivity [79.83903179393164]
This paper further extends the deep forest idea in several important aspects.
We employ a probabilistic tree whose nodes make probabilistic routing decisions, a.k.a. soft routing, rather than hard binary decisions.
Experiments on the MNIST dataset demonstrate that our empowered deep forests can achieve performance better than or comparable to [1],[3].
arXiv Detail & Related papers (2020-12-29T18:05:05Z)
- PT2PC: Learning to Generate 3D Point Cloud Shapes from Part Tree Conditions [66.87405921626004]
This paper investigates the novel problem of generating 3D shape point cloud geometry from a symbolic part tree representation.
We propose a conditional GAN "part tree"-to-"point cloud" model (PT2PC) that disentangles the structural and geometric factors.
arXiv Detail & Related papers (2020-03-19T08:27:25Z)
- 3D Crowd Counting via Geometric Attention-guided Multi-View Fusion [50.520192402702015]
We propose to solve the multi-view crowd counting task through 3D feature fusion with 3D scene-level density maps.
Compared to 2D fusion, 3D fusion extracts more information about people along the z-dimension (height), which helps address the scale variations across multiple views.
The 3D density maps still preserve the 2D density maps property that the sum is the count, while also providing 3D information about the crowd density.
arXiv Detail & Related papers (2020-03-18T11:35:11Z)
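The last entry above relies on the density-map property that "the sum is the count", which density-based tree counting shares. As a small illustration (the kernel width and map size are arbitrary assumptions, and this is not code from any of the papers listed), the sketch below builds a density map from point annotations with a Gaussian kernel that integrates to one, so summing the map recovers the number of annotated objects.

```python
# Illustrative sketch: convert point annotations (tree or head locations)
# into a density map; summing the map recovers the object count.
import numpy as np
from scipy.ndimage import gaussian_filter


def density_map(points, shape, sigma=4.0):
    """points: iterable of (row, col) annotations; shape: (H, W)."""
    dots = np.zeros(shape, dtype=np.float64)
    for r, c in points:
        dots[int(r), int(c)] += 1.0  # one unit of mass per annotated object
    # Each unit impulse is spread by a Gaussian whose weights sum to ~1,
    # so the total mass (and hence the count) is preserved.
    return gaussian_filter(dots, sigma=sigma, mode="constant")


annotations = [(20, 30), (64, 64), (100, 90)]  # three annotated objects
dmap = density_map(annotations, shape=(128, 128))
print(round(dmap.sum()))  # -> 3
```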