Multi-task fully convolutional network for tree species mapping in dense forests using small training hyperspectral data
- URL: http://arxiv.org/abs/2106.00799v1
- Date: Tue, 1 Jun 2021 21:10:10 GMT
- Title: Multi-task fully convolutional network for tree species mapping in dense forests using small training hyperspectral data
- Authors: Laura Elena Cué La Rosa, Camile Sothe, Raul Queiroz Feitosa, Cláudia Maria de Almeida, Marcos Benedito Schimalski, Dario Augusto Borges Oliveira
- Abstract summary: This work proposes a multi-task fully convolutional architecture for tree species mapping in dense forests.
Our model implements a partial loss function that enables dense tree semantic labeling outcomes from non-dense training samples.
A distance regression complementary task that enforces tree crown boundary constraints substantially improves the model performance.
- Score: 0.9646880519252493
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work proposes a multi-task fully convolutional architecture for tree
species mapping in dense forests from sparse and scarce polygon-level
annotations using hyperspectral UAV-borne data. Our model implements a partial
loss function that enables dense tree semantic labeling outcomes from non-dense
training samples, and a distance regression complementary task that enforces
tree crown boundary constraints and substantially improves the model
performance. Our multi-task architecture uses a shared backbone network that
learns common representations for both tasks and two task-specific decoders,
one for the semantic segmentation output and one for the distance map
regression. We report that introducing the complementary task boosts the
semantic segmentation performance by up to 10% compared to the single-task
counterpart, reaching an overall F1 score of 87.5% and an overall accuracy of
85.9%, achieving state-of-the-art performance for tree species classification
in tropical forests.
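To make the described architecture concrete, the following is a minimal PyTorch sketch of a shared-encoder network with two task-specific decoders, a partial (masked) segmentation loss, and an auxiliary distance regression loss. The toy encoder, the ignore index of -1 for unannotated pixels, the loss weight `alpha`, and the `crown_distance_target` helper are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch (assumptions noted in comments): shared encoder, two
# task-specific decoders, and a partial loss that ignores unlabeled pixels.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from scipy.ndimage import distance_transform_edt


class MultiTaskFCN(nn.Module):
    """Shared backbone with a segmentation head and a distance-map head."""

    def __init__(self, in_bands: int, n_classes: int):
        super().__init__()
        # Toy shared encoder for hyperspectral input (in_bands channels);
        # the real backbone is deeper, this only illustrates the wiring.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_bands, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.seg_head = nn.Conv2d(128, n_classes, 1)   # per-pixel class logits
        self.dist_head = nn.Conv2d(128, 1, 1)          # distance-to-boundary map

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_head(feats), self.dist_head(feats)


def crown_distance_target(crown_mask: np.ndarray) -> np.ndarray:
    """Illustrative distance target: normalized Euclidean distance from each
    annotated crown pixel to the nearest crown boundary / background pixel."""
    dist = distance_transform_edt(crown_mask)            # zero outside crowns
    return dist / max(float(dist.max()), 1.0)            # scale to [0, 1]


def multitask_loss(seg_logits, dist_pred, labels, dist_target, alpha=0.5):
    """Partial cross-entropy over annotated pixels only, plus masked MSE for
    the distance regression; alpha is an assumed task-weighting factor."""
    seg_loss = F.cross_entropy(seg_logits, labels, ignore_index=-1)
    mask = (labels != -1).float().unsqueeze(1)            # supervise only labels
    dist_loss = (mask * (dist_pred - dist_target) ** 2).sum() / mask.sum().clamp(min=1.0)
    return seg_loss + alpha * dist_loss
```

With hyperspectral patches of shape (batch, bands, H, W), integer labels of shape (batch, H, W) set to -1 outside the training polygons, and distance targets built from the crown masks, the two heads are trained jointly; the auxiliary distance head mainly serves as extra supervision during training.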
Related papers
- Scalable Multitask Learning Using Gradient-based Estimation of Task Affinity [16.643892206707854]
Grad-TAG can estimate task affinities without repeatedly training on data from various task combinations.
We show that Grad-TAG achieves excellent performance and runtime tradeoffs compared to existing approaches.
arXiv Detail & Related papers (2024-09-09T21:59:27Z)
- GrootVL: Tree Topology is All You Need in State Space Model [66.36757400689281]
GrootVL is a versatile multimodal framework that can be applied to both visual and textual tasks.
Our method significantly outperforms existing structured state space models on image classification, object detection and segmentation.
By fine-tuning large language models, our approach achieves consistent improvements in multiple textual tasks at minor training cost.
arXiv Detail & Related papers (2024-06-04T15:09:29Z)
- LIGHT: Joint Individual Building Extraction and Height Estimation from Satellite Images through a Unified Multitask Learning Network [8.09909901104654]
Building extraction and height estimation are two important basic tasks in remote sensing image interpretation.
Most of the existing research regards the two tasks as independent studies.
In this work, we combine the individuaL buIlding extraction and heiGHt estimation through a unified multiTask learning network.
arXiv Detail & Related papers (2023-04-03T15:48:24Z)
- Semantics-Depth-Symbiosis: Deeply Coupled Semi-Supervised Learning of Semantics and Depth [83.94528876742096]
We tackle the MTL problem of two dense tasks, i.e., semantic segmentation and depth estimation, and present a novel attention module called the Cross-Channel Attention Module (CCAM).
In a true symbiotic spirit, we then formulate a novel data augmentation for the semantic segmentation task using predicted depth called AffineMix, and a simple depth augmentation using predicted semantics called ColorAug.
Finally, we validate the performance gain of the proposed method on the Cityscapes dataset, which helps us achieve state-of-the-art results for a semi-supervised joint model based on depth and semantics.
arXiv Detail & Related papers (2022-06-21T17:40:55Z)
- Leveraging Auxiliary Tasks with Affinity Learning for Weakly Supervised Semantic Segmentation [88.49669148290306]
We propose a novel weakly supervised multi-task framework called AuxSegNet to leverage saliency detection and multi-label image classification as auxiliary tasks.
Inspired by their similar structured semantics, we also propose to learn a cross-task global pixel-level affinity map from the saliency and segmentation representations.
The learned cross-task affinity can be used to refine saliency predictions and propagate CAM maps to provide improved pseudo labels for both tasks.
arXiv Detail & Related papers (2021-07-25T11:39:58Z)
- Multi-task Over-the-Air Federated Learning: A Non-Orthogonal Transmission Approach [52.85647632037537]
We propose a multi-task over-the-air federated learning (MOAFL) framework, where multiple learning tasks share edge devices for data collection and learning models under the coordination of an edge server (ES).
Both the convergence analysis and numerical results demonstrate that the MOAFL framework can significantly reduce the uplink bandwidth consumption of multiple tasks without causing substantial learning performance degradation.
arXiv Detail & Related papers (2021-06-27T13:09:32Z)
- Empirical Study of Multi-Task Hourglass Model for Semantic Segmentation Task [0.7614628596146599]
We propose to use a multi-task approach by complementing the semantic segmentation task with edge detection, semantic contour, and distance transform tasks.
We demonstrate the effectiveness of learning in a multi-task setting for hourglass models in the Cityscapes, CamVid, and Freiburg Forest datasets.
arXiv Detail & Related papers (2021-05-28T01:08:10Z)
- Learning to Relate Depth and Semantics for Unsupervised Domain Adaptation [87.1188556802942]
We present an approach for encoding visual task relationships to improve model performance in an Unsupervised Domain Adaptation (UDA) setting.
We propose a novel Cross-Task Relation Layer (CTRL), which encodes task dependencies between the semantic and depth predictions.
Furthermore, we propose an Iterative Self-Learning (ISL) training scheme, which exploits semantic pseudo-labels to provide extra supervision on the target domain.
arXiv Detail & Related papers (2021-05-17T13:42:09Z)
- Instance segmentation of fallen trees in aerial color infrared imagery using active multi-contour evolution with fully convolutional network-based intensity priors [0.5276232626689566]
We introduce a framework for segmenting instances of a common object class by multiple active contour evolution over semantic segmentation maps of images.
We instantiate the proposed framework in the context of segmenting individual fallen stems from high-resolution aerial multispectral imagery.
arXiv Detail & Related papers (2021-05-05T11:54:05Z)
- SOSD-Net: Joint Semantic Object Segmentation and Depth Estimation from Monocular images [94.36401543589523]
We introduce the concept of semantic objectness to exploit the geometric relationship of these two tasks.
We then propose a Semantic Object and Depth Estimation Network (SOSD-Net) based on the objectness assumption.
To the best of our knowledge, SOSD-Net is the first network that exploits the geometry constraint for simultaneous monocular depth estimation and semantic segmentation.
arXiv Detail & Related papers (2021-01-19T02:41:03Z)
- Semantic Segmentation With Multi Scale Spatial Attention For Self Driving Cars [2.7317088388886384]
We present a novel neural network using multi-scale feature fusion for accurate and efficient semantic image segmentation.
We use a ResNet-based feature extractor, dilated convolutional layers in the downsampling part, and atrous convolutional layers in the upsampling part, merging them with a concatenation operation.
A new attention module is proposed to encode more contextual information and enhance the receptive field of the network.
arXiv Detail & Related papers (2020-06-30T20:19:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.