MTLSegFormer: Multi-task Learning with Transformers for Semantic
Segmentation in Precision Agriculture
- URL: http://arxiv.org/abs/2305.02813v1
- Date: Thu, 4 May 2023 13:19:43 GMT
- Title: MTLSegFormer: Multi-task Learning with Transformers for Semantic
Segmentation in Precision Agriculture
- Authors: Diogo Nunes Goncalves, Jose Marcato Junior, Pedro Zamboni, Hemerson
Pistori, Jonathan Li, Keiller Nogueira, Wesley Nunes Goncalves
- Abstract summary: We propose a semantic segmentation method, MTLSegFormer, which combines multi-task learning and attention mechanisms.
We tested the performance in two challenging problems with correlated tasks and observed a significant improvement in accuracy.
- Score: 16.817025300716796
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-task learning has proven effective at improving the
performance of correlated tasks. Most existing methods use a backbone to
extract initial features, with independent branches for each task, and
information is usually exchanged between the branches by concatenating or
summing their feature maps. However, this type of information exchange does
not directly account for the local characteristics of the image or for the
importance and correlation between tasks. In this paper, we propose a semantic
segmentation method, MTLSegFormer, which combines multi-task learning and
attention mechanisms. After backbone feature extraction, two feature maps are
learned for each task. The first map learns features specific to its own task,
while the second is obtained by applying learned visual attention to locally
re-weight the feature maps of the other tasks. In this way, weights are
assigned to the local image regions of the other tasks that are most important
for the task at hand. Finally, the two maps are combined and used to solve the
task. We evaluated the method on two challenging problems with correlated
tasks and observed a significant improvement in accuracy, mainly in tasks that
depend strongly on the others.
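The cross-task attention described in the abstract can be pictured with a
short sketch. The following is a minimal, illustrative PyTorch interpretation,
not the authors' implementation; the module name CrossTaskAttention, the
1x1-convolution attention maps, and the sigmoid gating are assumptions made
for the example.

```python
# Minimal sketch (assumed names, not the paper's code): for one task, the other
# tasks' feature maps are re-weighted by learned spatial attention and fused
# with the task's own map before its decoder head.
from typing import List

import torch
import torch.nn as nn


class CrossTaskAttention(nn.Module):
    def __init__(self, channels: int, num_tasks: int):
        super().__init__()
        # One 1x1 conv per "other" task produces a spatial attention map.
        self.attn = nn.ModuleList(
            [nn.Conv2d(channels, 1, kernel_size=1) for _ in range(num_tasks - 1)]
        )
        # Fuse the task's own map with the attended maps of the other tasks.
        self.fuse = nn.Conv2d(channels * num_tasks, channels, kernel_size=1)

    def forward(self, own: torch.Tensor, others: List[torch.Tensor]) -> torch.Tensor:
        attended = [torch.sigmoid(a(f)) * f for a, f in zip(self.attn, others)]
        return self.fuse(torch.cat([own] + attended, dim=1))


# Toy usage with two tasks sharing backbone features of shape (B, C, H, W).
feats = [torch.randn(2, 64, 32, 32) for _ in range(2)]  # one map per task
head = CrossTaskAttention(channels=64, num_tasks=2)
out_task0 = head(feats[0], feats[1:])  # (2, 64, 32, 32), fed to task 0's decoder
```

Under these assumptions, the sigmoid attention plays the role of the local
re-weighting described above: regions of another task's features that matter
for the current task receive weights close to 1, while the rest are suppressed.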
Related papers
- Auxiliary Tasks Enhanced Dual-affinity Learning for Weakly Supervised
Semantic Segmentation [79.05949524349005]
We propose AuxSegNet+, a weakly supervised auxiliary learning framework to explore the rich information from saliency maps.
We also propose a cross-task affinity learning mechanism to learn pixel-level affinities from the saliency and segmentation feature maps.
arXiv Detail & Related papers (2024-03-02T10:03:21Z)
- Distribution Matching for Multi-Task Learning of Classification Tasks: a
Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework where multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful on classification tasks with little or even non-overlapping annotations.
We propose a novel approach in which knowledge exchange between the tasks is enabled via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z)
- Sequential Cross Attention Based Multi-task Learning [22.430705836627148]
We propose a novel architecture that effectively transfers informative features by applying the attention mechanism to the multi-scale features of the tasks.
Our method achieves state-of-the-art performance on the NYUD-v2 and PASCAL-Context datasets.
arXiv Detail & Related papers (2022-09-06T14:17:33Z)
- Fast Inference and Transfer of Compositional Task Structures for Few-shot
Task Generalization [101.72755769194677]
We formulate few-shot task generalization as a reinforcement learning problem in which each task is characterized by a subtask graph.
Our multi-task subtask graph inferencer (MTSGI) first infers the common high-level task structure in terms of the subtask graph from the training tasks.
Our experiment results on 2D grid-world and complex web navigation domains show that the proposed method can learn and leverage the common underlying structure of the tasks for faster adaptation to the unseen tasks.
arXiv Detail & Related papers (2022-05-25T10:44:25Z)
- Counting with Adaptive Auxiliary Learning [23.715818463425503]
This paper proposes an adaptive auxiliary task learning based approach for object counting problems.
We develop an attention-enhanced, adaptively shared backbone network that enables learning of both task-shared and task-tailored features.
Our method outperforms state-of-the-art counting methods based on auxiliary task learning.
arXiv Detail & Related papers (2022-03-08T13:10:17Z)
- Distribution Matching for Heterogeneous Multi-Task Learning: a Large-scale
Face Study [75.42182503265056]
Multi-Task Learning has emerged as a methodology in which multiple tasks are jointly learned by a shared learning algorithm.
We deal with heterogeneous MTL, simultaneously addressing detection, classification & regression problems.
We build FaceBehaviorNet, the first framework for large-scale face analysis, by jointly learning all facial behavior tasks.
arXiv Detail & Related papers (2021-05-08T22:26:52Z)
- Multi-task Supervised Learning via Cross-learning [102.64082402388192]
We consider a problem known as multi-task learning, consisting of fitting a set of regression functions intended for solving different tasks.
In our novel formulation, we couple the parameters of these functions so that they learn in their task-specific domains while staying close to each other.
This facilitates cross-fertilization, in which data collected in different domains help improve the learning performance on each other's tasks (a minimal sketch of this coupling appears after the list below).
arXiv Detail & Related papers (2020-10-24T21:35:57Z)
- MTI-Net: Multi-Scale Task Interaction Networks for Multi-Task Learning
[82.62433731378455]
We show that tasks with high affinity at a certain scale are not guaranteed to retain this behaviour at other scales.
We propose a novel architecture, namely MTI-Net, that builds upon this finding.
arXiv Detail & Related papers (2020-01-19T21:02:36Z)
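As referenced in the cross-learning entry above, the idea of task parameters
"staying close to each other" can be read as a proximity-regularized joint
objective. The sketch below shows one assumed, illustrative form (the names
coupled_loss and lam are made up for the example); it is not the paper's exact
formulation.

```python
# Illustrative proximity-coupled multi-task objective (assumed form): per-task
# losses plus a penalty pulling every task's parameters toward their mean.
import torch


def coupled_loss(task_losses, task_params, lam=0.1):
    # task_losses: one scalar loss tensor per task.
    # task_params: one (flattened) parameter tensor per task, all the same shape.
    mean_param = torch.stack(task_params).mean(dim=0)
    proximity = sum(((p - mean_param) ** 2).sum() for p in task_params)
    return sum(task_losses) + lam * proximity


# Toy usage with two regression tasks sharing the same parameter shape.
params = [torch.randn(10, requires_grad=True) for _ in range(2)]
losses = [(p.sum() - target) ** 2 for p, target in zip(params, (1.0, -1.0))]
total = coupled_loss(losses, params)
total.backward()  # gradients flow through both the task losses and the coupling
```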
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.