Multi-task Learning for Human Settlement Extent Regression and Local
Climate Zone Classification
- URL: http://arxiv.org/abs/2011.11452v1
- Date: Mon, 23 Nov 2020 14:54:13 GMT
- Title: Multi-task Learning for Human Settlement Extent Regression and Local
Climate Zone Classification
- Authors: Chunping Qiu, Lukas Liebel, Lloyd H. Hughes, Michael Schmitt, Marco
Körner, and Xiao Xiang Zhu
- Abstract summary: Human Settlement Extent (HSE) and Local Climate Zone (LCZ) maps are essential sources, e.g., for sustainable urban development and Urban Heat Island (UHI) studies.
Remote sensing (RS)- and deep learning (DL)-based classification approaches play a significant role by providing the potential for global mapping.
Most of the efforts only focus on one of the two schemes, usually on a specific scale.
In this letter, the concept of multi-task learning (MTL) is introduced to HSE regression and LCZ classification for the first time.
- Score: 13.6334717951406
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human Settlement Extent (HSE) and Local Climate Zone (LCZ) maps are both
essential sources, e.g., for sustainable urban development and Urban Heat
Island (UHI) studies. Remote sensing (RS)- and deep learning (DL)-based
classification approaches play a significant role by providing the potential
for global mapping. However, most of the efforts only focus on one of the two
schemes, usually on a specific scale. This leads to unnecessary redundancies,
since the learned features could be leveraged for both of these related tasks.
In this letter, the concept of multi-task learning (MTL) is introduced to HSE
regression and LCZ classification for the first time. We propose an MTL
framework and develop an end-to-end Convolutional Neural Network (CNN), which
consists of a backbone network for shared feature learning, attention modules
for task-specific feature learning, and a weighting strategy for balancing the
two tasks. We additionally propose to exploit HSE predictions as a prior for
LCZ classification to enhance the accuracy. The MTL approach was extensively
tested with Sentinel-2 data of 13 cities across the world. The results
demonstrate that the framework is able to provide a competitive solution for
both tasks.
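The abstract names a weighting strategy for balancing the regression and classification losses but does not specify it. A common choice for such two-task setups, and purely an assumption here, is uncertainty-based weighting in the style of Kendall et al., where each task loss is scaled by a learned log-variance term. A minimal sketch of that combined loss:

```python
import math

def combined_loss(hse_loss, lcz_loss, log_var_hse, log_var_lcz):
    """Uncertainty-weighted sum of a regression loss and a classification loss.

    Each task loss is scaled by exp(-log_var); the additive log_var term
    penalizes arbitrarily large learned variances (i.e. ignoring a task).
    """
    weighted_hse = math.exp(-log_var_hse) * hse_loss + log_var_hse
    weighted_lcz = math.exp(-log_var_lcz) * lcz_loss + log_var_lcz
    return weighted_hse + weighted_lcz
```

In practice the two log-variances would be trainable parameters updated jointly with the network weights; with both set to zero, the expression reduces to a plain sum of the two task losses.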
Related papers
- SCE-MAE: Selective Correspondence Enhancement with Masked Autoencoder for Self-Supervised Landmark Estimation [20.29438820908913]
Self-supervised landmark estimation is a challenging task that demands the formation of locally distinct feature representations.
We introduce SCE-MAE, a framework that operates on the vanilla feature map instead of on expensive hypercolumns.
We demonstrate through experiments that SCE-MAE is highly effective and robust, outperforming existing SOTA methods by large margins.
arXiv Detail & Related papers (2024-05-28T16:14:10Z) - Multi-Task Learning as enabler for General-Purpose AI-native RAN [1.4295558450631414]
This study explores the effectiveness of multi-task learning (MTL) approaches in facilitating a general-purpose AI-native Radio Access Network (RAN).
The investigation focuses on four RAN tasks: (i) secondary carrier prediction, (ii) user location prediction, (iii) indoor link classification, and (iv) line-of-sight link classification.
We validate the performance using realistic simulations considering multi-faceted design aspects of MTL including model architecture, loss and gradient balancing strategies, distributed learning topology, data sparsity and task groupings.
arXiv Detail & Related papers (2024-04-05T21:12:25Z) - Structural Credit Assignment with Coordinated Exploration [0.0]
Methods aimed at improving structural credit assignment can generally be classified into two categories.
We propose the use of Boltzmann machines or a recurrent network for coordinated exploration.
Experimental results demonstrate that coordinated exploration significantly exceeds independent exploration in training speed.
arXiv Detail & Related papers (2023-07-25T04:55:45Z) - Knowledge Transfer-Driven Few-Shot Class-Incremental Learning [23.163459923345556]
Few-shot class-incremental learning (FSCIL) aims to continually learn new classes using a few samples while not forgetting the old classes.
Despite the advance of existing FSCIL methods, the proposed knowledge transfer learning schemes are sub-optimal due to the insufficient optimization for the model's plasticity.
We propose a Random Episode Sampling and Augmentation (RESA) strategy that relies on diverse pseudo incremental tasks as agents to achieve the knowledge transfer.
arXiv Detail & Related papers (2023-06-19T14:02:45Z) - USER: Unified Semantic Enhancement with Momentum Contrast for Image-Text
Retrieval [115.28586222748478]
Image-Text Retrieval (ITR) aims at searching for the target instances that are semantically relevant to the given query from the other modality.
Existing approaches typically suffer from two major limitations.
arXiv Detail & Related papers (2023-01-17T12:42:58Z) - Self-Supervised Graph Neural Network for Multi-Source Domain Adaptation [51.21190751266442]
Domain adaptation (DA) tries to tackle the scenarios when the test data does not fully follow the same distribution of the training data.
By learning from large-scale unlabeled samples, self-supervised learning has now become a new trend in deep learning.
We propose a novel Self-Supervised Graph Neural Network (SSG) to enable more effective inter-task information exchange and knowledge sharing.
arXiv Detail & Related papers (2022-04-08T03:37:56Z) - Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z) - PGL: Prior-Guided Local Self-supervised Learning for 3D Medical Image
Segmentation [87.50205728818601]
We propose a PriorGuided Local (PGL) self-supervised model that learns the region-wise local consistency in the latent feature space.
Our PGL model learns the distinctive representations of local regions, and hence is able to retain structural information.
arXiv Detail & Related papers (2020-11-25T11:03:11Z) - Towards Accurate Knowledge Transfer via Target-awareness Representation
Disentanglement [56.40587594647692]
We propose a novel transfer learning algorithm, introducing the idea of Target-awareness REpresentation Disentanglement (TRED).
TRED disentangles the knowledge relevant to the target task from the original source model and uses it as a regularizer during fine-tuning of the target model.
Experiments on various real-world datasets show that our method stably improves standard fine-tuning by more than 2% on average.
arXiv Detail & Related papers (2020-10-16T17:45:08Z) - Dif-MAML: Decentralized Multi-Agent Meta-Learning [54.39661018886268]
We propose a cooperative multi-agent meta-learning algorithm, referred to as Diffusion-based MAML or Dif-MAML.
We show that the proposed strategy allows a collection of agents to attain agreement at a linear rate and to converge to a stationary point of the aggregate MAML objective.
Simulation results illustrate the theoretical findings and the superior performance relative to the traditional non-cooperative setting.
arXiv Detail & Related papers (2020-10-06T16:51:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.