Deep Multi-Task Augmented Feature Learning via Hierarchical Graph Neural
Network
- URL: http://arxiv.org/abs/2002.04813v1
- Date: Wed, 12 Feb 2020 06:02:20 GMT
- Title: Deep Multi-Task Augmented Feature Learning via Hierarchical Graph Neural
Network
- Authors: Pengxin Guo, Chang Deng, Linjie Xu, Xiaonan Huang, Yu Zhang
- Abstract summary: We propose a Hierarchical Graph Neural Network to learn augmented features for deep multi-task learning.
Experiments on real-world datasets show significant performance improvements when using this strategy.
- Score: 4.121467410954028
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep multi-task learning has attracted much attention in recent years as it
achieves good performance in many applications. Feature learning is important
to deep multi-task learning for sharing common information among tasks. In this
paper, we propose a Hierarchical Graph Neural Network (HGNN) to learn augmented
features for deep multi-task learning. The HGNN consists of two-level graph
neural networks. In the low level, an intra-task graph neural network is
responsible for learning a powerful representation for each data point in a task
by aggregating its neighbors. Based on the learned representation, a task
embedding can be generated for each task in a similar way to max pooling. In
the second level, an inter-task graph neural network updates task embeddings of
all the tasks based on the attention mechanism to model task relations. Then
the task embedding of one task is used to augment the feature representation of
data points in this task. Moreover, for classification tasks, an inter-class
graph neural network is introduced to conduct similar operations on a finer
granularity, i.e., the class level, to generate class embeddings for each class
in all the tasks, and the class embeddings are used to augment the feature representation.
The proposed feature augmentation strategy can be used in many deep multi-task
learning models. We analyze the HGNN in terms of training and generalization
losses. Experiments on real-world datasets show significant performance
improvements when using this strategy.
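As a rough illustration of the augmentation pipeline described in the abstract, the sketch below wires together an intra-task graph aggregation step, max-pooled task embeddings, an attention-based inter-task update, and concatenation of the updated task embedding onto each data point's representation. The module names, dimensions, and the specific graph convolution and attention forms are illustrative assumptions, not the authors' exact HGNN.

```python
# Minimal sketch of two-level feature augmentation (assumed architecture details).
import torch
import torch.nn as nn
import torch.nn.functional as F


class IntraTaskGNN(nn.Module):
    """Low level: aggregate each data point's neighbors within one task."""

    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, x, adj):
        # x:   (n, in_dim)  node features of one task
        # adj: (n, n)       row-normalized adjacency (with self-loops)
        return F.relu(self.lin(adj @ x))


class InterTaskGNN(nn.Module):
    """High level: update task embeddings with scaled dot-product attention."""

    def __init__(self, dim):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(dim, dim) for _ in range(3))

    def forward(self, task_emb):
        # task_emb: (T, dim), one embedding per task
        scores = self.q(task_emb) @ self.k(task_emb).t() / task_emb.size(1) ** 0.5
        attn = torch.softmax(scores, dim=-1)       # task-relation weights
        return task_emb + attn @ self.v(task_emb)


class HGNNAugmenter(nn.Module):
    """Produces augmented features: [node representation || task embedding]."""

    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.intra = IntraTaskGNN(in_dim, hid_dim)
        self.inter = InterTaskGNN(hid_dim)

    def forward(self, xs, adjs):
        # xs, adjs: lists with one (features, adjacency) pair per task
        reps = [self.intra(x, a) for x, a in zip(xs, adjs)]
        # Task embedding via max pooling over each task's node representations.
        task_emb = torch.stack([r.max(dim=0).values for r in reps])
        task_emb = self.inter(task_emb)            # model task relations
        # Augment every data point with its (updated) task embedding.
        return [torch.cat([r, task_emb[t].expand(r.size(0), -1)], dim=-1)
                for t, r in enumerate(reps)]


if __name__ == "__main__":
    torch.manual_seed(0)
    xs = [torch.randn(5, 8), torch.randn(7, 8)]    # two toy tasks
    adjs = [torch.eye(5), torch.eye(7)]            # trivial intra-task graphs
    aug = HGNNAugmenter(in_dim=8, hid_dim=16)(xs, adjs)
    print([a.shape for a in aug])                  # [(5, 32), (7, 32)]
```

In a full model, the augmented features would feed the task-specific heads of whatever deep multi-task architecture the strategy is plugged into; the inter-class variant described for classification tasks would repeat the pooling and attention steps at the class level.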
Related papers
- Can Graph Learning Improve Planning in LLM-based Agents? [61.47027387839096]
Task planning in language agents is emerging as an important research topic alongside the development of large language models (LLMs)
In this paper, we explore graph learning-based methods for task planning, a direction orthogonal to the prevalent focus on prompt design.
Our interest in graph learning stems from a theoretical discovery: the biases of attention and auto-regressive loss impede LLMs' ability to effectively navigate decision-making on graphs.
arXiv Detail & Related papers (2024-05-29T14:26:24Z) - ULTRA-DP: Unifying Graph Pre-training with Multi-task Graph Dual Prompt [67.8934749027315]
We propose a unified framework for graph hybrid pre-training which injects the task identification and position identification into GNNs.
We also propose a novel pre-training paradigm based on a group of $k$-nearest neighbors.
arXiv Detail & Related papers (2023-10-23T12:11:13Z) - Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks [69.38572074372392]
We present the first results proving that feature learning occurs during training with a nonlinear model on multiple tasks.
Our key insight is that multi-task pretraining induces a pseudo-contrastive loss that favors representations that align points that typically have the same label across tasks.
arXiv Detail & Related papers (2023-07-13T16:39:08Z) - Relational Multi-Task Learning: Modeling Relations between Data and
Tasks [84.41620970886483]
A key assumption in multi-task learning is that at inference time the model only has access to a given data point, but not to the data point's labels from other tasks.
Here we introduce a novel relational multi-task learning setting where we leverage data point labels from auxiliary tasks to make more accurate predictions.
We develop MetaLink, where our key innovation is to build a knowledge graph that connects data points and tasks.
arXiv Detail & Related papers (2023-03-14T07:15:41Z) - Multi-task Self-supervised Graph Neural Networks Enable Stronger Task
Generalization [40.265515914447924]
Self-supervised learning (SSL) for graph neural networks (GNNs) has attracted increasing attention from the machine learning community in recent years.
One weakness of conventional SSL frameworks for GNNs is that they learn through a single philosophy.
arXiv Detail & Related papers (2022-10-05T04:09:38Z) - Backbones-Review: Feature Extraction Networks for Deep Learning and Deep
Reinforcement Learning Approaches [3.255610188565679]
CNNs allow working on large-scale data and cover different scenarios for a specific task.
Many networks have been proposed and have become well-known backbones used in DL models across AI tasks.
A backbone is a network that has been trained on many other tasks and has demonstrated its effectiveness.
arXiv Detail & Related papers (2022-06-16T09:18:34Z) - Graph Representation Learning for Multi-Task Settings: a Meta-Learning
Approach [5.629161809575013]
We propose a novel training strategy for graph representation learning, based on meta-learning.
Our method avoids the difficulties arising when learning to perform multiple tasks concurrently.
We show that the embeddings produced by a model trained with our method can be used to perform multiple tasks with comparable or, surprisingly, even higher performance than both single-task and multi-task end-to-end models.
arXiv Detail & Related papers (2022-01-10T12:58:46Z) - Graph-Based Neural Network Models with Multiple Self-Supervised
Auxiliary Tasks [79.28094304325116]
Graph Convolutional Networks are among the most promising approaches for capturing relationships among structured data points.
We propose three novel self-supervised auxiliary tasks to train graph-based neural network models in a multi-task fashion.
arXiv Detail & Related papers (2020-11-14T11:09:51Z) - MTI-Net: Multi-Scale Task Interaction Networks for Multi-Task Learning [82.62433731378455]
We show that tasks with high affinity at a certain scale are not guaranteed to retain this behaviour at other scales.
We propose a novel architecture, namely MTI-Net, that builds upon this finding.
arXiv Detail & Related papers (2020-01-19T21:02:36Z)