Multi-Relational Graph based Heterogeneous Multi-Task Learning in
Community Question Answering
- URL: http://arxiv.org/abs/2110.02059v1
- Date: Sat, 4 Sep 2021 03:19:20 GMT
- Title: Multi-Relational Graph based Heterogeneous Multi-Task Learning in
Community Question Answering
- Authors: Zizheng Lin, Haowen Ke, Ngo-Yin Wong, Jiaxin Bai, Yangqiu Song, Huan
Zhao, Junpeng Ye
- Abstract summary: We develop a multi-relational graph based Multi-Task Learning model called Heterogeneous Multi-Task Graph Isomorphism Network (HMTGIN).
In each training forward pass, HMTGIN embeds the input CQA forum graph by an extension of Graph Isomorphism Network and skip connections.
In the evaluation, the embeddings are shared among different task-specific output layers to make corresponding predictions.
- Score: 28.91133131424694
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Various data mining tasks have been proposed to study Community Question
Answering (CQA) platforms like Stack Overflow. The relatedness between some of
these tasks provides useful learning signals to each other via Multi-Task
Learning (MTL). However, due to the high heterogeneity of these tasks, few
existing works manage to jointly solve them in a unified framework. To tackle
this challenge, we develop a multi-relational graph based MTL model called
Heterogeneous Multi-Task Graph Isomorphism Network (HMTGIN) which efficiently
solves heterogeneous CQA tasks. In each training forward pass, HMTGIN embeds
the input CQA forum graph by an extension of Graph Isomorphism Network and skip
connections. The embeddings are then shared across all task-specific output
layers to compute respective losses. Moreover, two cross-task constraints based
on the domain knowledge about tasks' relationships are used to regularize the
joint learning. In the evaluation, the embeddings are shared among different
task-specific output layers to make corresponding predictions. To the best of
our knowledge, HMTGIN is the first MTL model capable of tackling CQA tasks from
the aspect of multi-relational graphs. To evaluate HMTGIN's effectiveness, we
build a novel large-scale multi-relational graph CQA dataset with over two
million nodes from Stack Overflow. Extensive experiments show that: (1) HMTGIN is
superior to all baselines on five tasks; (2) the proposed MTL strategy and
cross-task constraints have substantial advantages.
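As a rough, illustrative sketch of the two core ideas in the abstract (a GIN-style graph update with skip connections, and a shared embedding feeding multiple task-specific output layers), the following uses NumPy with invented dimensions, weights, and heads; it is not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def gin_layer(H, A, W, eps=0.0):
    # One GIN update: h_v' = ReLU(W^T ((1 + eps) * h_v + sum_{u in N(v)} h_u))
    agg = (1.0 + eps) * H + A @ H
    return np.maximum(agg @ W, 0.0)

# Toy forum graph: 4 nodes (e.g. users/questions/answers), symmetric adjacency.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
H0 = rng.normal(size=(4, 8))            # initial node features

W1 = 0.1 * rng.normal(size=(8, 8))      # hypothetical layer weights
W2 = 0.1 * rng.normal(size=(8, 8))

H1 = gin_layer(H0, A, W1)
H2 = gin_layer(H1, A, W2)

# Skip connections: concatenate the representations from every layer.
Z = np.concatenate([H0, H1, H2], axis=1)   # shape (4, 24)

# The shared embedding Z feeds separate task-specific output heads,
# each computing its own loss during training.
W_cls = 0.1 * rng.normal(size=(24, 3))  # e.g. a 3-way classification head
W_reg = 0.1 * rng.normal(size=(24, 1))  # e.g. a regression head
logits = Z @ W_cls
scores = Z @ W_reg
```

The key point of the sketch is that the graph encoder is run once per forward pass and only the small output heads differ per task.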
Related papers
- InLINE: Inner-Layer Information Exchange for Multi-task Learning on Heterogeneous Graphs [13.204120407041195]
Heterogeneous graph is an important structure for modeling complex data in real-world scenarios.
We propose the Inner-Layer Information Exchange model, which facilitates fine-grained information exchange within each graph layer.
Our model effectively alleviates the significant performance drop on specific tasks caused by negative transfer.
arXiv Detail & Related papers (2024-10-29T14:46:49Z)
- Distribution Matching for Multi-Task Learning of Classification Tasks: a Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework, where multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful with classification tasks that have little or even non-overlapping annotations.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z)
- Boosting Multitask Learning on Graphs through Higher-Order Task Affinities [17.70434437597516]
Predicting node labels on a given graph is a widely studied problem with many applications, including community detection and molecular graph prediction.
This paper considers predicting multiple node labeling functions on graphs simultaneously and revisits this problem from a multitask learning perspective.
We develop an algorithm to cluster tasks into groups based on a higher-order task affinity measure.
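As a hedged illustration of the general idea (the affinity values, threshold, and greedy rule below are invented for this sketch and are not the paper's algorithm), grouping tasks by a pairwise affinity measure could look like:

```python
import numpy as np

# Hypothetical pairwise task-affinity matrix for 4 node-labeling tasks
# (higher value = training those tasks together helps more).
affinity = np.array([
    [1.0, 0.8, 0.1, 0.2],
    [0.8, 1.0, 0.2, 0.1],
    [0.1, 0.2, 1.0, 0.9],
    [0.2, 0.1, 0.9, 1.0],
])

def group_tasks(aff, threshold=0.5):
    """Greedy clustering: a task joins the first existing group whose mean
    affinity to it exceeds the threshold; otherwise it starts a new group."""
    groups = []
    for t in range(aff.shape[0]):
        for g in groups:
            if aff[t, g].mean() > threshold:
                g.append(t)
                break
        else:
            groups.append([t])
    return groups

print(group_tasks(affinity))  # → [[0, 1], [2, 3]]
```

Each resulting group would then be trained as its own multitask model, avoiding negative transfer between low-affinity tasks.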
arXiv Detail & Related papers (2023-06-24T15:53:38Z)
- Relational Multi-Task Learning: Modeling Relations between Data and Tasks [84.41620970886483]
A key assumption in multi-task learning is that, at inference time, the model only has access to a given data point but not to the data point's labels from other tasks.
Here we introduce a novel relational multi-task learning setting where we leverage data point labels from auxiliary tasks to make more accurate predictions.
We develop MetaLink, where our key innovation is to build a knowledge graph that connects data points and tasks.
arXiv Detail & Related papers (2023-03-14T07:15:41Z)
- Provable Pathways: Learning Multiple Tasks over Multiple Paths [31.43753806123382]
We develop novel generalization bounds for empirical risk minimization problems learning multiple tasks over multiple paths.
In conjunction, we formalize the benefits of resulting multipath representation when adapting to new downstream tasks.
arXiv Detail & Related papers (2023-03-08T02:25:28Z)
- Multimodal Subtask Graph Generation from Instructional Videos [51.96856868195961]
Real-world tasks consist of multiple inter-dependent subtasks.
In this work, we aim to model the causal dependencies between such subtasks from instructional videos describing the task.
We present Multimodal Subtask Graph Generation (MSG2), an approach that constructs a Subtask Graph defining the dependencies among a task's subtasks from noisy web videos.
arXiv Detail & Related papers (2023-02-17T03:41:38Z)
- Arch-Graph: Acyclic Architecture Relation Predictor for Task-Transferable Neural Architecture Search [96.31315520244605]
Arch-Graph is a transferable NAS method that predicts task-specific optimal architectures.
We show Arch-Graph's transferability and high sample efficiency across numerous tasks.
It is able to find top 0.16% and 0.29% architectures on average on two search spaces under the budget of only 50 models.
arXiv Detail & Related papers (2022-04-12T16:46:06Z)
- MGA-VQA: Multi-Granularity Alignment for Visual Question Answering [75.55108621064726]
Learning to answer visual questions is a challenging task since the multi-modal inputs are within two feature spaces.
We propose the Multi-Granularity Alignment architecture for the Visual Question Answering task (MGA-VQA).
Our model splits alignment into different levels to achieve learning better correlations without needing additional data and annotations.
arXiv Detail & Related papers (2022-01-25T22:30:54Z)
- Distribution Matching for Heterogeneous Multi-Task Learning: a Large-scale Face Study [75.42182503265056]
Multi-Task Learning has emerged as a methodology in which multiple tasks are jointly learned by a shared learning algorithm.
We deal with heterogeneous MTL, simultaneously addressing detection, classification & regression problems.
We build FaceBehaviorNet, the first framework for large-scale face analysis, by jointly learning all facial behavior tasks.
arXiv Detail & Related papers (2021-05-08T22:26:52Z)
- Learning Twofold Heterogeneous Multi-Task by Sharing Similar Convolution Kernel Pairs [24.044458897098913]
Heterogeneous multi-task learning (HMTL) is an important topic in multi-task learning (MTL).
We design a simple and effective multi-task adaptive learning (MTAL) network to learn multiple tasks in this twofold heterogeneous MTL (THMTL) setting.
Our model effectively performs cross-task learning while suppressing the intra-redundancy of the entire network.
arXiv Detail & Related papers (2021-01-29T06:52:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.