MulGT: Multi-task Graph-Transformer with Task-aware Knowledge Injection
and Domain Knowledge-driven Pooling for Whole Slide Image Analysis
- URL: http://arxiv.org/abs/2302.10574v3
- Date: Thu, 30 Mar 2023 08:51:05 GMT
- Title: MulGT: Multi-task Graph-Transformer with Task-aware Knowledge Injection
and Domain Knowledge-driven Pooling for Whole Slide Image Analysis
- Authors: Weiqin Zhao, Shujun Wang, Maximus Yeung, Tianye Niu, Lequan Yu
- Abstract summary: Whole slide images (WSIs) have been widely used to assist automated diagnosis in the deep learning field.
We present a novel multi-task framework (i.e., MulGT) for WSI analysis by the specially designed Graph-Transformer.
- Score: 17.098951643252345
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Whole slide images (WSIs) have been widely used to assist automated diagnosis
in the deep learning field. However, most previous works only discuss the
single-task setting, which is not aligned with the real clinical setting, where
pathologists often conduct multiple diagnosis tasks simultaneously. Also, it is
commonly recognized that the multi-task learning paradigm can improve learning
efficiency by exploiting commonalities and differences across multiple tasks.
To this end, we present a novel multi-task framework (i.e., MulGT) for WSI
analysis by the specially designed Graph-Transformer equipped with Task-aware
Knowledge Injection and Domain Knowledge-driven Graph Pooling modules.
Basically, with the Graph Neural Network and Transformer as the building
blocks, our framework is able to learn task-agnostic low-level local
information as well as task-specific high-level global representations.
Considering that different tasks in WSI analysis depend on different features
and properties, we also design a novel Task-aware Knowledge Injection module to
transfer the task-shared graph embedding into task-specific feature spaces to
learn more accurate representation for different tasks. Further, we elaborately
design a novel Domain Knowledge-driven Graph Pooling module for each task to
improve both the accuracy and robustness of different tasks by leveraging
different diagnosis patterns of multiple tasks. We evaluated our method on two
public WSI datasets from TCGA projects, i.e., esophageal carcinoma and kidney
carcinoma. Experimental results show that our method outperforms single-task
counterparts and state-of-the-art methods on both tumor typing and staging
tasks.
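To make the described pipeline concrete, below is a minimal PyTorch-style sketch of the shared-encoder/task-specific-branch structure outlined in the abstract: a task-agnostic graph encoder, a per-task projection standing in for the Task-aware Knowledge Injection module, and per-task attention pooling standing in for the Domain Knowledge-driven Graph Pooling module. All module names, dimensions, and the attention-pooling substitute are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class MultiTaskGraphTransformerSketch(nn.Module):
    """Illustrative shared-encoder / task-specific-branch layout (not the authors' code)."""

    def __init__(self, in_dim=1024, hid_dim=256, n_classes=(2, 4)):
        super().__init__()
        # Task-agnostic patch-graph encoder: one message-passing step
        # (adjacency times projected features) stands in for a GNN.
        self.shared_proj = nn.Linear(in_dim, hid_dim)
        # Task-aware knowledge injection analogue: one projection per task that
        # maps the shared graph embedding into a task-specific feature space.
        self.inject = nn.ModuleList([nn.Linear(hid_dim, hid_dim) for _ in n_classes])
        # Per-task attention pooling as a stand-in for domain knowledge-driven
        # graph pooling (each task learns its own node scores).
        self.pool_score = nn.ModuleList([nn.Linear(hid_dim, 1) for _ in n_classes])
        # Task-specific Transformer encoders and classification heads.
        self.encoders = nn.ModuleList([
            nn.TransformerEncoder(
                nn.TransformerEncoderLayer(hid_dim, nhead=4, batch_first=True),
                num_layers=1,
            )
            for _ in n_classes
        ])
        self.heads = nn.ModuleList([nn.Linear(hid_dim, c) for c in n_classes])

    def forward(self, x, adj):
        # x: (B, N, in_dim) patch features; adj: (B, N, N) patch-graph adjacency.
        h = torch.relu(adj @ self.shared_proj(x))   # shared low-level node features
        outputs = []
        for inject, score, enc, head in zip(self.inject, self.pool_score,
                                            self.encoders, self.heads):
            t = enc(inject(h))                        # task-specific global context
            w = torch.softmax(score(t), dim=1)        # per-node pooling weights
            outputs.append(head((w * t).sum(dim=1)))  # weighted readout -> task logits
        return outputs


# Example: tumor typing (2 classes) and staging (4 classes) for one WSI graph.
model = MultiTaskGraphTransformerSketch()
x, adj = torch.randn(1, 500, 1024), torch.eye(500).unsqueeze(0)
typing_logits, staging_logits = model(x, adj)
```

In this layout the per-task projections let the typing and staging branches reshape the shared node embeddings before each branch pools the graph with its own learned weights, which is the general idea behind task-specific feature spaces and task-specific pooling described above.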
Related papers
- Distribution Matching for Multi-Task Learning of Classification Tasks: a
Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework in which multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful with classification tasks that have little or no overlap in annotations.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z)
- Fast Inference and Transfer of Compositional Task Structures for
Few-shot Task Generalization [101.72755769194677]
We formulate it as a few-shot reinforcement learning problem where a task is characterized by a subtask graph.
Our multi-task subtask graph inferencer (MTSGI) first infers the common high-level task structure in terms of the subtask graph from the training tasks.
Our experiment results on 2D grid-world and complex web navigation domains show that the proposed method can learn and leverage the common underlying structure of the tasks for faster adaptation to the unseen tasks.
arXiv Detail & Related papers (2022-05-25T10:44:25Z)
- Continual Object Detection via Prototypical Task Correlation Guided
Gating Mechanism [120.1998866178014]
We present a flexible framework for continual object detection via pRotOtypical taSk corrElaTion guided gaTing mechAnism (ROSETTA).
Concretely, a unified framework is shared by all tasks while task-aware gates are introduced to automatically select sub-models for specific tasks.
Experiments on COCO-VOC, KITTI-Kitchen, class-incremental detection on VOC and sequential learning of four tasks show that ROSETTA yields state-of-the-art performance.
arXiv Detail & Related papers (2022-05-06T07:31:28Z)
- On Steering Multi-Annotations per Sample for Multi-Task Learning [79.98259057711044]
The study of multi-task learning has drawn great attention from the community.
Despite the remarkable progress, the challenge of optimally learning different tasks simultaneously remains to be explored.
Previous works attempt to modify the gradients from different tasks, yet these methods rely on a subjective assumption about the relationship between tasks, and the modified gradients may be less accurate.
In this paper, we introduce Stochastic Task Allocation (STA), a mechanism that addresses this issue with a task allocation approach in which each sample is randomly allocated a subset of tasks.
For further progress, we propose Interleaved Stochastic Task Allocation (ISTA) to iteratively allocate all
arXiv Detail & Related papers (2022-03-06T11:57:18Z)
- Meta-Learning with Fewer Tasks through Task Interpolation [67.03769747726666]
Current meta-learning algorithms require a large number of meta-training tasks, which may not be accessible in real-world scenarios.
Our meta-learning with task interpolation (MLTI) approach effectively generates additional tasks by randomly sampling a pair of tasks and interpolating the corresponding features and labels.
Empirically, in our experiments on eight datasets from diverse domains, we find that the proposed general MLTI framework is compatible with representative meta-learning algorithms and consistently outperforms other state-of-the-art strategies.
arXiv Detail & Related papers (2021-06-04T20:15:34Z)
- Distribution Matching for Heterogeneous Multi-Task Learning: a
Large-scale Face Study [75.42182503265056]
Multi-Task Learning has emerged as a methodology in which multiple tasks are jointly learned by a shared learning algorithm.
We deal with heterogeneous MTL, simultaneously addressing detection, classification & regression problems.
We build FaceBehaviorNet, the first framework for large-scale face analysis, by jointly learning all facial behavior tasks.
arXiv Detail & Related papers (2021-05-08T22:26:52Z)
- Understanding and Improving Information Transfer in Multi-Task Learning [14.43111978531182]
We study an architecture with a shared module for all tasks and a separate output module for each task.
We show that misalignment between task data can cause negative transfer (or hurt performance) and provide sufficient conditions for positive transfer.
Inspired by the theoretical insights, we show that aligning tasks' embedding layers leads to performance gains for multi-task training and transfer learning.
arXiv Detail & Related papers (2020-05-02T23:43:52Z)
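The last entry above studies an architecture with one shared module feeding a separate output module per task. A minimal hard-parameter-sharing sketch of that setup follows; the module names, sizes, and losses are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedTrunkMultiTask(nn.Module):
    """One shared module plus a separate output module per task (hard parameter sharing)."""

    def __init__(self, in_dim=128, hid_dim=64, task_out_dims=(10, 3)):
        super().__init__()
        # Shared module: representation reused by every task.
        self.shared = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        # One lightweight output module per task.
        self.heads = nn.ModuleList([nn.Linear(hid_dim, d) for d in task_out_dims])

    def forward(self, x):
        z = self.shared(x)
        return [head(z) for head in self.heads]


# Joint training minimizes a (possibly weighted) sum of per-task losses.
model = SharedTrunkMultiTask()
x = torch.randn(32, 128)
y0, y1 = torch.randint(0, 10, (32,)), torch.randint(0, 3, (32,))
out0, out1 = model(x)
loss = F.cross_entropy(out0, y0) + F.cross_entropy(out1, y1)
loss.backward()
```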