Two-Stage Multi-task Self-Supervised Learning for Medical Image
Segmentation
- URL: http://arxiv.org/abs/2402.07119v1
- Date: Sun, 11 Feb 2024 07:49:35 GMT
- Title: Two-Stage Multi-task Self-Supervised Learning for Medical Image
Segmentation
- Authors: Binyan Hu and A. K. Qin
- Abstract summary: Medical image segmentation has been significantly advanced by deep learning (DL) techniques.
The data scarcity inherent in medical applications poses a great challenge to DL-based segmentation methods.
- Score: 1.5863809575305416
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Medical image segmentation has been significantly advanced by deep learning
(DL) techniques, though the data scarcity inherent in medical applications
poses a great challenge to DL-based segmentation methods. Self-supervised
learning offers a solution by creating auxiliary learning tasks from the
available dataset and then leveraging the knowledge acquired from solving
auxiliary tasks to help better solve the target segmentation task. Different
auxiliary tasks may have different properties and thus can help the target task
to different extents. It is therefore desirable to leverage their complementary advantages
to enhance the overall assistance to the target task. To achieve this, existing
methods often adopt a joint training paradigm, which co-solves segmentation and
auxiliary tasks by integrating their losses or intermediate gradients. However,
direct coupling of losses or intermediate gradients risks undesirable
interference because the knowledge acquired from solving each auxiliary task at
every training step may not always benefit the target task. To address this
issue, we propose a two-stage training approach. In the first stage, the target
segmentation task will be independently co-solved with each auxiliary task in
both joint training and pre-training modes, with the better model selected via
validation performance. In the second stage, the models obtained with respect
to each auxiliary task are converted into a single model using an ensemble
knowledge distillation method. Our approach makes the best use of each
auxiliary task to create multiple elite segmentation models and then combines
them into an even more powerful model. We employed five auxiliary tasks with
different properties in our approach and applied it to train the U-Net model
on an X-ray pneumothorax segmentation dataset. Experimental results demonstrate
the superiority of our approach over several existing methods.
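As a rough illustration of the two-stage recipe described in the abstract, the sketch below assumes PyTorch, a binary segmentation mask, a model factory `make_model()` whose forward pass returns a dict with "seg" and "aux" outputs, and loaders yielding (image, mask, auxiliary target). It is a hedged reading of the abstract, not the authors' released code.

```python
import torch
import torch.nn.functional as F


def dice_loss(logits, target, eps=1e-6):
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    return 1 - (2 * inter + eps) / (prob.sum() + target.sum() + eps)


def train(model, loader, loss_fn, epochs=1, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for image, seg, aux in loader:
            opt.zero_grad()
            loss_fn(model(image), seg, aux).backward()
            opt.step()
    return model


def val_dice(model, loader):
    with torch.no_grad():
        return sum(1 - dice_loss(model(img)["seg"], seg).item()
                   for img, seg, _ in loader) / len(loader)


def stage_one(make_model, train_loader, val_loader, aux_losses):
    """Stage 1: for each auxiliary task, try joint training and pre-training,
    and keep whichever variant validates better (an 'elite' model)."""
    elites = []
    for aux_loss in aux_losses:
        joint = train(make_model(), train_loader,
                      lambda o, s, a: dice_loss(o["seg"], s) + aux_loss(o["aux"], a))
        pre = train(make_model(), train_loader,
                    lambda o, s, a: aux_loss(o["aux"], a))    # pre-train on aux task
        pre = train(pre, train_loader,
                    lambda o, s, a: dice_loss(o["seg"], s))   # fine-tune on segmentation
        elites.append(max((joint, pre), key=lambda m: val_dice(m, val_loader)))
    return elites


def stage_two(make_model, elites, train_loader, epochs=1, alpha=0.5, lr=1e-3):
    """Stage 2: ensemble knowledge distillation of the elites into one student."""
    student = make_model()
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(epochs):
        for image, seg, _ in train_loader:
            with torch.no_grad():   # averaged soft masks from the elite teachers
                soft = torch.stack([torch.sigmoid(t(image)["seg"])
                                    for t in elites]).mean(0)
            logits = student(image)["seg"]
            loss = (alpha * dice_loss(logits, seg)
                    + (1 - alpha) * F.binary_cross_entropy_with_logits(logits, soft))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student
```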
Related papers
- Data-CUBE: Data Curriculum for Instruction-based Sentence Representation
Learning [85.66907881270785]
We propose a data curriculum method, namely Data-CUBE, that arranges the order of all the multi-task data for training.
At the task level, we aim to find the optimal task order that minimizes the total cross-task interference risk.
At the instance level, we measure the difficulty of all instances per task and then divide them into easy-to-difficult mini-batches for training.
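Purely as an illustration of the instance-level step (not the paper's implementation), easy-to-difficult mini-batches can be sketched as sorting each task's examples by an assumed difficulty score and chunking them in order:

```python
def curriculum_batches(examples, difficulty, batch_size):
    ordered = sorted(examples, key=difficulty)       # easy -> difficult
    return [ordered[i:i + batch_size]
            for i in range(0, len(ordered), batch_size)]

# e.g. difficulty(ex) could be 1 minus the similarity between a sentence and
# its positive pair under the current model, so harder pairs come later.
```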
arXiv Detail & Related papers (2024-01-07T18:12:20Z) - TBGC: Task-level Backbone-Oriented Gradient Clip for Multi-Task
Foundation Model Learning [0.0]
We propose a task-level, backbone-oriented gradient clipping paradigm as an alternative to the vanilla gradient clipping method.
Based on the experimental results, we argue that task-level, backbone-oriented gradient clipping can relieve the gradient bias problem to some extent.
Our approach has been shown to be effective, achieving 1st place on Leaderboard A and 2nd place on Leaderboard B of the CVPR2023 Foundation Model Challenge.
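Under my own assumptions about the setup (a shared backbone and one scalar loss per task), task-level, backbone-oriented gradient clipping could look roughly like the following PyTorch sketch; it is not the paper's code, and the task heads' gradients are omitted for brevity.

```python
import torch


def backbone_clip_step(backbone, task_losses, optimizer, max_norm=1.0):
    optimizer.zero_grad()
    params = list(backbone.parameters())
    for loss in task_losses:
        # this task's gradient w.r.t. the shared backbone only
        grads = torch.autograd.grad(loss, params, retain_graph=True)
        norm = torch.sqrt(sum((g ** 2).sum() for g in grads))
        scale = min(1.0, max_norm / (float(norm) + 1e-6))   # clip this task's norm
        for p, g in zip(params, grads):
            # accumulate the clipped per-task gradients on the backbone
            p.grad = g * scale if p.grad is None else p.grad + g * scale
    optimizer.step()
```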
arXiv Detail & Related papers (2023-07-07T08:57:57Z) - Composite Learning for Robust and Effective Dense Predictions [81.2055761433725]
Multi-task learning promises better model generalization on a target task by jointly optimizing it with an auxiliary task.
We find that jointly training a dense prediction (target) task with a self-supervised (auxiliary) task can consistently improve the performance of the target task, while eliminating the need for labeling auxiliary tasks.
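A minimal sketch of such joint training, with an assumed encoder, dense-prediction head, and a label-free rotation-prediction head standing in for the self-supervised auxiliary task; none of these names come from the paper.

```python
import torch
import torch.nn.functional as F


def joint_step(encoder, dense_head, rot_head, optimizer, images, dense_targets):
    optimizer.zero_grad()
    # target task: supervised dense prediction (e.g. per-pixel classes)
    target_loss = F.cross_entropy(dense_head(encoder(images)), dense_targets)
    # auxiliary task: predict how many times each image was rotated by 90 degrees
    k = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack([torch.rot90(img, int(r), dims=(1, 2))
                           for img, r in zip(images, k)])
    aux_loss = F.cross_entropy(rot_head(encoder(rotated)), k)
    (target_loss + aux_loss).backward()
    optimizer.step()
```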
arXiv Detail & Related papers (2022-10-13T17:59:16Z) - Histogram of Oriented Gradients Meet Deep Learning: A Novel Multi-task
Deep Network for Medical Image Semantic Segmentation [18.066680957993494]
We present our novel deep multi-task learning method for medical image segmentation.
We generate the pseudo-labels of an auxiliary task in an unsupervised manner.
Our method consistently improves the performance compared to the counterpart method.
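The title indicates the unsupervised pseudo-labels come from Histograms of Oriented Gradients; a hedged sketch of that idea (assumed function names and loss weighting, not the paper's code) is:

```python
import numpy as np
import torch
import torch.nn.functional as F
from skimage.feature import hog


def hog_pseudo_label(gray_image):
    """gray_image: 2-D numpy array; returns a dense HOG map as the pseudo-label."""
    _, hog_map = hog(gray_image, orientations=9, pixels_per_cell=(8, 8),
                     cells_per_block=(2, 2), visualize=True)
    return torch.from_numpy(hog_map.astype(np.float32))


def multitask_loss(seg_logits, seg_mask, hog_pred, hog_target, weight=0.1):
    seg = F.binary_cross_entropy_with_logits(seg_logits, seg_mask)
    aux = F.mse_loss(hog_pred, hog_target)      # regress the HOG pseudo-label
    return seg + weight * aux
```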
arXiv Detail & Related papers (2022-04-02T23:50:29Z) - Transfer Learning in Conversational Analysis through Reusing
Preprocessing Data as Supervisors [52.37504333689262]
Using noisy labels in single-task learning increases the risk of over-fitting.
Auxiliary tasks can improve the performance of primary task learning during the same training run.
arXiv Detail & Related papers (2021-12-02T08:40:42Z) - Conflict-Averse Gradient Descent for Multi-task Learning [56.379937772617]
A major challenge in optimizing a multi-task model is conflicting gradients between tasks.
We introduce Conflict-Averse Gradient Descent (CAGrad), which minimizes the average loss function.
CAGrad balances the objectives automatically and still provably converges to a minimum over the average loss.
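A condensed sketch of one way to realize the CAGrad update for K flattened task gradients, based on the dual formulation over simplex weights from the CAGrad paper; the SciPy solver and the constant c are illustrative choices, so treat this as a reading of the method rather than the authors' released code.

```python
import numpy as np
import torch
from scipy.optimize import minimize


def cagrad_direction(task_grads, c=0.5):
    """task_grads: list of flattened gradient tensors, one per task."""
    G = torch.stack(task_grads)                 # [K, dim]
    GG = (G @ G.t()).cpu().numpy()              # Gram matrix of task gradients
    K = GG.shape[0]
    g0_norm = np.sqrt(GG.mean()) + 1e-8         # norm of the average gradient
    radius = c * g0_norm
    avg = np.ones(K) / K

    def objective(w):                           # min_w  g_w.g_0 + radius * ||g_w||
        return float(w @ GG @ avg + radius * np.sqrt(w @ GG @ w + 1e-8))

    res = minimize(objective, avg, bounds=[(0, 1)] * K,
                   constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
    w = torch.tensor(res.x, dtype=G.dtype, device=G.device)
    gw = (w.unsqueeze(1) * G).sum(0)            # conflict-averse combination
    g0 = G.mean(0)                              # plain average gradient
    return g0 + (radius / (gw.norm() + 1e-8)) * gw
```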
arXiv Detail & Related papers (2021-10-26T22:03:51Z) - Adaptive Transfer Learning on Graph Neural Networks [4.233435459239147]
Graph neural networks (GNNs) are widely used to learn a powerful representation of graph-structured data.
Recent work demonstrates that transferring knowledge from self-supervised tasks to downstream tasks can further improve graph representations.
We propose a new transfer learning paradigm for GNNs that effectively leverages self-supervised tasks as auxiliary tasks to help the target task.
arXiv Detail & Related papers (2021-07-19T11:46:28Z) - Multi-task Supervised Learning via Cross-learning [102.64082402388192]
We consider a problem known as multi-task learning, consisting of fitting a set of regression functions intended for solving different tasks.
In our novel formulation, we couple the parameters of these functions so that they learn in their task-specific domains while staying close to each other.
This facilitates cross-fertilization, in which data collected across different domains help improve the learning performance on each task.
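A toy sketch of the coupling idea: each task fits its own regressor on its own data, with a proximity penalty that keeps every task's parameters near their mean; the linear models and the coupling strength mu are assumptions for illustration.

```python
import torch


def cross_learning_loss(task_weights, task_data, mu=0.1):
    # task_weights: list of parameter vectors; task_data: list of (X, y) pairs
    mean_w = torch.stack(task_weights).mean(0)
    loss = 0.0
    for w, (X, y) in zip(task_weights, task_data):
        loss = loss + ((X @ w - y) ** 2).mean()       # task-specific fit
        loss = loss + mu * ((w - mean_w) ** 2).sum()  # stay close to the consensus
    return loss
```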
arXiv Detail & Related papers (2020-10-24T21:35:57Z) - Auxiliary Learning by Implicit Differentiation [54.92146615836611]
Training neural networks with auxiliary tasks is a common practice for improving the performance on a main task of interest.
Here, we propose a novel framework, AuxiLearn, that targets both challenges based on implicit differentiation.
First, when useful auxiliaries are known, we propose learning a network that combines all losses into a single coherent objective function.
Second, when no useful auxiliary task is known, we describe how to learn a network that generates a meaningful, novel auxiliary task.
arXiv Detail & Related papers (2020-06-22T19:35:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.