Histogram of Oriented Gradients Meet Deep Learning: A Novel Multi-task
Deep Network for Medical Image Semantic Segmentation
- URL: http://arxiv.org/abs/2204.01712v1
- Date: Sat, 2 Apr 2022 23:50:29 GMT
- Title: Histogram of Oriented Gradients Meet Deep Learning: A Novel Multi-task
Deep Network for Medical Image Semantic Segmentation
- Authors: Binod Bhattarai, Ronast Subedi, Rebati Raman Gaire, Eduard Vazquez,
Danail Stoyanov
- Abstract summary: We present our novel deep multi-task learning method for medical image segmentation.
We generate the pseudo-labels of an auxiliary task in an unsupervised manner.
Our method consistently improves the performance compared to the counterpart method.
- Score: 18.066680957993494
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a novel deep multi-task learning method for medical image
segmentation. Existing multi-task methods demand ground-truth annotations for
both the primary and the auxiliary tasks. In contrast, we propose to generate
the pseudo-labels of the auxiliary task in an unsupervised manner. To generate
the pseudo-labels, we leverage the Histogram of Oriented Gradients (HOG), one
of the most widely used and powerful hand-crafted features for detection.
Together with the ground-truth semantic segmentation masks for the primary
task and the pseudo-labels for the auxiliary task, we learn the parameters of
the deep network to minimise the losses of the primary and auxiliary tasks
jointly. We applied our method to two powerful and widely used semantic
segmentation networks, UNet and U2Net, trained in a multi-task setup. To
validate our hypothesis, we performed experiments on two different medical
image segmentation datasets. Extensive quantitative and qualitative results
show that our method consistently improves performance compared to the
single-task counterpart. Moreover, our method won the FetReg EndoVis
Sub-challenge on Semantic Segmentation organised in conjunction with
MICCAI 2021.
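As a rough illustration of the pseudo-label generation step described in the abstract, the sketch below computes per-cell HOG orientation histograms with plain NumPy. The cell size, bin count, gradient filter, and unsigned-orientation convention are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np

def hog_pseudo_labels(image, cell=8, bins=9):
    """Per-cell histograms of oriented gradients (unsigned, 0-180 deg).

    Returns an array of shape (H // cell, W // cell, bins) that could
    serve as a dense target for an auxiliary prediction task.
    """
    img = image.astype(np.float64)
    # Central-difference gradients (simple [-1, 0, 1] filter).
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]

    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation

    h, w = img.shape
    hc, wc = h // cell, w // cell
    bin_idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    hist = np.zeros((hc, wc, bins))
    for i in range(hc):
        for j in range(wc):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            b = bin_idx[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            # Magnitude-weighted vote into orientation bins.
            hist[i, j] = np.bincount(b.ravel(), weights=m.ravel(),
                                     minlength=bins)
    return hist
```

In the multi-task setup, an auxiliary head would regress these per-cell histograms while the primary head predicts the segmentation mask, and both losses are minimised jointly.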
Related papers
- Auxiliary Tasks Enhanced Dual-affinity Learning for Weakly Supervised
Semantic Segmentation [79.05949524349005]
We propose AuxSegNet+, a weakly supervised auxiliary learning framework to explore the rich information from saliency maps.
We also propose a cross-task affinity learning mechanism to learn pixel-level affinities from the saliency and segmentation feature maps.
arXiv Detail & Related papers (2024-03-02T10:03:21Z)
- Two-Stage Multi-task Self-Supervised Learning for Medical Image
Segmentation [1.5863809575305416]
Medical image segmentation has been significantly advanced by deep learning (DL) techniques.
The data scarcity inherent in medical applications poses a great challenge to DL-based segmentation methods.
arXiv Detail & Related papers (2024-02-11T07:49:35Z)
- Distribution Matching for Multi-Task Learning of Classification Tasks: a
Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework in which multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful even when the tasks' annotations overlap little or not at all.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z)
- Semantics-Depth-Symbiosis: Deeply Coupled Semi-Supervised Learning of
Semantics and Depth [83.94528876742096]
We tackle the MTL problem of two dense tasks, i.e., semantic segmentation and depth estimation, and present a novel attention module called the Cross-Channel Attention Module (CCAM).
In a true symbiotic spirit, we then formulate a novel data augmentation for the semantic segmentation task using predicted depth called AffineMix, and a simple depth augmentation using predicted semantics called ColorAug.
Finally, we validate the performance gain of the proposed method on the Cityscapes dataset, which helps us achieve state-of-the-art results for a semi-supervised joint model based on depth and semantics.
arXiv Detail & Related papers (2022-06-21T17:40:55Z)
- Fast Inference and Transfer of Compositional Task Structures for
Few-shot Task Generalization [101.72755769194677]
We formulate it as a few-shot reinforcement learning problem where a task is characterized by a subtask graph.
Our multi-task subtask graph inferencer (MTSGI) first infers the common high-level task structure in terms of the subtask graph from the training tasks.
Our experiment results on 2D grid-world and complex web navigation domains show that the proposed method can learn and leverage the common underlying structure of the tasks for faster adaptation to the unseen tasks.
arXiv Detail & Related papers (2022-05-25T10:44:25Z)
- Scribble-Supervised Medical Image Segmentation via Dual-Branch Network
and Dynamically Mixed Pseudo Labels Supervision [15.414578073908906]
We propose a simple yet efficient scribble-supervised image segmentation method and apply it to cardiac MRI segmentation.
By combining the scribble supervision and auxiliary pseudo labels supervision, the dual-branch network can efficiently learn from scribble annotations end-to-end.
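A hedged sketch of the dynamically mixed pseudo-label idea above: the two branches' probability maps are blended with a random weight, and the per-pixel argmax serves as the supervision target. The blending scheme below is an assumption based on this summary, not the paper's exact formulation.

```python
import numpy as np

def mixed_pseudo_labels(prob_a, prob_b, rng=None):
    """Blend two branches' class-probability maps with a random weight
    and take the per-pixel argmax as a hard pseudo-label.

    prob_a, prob_b: arrays of shape (C, H, W).
    """
    rng = np.random.default_rng() if rng is None else rng
    alpha = rng.uniform()  # re-drawn every call, hence "dynamically" mixed
    mixed = alpha * prob_a + (1.0 - alpha) * prob_b
    return mixed.argmax(axis=0)
```

The mixed hard labels would then supervise both branches alongside the scribble loss.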
arXiv Detail & Related papers (2022-03-04T02:50:30Z)
- Leveraging Auxiliary Tasks with Affinity Learning for Weakly Supervised
Semantic Segmentation [88.49669148290306]
We propose a novel weakly supervised multi-task framework called AuxSegNet to leverage saliency detection and multi-label image classification as auxiliary tasks.
Inspired by their similar structured semantics, we also propose to learn a cross-task global pixel-level affinity map from the saliency and segmentation representations.
The learned cross-task affinity can be used to refine saliency predictions and propagate CAM maps to provide improved pseudo labels for both tasks.
arXiv Detail & Related papers (2021-07-25T11:39:58Z)
- Dual-Task Mutual Learning for Semi-Supervised Medical Image Segmentation [12.940103904327655]
We propose a novel dual-task mutual learning framework for semi-supervised medical image segmentation.
Our framework can be formulated as an integration of two individual segmentation networks based on two tasks.
By jointly learning the segmentation probability maps and signed distance maps of targets, our framework can enforce the geometric shape constraint and learn more reliable information.
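To make the signed-distance-map idea above concrete, such a map can be derived from a binary mask as sketched below. This brute-force NumPy version is only illustrative (real pipelines would use a fast distance transform), and the sign convention of negative inside, positive outside is an assumption.

```python
import numpy as np

def signed_distance_map(mask):
    """Signed Euclidean distance to the mask boundary.

    Convention (an assumption here): negative inside the object,
    positive outside, zero on boundary pixels.
    Brute force O(N^2); fine only for tiny illustrative masks.
    """
    mask = mask.astype(bool)
    h, w = mask.shape
    # Boundary pixels: foreground pixels with a background 4-neighbour.
    padded = np.pad(mask, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    boundary = mask & ~interior
    by, bx = np.nonzero(boundary)
    if len(by) == 0:
        return np.full((h, w), np.inf)
    ys, xs = np.mgrid[0:h, 0:w]
    # Distance from every pixel to its nearest boundary pixel.
    d = np.sqrt((ys[..., None] - by) ** 2 + (xs[..., None] - bx) ** 2).min(-1)
    return np.where(mask, -d, d)
```

Predicting such a map alongside the segmentation probability map is what lets a dual-task network enforce a geometric shape constraint.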
arXiv Detail & Related papers (2021-03-08T12:38:23Z)
- Semi-supervised Medical Image Segmentation through Dual-task Consistency [18.18484640332254]
We propose a novel dual-task deep network that jointly predicts a pixel-wise segmentation map and a geometry-aware level set representation of the target.
Our method can largely improve the performance by incorporating the unlabeled data.
Our framework outperforms the state-of-the-art semi-supervised medical image segmentation methods.
arXiv Detail & Related papers (2020-09-09T17:49:21Z)
- HF-UNet: Learning Hierarchically Inter-Task Relevance in Multi-Task
U-Net for Accurate Prostate Segmentation [56.86396352441269]
We tackle the challenging task of prostate segmentation in CT images with a two-stage network: the first stage quickly localises the prostate, and the second stage segments it accurately.
To precisely segment the prostate in the second stage, we formulate prostate segmentation into a multi-task learning framework, which includes a main task to segment the prostate, and an auxiliary task to delineate the prostate boundary.
By contrast, conventional multi-task deep networks typically share most of the parameters (i.e., feature representations) across all tasks, which may limit their data-fitting ability, as the specificities of different tasks are inevitably ignored.
arXiv Detail & Related papers (2020-05-21T02:53:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.