Multi-Task Self-Supervised Learning for Image Segmentation Task
- URL: http://arxiv.org/abs/2302.02483v1
- Date: Sun, 5 Feb 2023 21:25:59 GMT
- Title: Multi-Task Self-Supervised Learning for Image Segmentation Task
- Authors: Lichun Gao, Chinmaya Khamesra, Uday Kumbhar, Ashay Aglawe
- Abstract summary: The paper presents (1) self-supervised techniques to boost semantic segmentation performance using multi-task learning with depth prediction and surface normal estimation, and (2) a performance evaluation of the different loss-weighting techniques (UW, Nash-MTL) used for multi-task learning.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Thanks to breakthroughs in AI and deep learning methodology, computer vision
techniques are rapidly improving. Most computer vision applications require
sophisticated image segmentation to comprehend what is in an image and to make
analysis of each region easier. Training deep learning networks for semantic
segmentation requires a large amount of annotated data, which presents a major
challenge in practice, as it is expensive and labor-intensive to produce such
data. The paper presents (1) self-supervised techniques to boost semantic
segmentation performance using multi-task learning with depth prediction and
surface normal estimation, and (2) a performance evaluation of the different
loss-weighting techniques, uncertainty weighting (UW) and Nash-MTL, used for
multi-task learning. The NYUv2 dataset was used for performance evaluation.
According to our evaluation, the Nash-MTL method outperforms single-task
learning (semantic segmentation).
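The abstract describes combining three dense prediction tasks in one network and weighting their losses with UW or Nash-MTL. Below is a minimal PyTorch sketch of one plausible reading of that setup: a hard-parameter-shared encoder with segmentation, depth, and surface-normal heads, whose losses are combined via uncertainty weighting (Kendall et al., 2018). Nash-MTL, which instead derives task weights from a Nash bargaining solution over task gradients, is omitted for brevity. All architecture details, loss choices, and hyperparameters are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch (not the authors' released code) of the setup the abstract
# describes: a shared encoder with three task heads whose losses are combined
# by homoscedastic uncertainty weighting (UW, Kendall et al., 2018). Sizes,
# layers, and loss choices are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskNet(nn.Module):
    """Hard parameter sharing: one encoder feeds all task-specific heads."""
    def __init__(self, num_classes=13):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(64, num_classes, 1)  # per-pixel class logits
        self.depth_head = nn.Conv2d(64, 1, 1)          # per-pixel depth
        self.normal_head = nn.Conv2d(64, 3, 1)         # per-pixel surface normal

    def forward(self, x):
        f = self.encoder(x)
        return self.seg_head(f), self.depth_head(f), self.normal_head(f)

class UncertaintyWeighting(nn.Module):
    """UW: learn s_i = log(sigma_i^2) per task; total = sum_i 0.5*exp(-s_i)*L_i + 0.5*s_i."""
    def __init__(self, num_tasks=3):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, losses):
        return sum(0.5 * torch.exp(-s) * l + 0.5 * s
                   for l, s in zip(losses, self.log_vars))

# One hypothetical training step on a small batch of RGB images.
model, uw = MultiTaskNet(), UncertaintyWeighting(num_tasks=3)
opt = torch.optim.Adam(list(model.parameters()) + list(uw.parameters()), lr=1e-4)

images = torch.randn(2, 3, 64, 64)
seg_gt = torch.randint(0, 13, (2, 64, 64))                 # class index per pixel
depth_gt = torch.rand(2, 1, 64, 64)                        # toy depth targets
normal_gt = F.normalize(torch.randn(2, 3, 64, 64), dim=1)  # unit normals

seg_out, depth_out, normal_out = model(images)
losses = [
    F.cross_entropy(seg_out, seg_gt),                      # segmentation
    F.l1_loss(depth_out, depth_gt),                        # depth
    1 - F.cosine_similarity(normal_out, normal_gt, dim=1).mean(),  # normals
]
total = uw(losses)
opt.zero_grad()
total.backward()
opt.step()
```

Because UW learns one log-variance per task, tasks with noisier losses are automatically down-weighted during training; the only extra parameters are the scalars in `log_vars`.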
Related papers
- M3: A Multi-Task Mixed-Objective Learning Framework for Open-Domain Multi-Hop Dense Sentence Retrieval [12.277521531556852]
M3 is a Multi-hop dense sentence retrieval system built upon a novel Multi-task Mixed-objective approach for dense text representation learning.
Our approach yields state-of-the-art performance on a large-scale open-domain fact verification benchmark dataset, FEVER.
arXiv Detail & Related papers (2024-03-21T01:52:07Z)
- Distribution Matching for Multi-Task Learning of Classification Tasks: a Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework in which multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful on classification tasks with little, or even non-overlapping, annotation.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z)
- MOCA: Self-supervised Representation Learning by Predicting Masked Online Codebook Assignments [72.6405488990753]
Self-supervised learning can be used to mitigate Vision Transformer networks' greedy need for large annotated datasets.
We propose a single-stage and standalone method, MOCA, which unifies both desired properties.
We achieve new state-of-the-art results on low-shot settings and strong experimental results in various evaluation protocols.
arXiv Detail & Related papers (2023-07-18T15:46:20Z)
- Multi-task learning from fixed-wing UAV images for 2D/3D city modeling [0.0]
Multi-task learning is an approach to scene understanding that involves multiple related tasks, each with potentially limited training data.
In urban management applications such as infrastructure development, traffic monitoring, smart 3D cities, and change detection, automated multi-task data analysis is required.
In this study, a common framework for the performance assessment of multi-task learning methods from fixed-wing UAV images for 2D/3D city modeling is presented.
arXiv Detail & Related papers (2021-08-25T14:45:42Z)
- Large-scale Unsupervised Semantic Segmentation [163.3568726730319]
We propose a new problem of large-scale unsupervised semantic segmentation (LUSS) with a newly created benchmark dataset to track the research progress.
Based on the ImageNet dataset, we propose the ImageNet-S dataset with 1.2 million training images and 40k high-quality semantic segmentation annotations for evaluation.
arXiv Detail & Related papers (2021-06-06T15:02:11Z)
- Three Ways to Improve Semantic Segmentation with Self-Supervised Depth Estimation [90.87105131054419]
We present a framework for semi-supervised semantic segmentation, which is enhanced by self-supervised monocular depth estimation from unlabeled image sequences.
We validate the proposed model on the Cityscapes dataset, where all three modules demonstrate significant performance gains.
arXiv Detail & Related papers (2020-12-19T21:18:03Z)
- A Survey on Deep Learning Methods for Semantic Image Segmentation in Real-Time [0.0]
In many areas, such as robotics and autonomous vehicles, semantic image segmentation is crucial.
The success of medical diagnosis and treatment relies on the extremely accurate understanding of the data under consideration.
Recent developments in deep learning have provided a host of tools to tackle this problem efficiently and with increased accuracy.
arXiv Detail & Related papers (2020-09-27T20:30:10Z)
- Multi-Task Learning with Deep Neural Networks: A Survey [0.0]
Multi-task learning (MTL) is a subfield of machine learning in which multiple tasks are simultaneously learned by a shared model.
We give an overview of multi-task learning methods for deep neural networks, with the aim of summarizing both the well-established and most recent directions within the field.
arXiv Detail & Related papers (2020-09-10T19:31:04Z)
- Multi-Task Learning for Dense Prediction Tasks: A Survey [87.66280582034838]
Multi-task learning (MTL) techniques have shown promising results w.r.t. performance, computations and/or memory footprint.
We provide a well-rounded view on state-of-the-art deep learning approaches for MTL in computer vision.
arXiv Detail & Related papers (2020-04-28T09:15:50Z)
- Pre-training Text Representations as Meta Learning [113.3361289756749]
We introduce a learning algorithm that directly optimizes a model's ability to learn text representations for effective learning of downstream tasks.
We show that there is an intrinsic connection between multi-task pre-training and model-agnostic meta-learning with a sequence of meta-train steps.
arXiv Detail & Related papers (2020-04-12T09:05:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.