Iterative Loop Learning Combining Self-Training and Active Learning for Domain Adaptive Semantic Segmentation
- URL: http://arxiv.org/abs/2301.13361v1
- Date: Tue, 31 Jan 2023 01:31:43 GMT
- Title: Iterative Loop Learning Combining Self-Training and Active Learning for Domain Adaptive Semantic Segmentation
- Authors: Licong Guan, Xue Yuan
- Abstract summary: Self-training and active learning have been proposed to alleviate the high annotation cost of semantic segmentation. This paper proposes an iterative loop learning method combining Self-Training and Active Learning (STAL) for domain adaptive semantic segmentation.
- Score: 1.827510863075184
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, self-training and active learning have been proposed to alleviate
the high annotation cost of domain adaptive semantic segmentation. Self-training
can improve model accuracy by exploiting massive unlabeled data, but with limited
or imbalanced training data it produces noisy pseudo labels, and without human
guidance the resulting models remain suboptimal. Active learning can select the
most useful data for human intervention, but it leaves the massive unlabeled data
unexploited, so model accuracy does not improve from them; moreover, when the
domain gap is large, the probability of querying suboptimal samples rises,
increasing annotation cost. This paper proposes an iterative loop learning method
combining Self-Training and Active Learning (STAL) for domain adaptive semantic
segmentation. The method first applies self-training to massive unlabeled data to
improve model accuracy and to provide a more accurate selection model for active
learning. Second, guided by the sample selection strategy of active learning,
manual annotation is used to correct the self-training. The two stages alternate
in a loop to reach the best performance at minimal labeling cost. Extensive
experiments show that our method establishes state-of-the-art performance on the
GTAV-to-Cityscapes and SYNTHIA-to-Cityscapes tasks, improving over the previous
best method by 4.9% mIoU and 5.2% mIoU, respectively. Code will be available.
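As a concrete illustration of the loop the abstract describes, here is a minimal, fully runnable toy in scikit-learn: self-training adds confident pseudo-labels, active learning then queries the most uncertain samples for (simulated) human labels, and the two steps alternate. This sketches the control flow only; the paper's actual method trains a segmentation network with its own selection strategy.

```python
# Toy STAL-style loop on synthetic data: self-training on confident
# pseudo-labels, then an active-learning query of the most uncertain
# samples, alternating for several rounds. Illustration only; not the
# authors' implementation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
labeled = rng.choice(len(X), size=20, replace=False)   # tiny seed label set
pool = np.setdiff1d(np.arange(len(X)), labeled)        # unlabeled pool
model = LogisticRegression(max_iter=1000)

for _ in range(5):
    # 1) Self-training first, so the selection model used by active
    #    learning below is as accurate as possible.
    model.fit(X[labeled], y[labeled])
    conf = model.predict_proba(X[pool]).max(axis=1)
    confident = pool[conf > 0.95]                      # confident pseudo-labels
    if len(confident):
        model.fit(np.concatenate([X[labeled], X[confident]]),
                  np.concatenate([y[labeled], model.predict(X[confident])]))

    # 2) Active learning: query the highest-entropy pool samples and reveal
    #    their true labels, emulating the human correction step.
    proba = model.predict_proba(X[pool])
    entropy = -(proba * np.log(proba + 1e-12)).sum(axis=1)
    queried = pool[np.argsort(entropy)[-10:]]
    labeled = np.concatenate([labeled, queried])
    pool = np.setdiff1d(pool, queried)

print("accuracy after 5 rounds:", round(model.score(X, y), 3))
```

The design point mirrored from the abstract is the ordering: self-training runs first in each round so that the model doing the selecting has already absorbed the unlabeled data.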
Related papers
- Self-Training for Sample-Efficient Active Learning for Text Classification with Pre-Trained Language Models [3.546617486894182]
We introduce HAST, a new and effective self-training strategy, which is evaluated on four text classification benchmarks.
Results show that it outperforms the reproduced self-training approaches and reaches classification results comparable to previous experiments for three out of four datasets.
arXiv Detail & Related papers (2024-06-13T15:06:11Z)
- Incremental Self-training for Semi-supervised Learning [56.57057576885672]
IST is simple yet effective and fits existing self-training-based semi-supervised learning methods.
We verify the proposed IST on five datasets and two types of backbone, effectively improving the recognition accuracy and learning speed.
arXiv Detail & Related papers (2024-04-14T05:02:00Z)
- Zero-shot Active Learning Using Self Supervised Learning [11.28415437676582]
We propose a new Active Learning approach that is model agnostic and does not require an iterative process.
We aim to leverage self-supervised learnt features for the task of Active Learning.
arXiv Detail & Related papers (2024-01-03T11:49:07Z)
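The entry above does not say how the self-supervised features drive selection; one common, model-agnostic, one-shot rule consistent with the summary is diversity sampling over precomputed embeddings, sketched here purely as an assumption (greedy farthest-point / k-center selection).

```python
# Hedged sketch: one-shot, model-agnostic selection over precomputed
# self-supervised embeddings via farthest-point sampling. The paper's
# actual selection rule may differ.
import numpy as np

def farthest_point_selection(embeddings: np.ndarray, k: int) -> list[int]:
    """Greedy k-center selection on embedding distances."""
    selected = [0]  # arbitrary seed point
    dist = np.linalg.norm(embeddings - embeddings[0], axis=1)
    for _ in range(k - 1):
        nxt = int(dist.argmax())  # farthest from everything chosen so far
        selected.append(nxt)
        dist = np.minimum(dist,
                          np.linalg.norm(embeddings - embeddings[nxt], axis=1))
    return selected

# Usage: embeddings from any self-supervised encoder; random stand-in here.
emb = np.random.rand(1000, 128)
to_label = farthest_point_selection(emb, k=50)
```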
- Efficient Grammatical Error Correction Via Multi-Task Training and Optimized Training Schedule [55.08778142798106]
We propose auxiliary tasks that exploit the alignment between the original and corrected sentences.
We formulate each task as a sequence-to-sequence problem and perform multi-task training.
We find that the order of datasets used for training and even individual instances within a dataset may have important effects on the final performance.
arXiv Detail & Related papers (2023-11-20T14:50:12Z)
- BaSAL: Size-Balanced Warm Start Active Learning for LiDAR Semantic Segmentation [2.9290232815049926]
Existing active learning methods overlook the severe class imbalance inherent in LiDAR semantic segmentation datasets.
We propose BaSAL, a size-balanced warm start active learning model, based on the observation that each object class has a characteristic size.
Results show that we are able to improve the performance of the initial model by a large margin.
arXiv Detail & Related papers (2023-10-12T05:03:19Z)
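A hedged sketch of the size-balancing idea the BaSAL entry names: bucket candidate objects by physical size (a proxy for class, per the entry's observation) and draw the warm-start labeled set evenly across buckets, so small, rare classes are not swamped. The binning scheme and counts are illustrative assumptions, not BaSAL's algorithm.

```python
# Hedged sketch: size-balanced warm-start selection. Quantile bins over
# object sizes stand in for whatever partitioning the paper actually uses.
import numpy as np

def size_balanced_warm_start(object_sizes: np.ndarray, k: int, n_bins: int = 5):
    edges = np.quantile(object_sizes, np.linspace(0, 1, n_bins + 1))
    bin_ids = np.digitize(object_sizes, edges[1:-1])   # 0 .. n_bins-1
    per_bin = k // n_bins
    rng = np.random.default_rng(0)
    chosen = []
    for b in range(n_bins):
        members = np.flatnonzero(bin_ids == b)
        chosen.extend(rng.choice(members, size=min(per_bin, len(members)),
                                 replace=False))
    return np.array(chosen)

sizes = np.random.lognormal(mean=1.0, sigma=1.0, size=10_000)  # toy sizes
seed_set = size_balanced_warm_start(sizes, k=100)
```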
- ASPEST: Bridging the Gap Between Active Learning and Selective Prediction [56.001808843574395]
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain.
Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples.
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain.
arXiv Detail & Related papers (2023-04-07T23:51:07Z)
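A hedged sketch of the "active selective prediction" idea in the ASPEST entry: predict when confident, abstain otherwise, and spend the labeling budget on the abstained target-domain points. The confidence-threshold rule below is an illustrative assumption, not ASPEST's algorithm.

```python
# Hedged sketch: selective prediction via a confidence threshold, with the
# abstained samples doubling as the active-learning query set.
import numpy as np

def selective_predict(proba: np.ndarray, tau: float = 0.8):
    """Predict where max probability >= tau; mark -1 (abstain) elsewhere."""
    conf = proba.max(axis=1)
    preds = np.where(conf >= tau, proba.argmax(axis=1), -1)
    return preds, np.flatnonzero(preds == -1)  # abstained -> query these

proba = np.random.dirichlet(np.ones(3), size=100)  # stand-in target outputs
preds, to_query = selective_predict(proba)
print(f"coverage = {np.mean(preds != -1):.2f}, queries = {len(to_query)}")
```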
- Active Learning with Combinatorial Coverage [0.0]
Active learning is a practical field of machine learning that automates the process of selecting which data to label.
Current methods are effective at reducing the burden of data labeling but are heavily model-reliant: sampled data often cannot be transferred to new models, and sampling bias arises.
We propose active learning methods utilizing coverage to overcome these issues.
arXiv Detail & Related papers (2023-02-28T13:43:23Z) - A Lagrangian Duality Approach to Active Learning [119.36233726867992]
We consider the batch active learning problem, where only a subset of the training data is labeled.
We formulate the learning problem using constrained optimization, where each constraint bounds the performance of the model on labeled samples.
We show, via numerical experiments, that our proposed approach performs similarly to or better than state-of-the-art active learning methods.
arXiv Detail & Related papers (2022-02-08T19:18:49Z)
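Read literally, the formulation in the entry above takes roughly the following shape; this is a hedged reconstruction from the summary alone, and the paper's exact objective and constraints may differ.

```latex
% Hedged reconstruction: learn parameters \theta subject to per-labeled-sample
% performance constraints, then relax the constraints via Lagrangian duality.
\begin{aligned}
\min_{\theta}\; & J(\theta) && \text{(training objective)}\\
\text{s.t.}\;   & \ell\bigl(f_\theta(x_i), y_i\bigr) \le \epsilon_i,
                && i = 1,\dots,n \quad \text{(labeled samples)}\\[4pt]
\mathcal{L}(\theta, \lambda) \;=\; & J(\theta)
  + \sum_{i=1}^{n} \lambda_i \bigl(\ell(f_\theta(x_i), y_i) - \epsilon_i\bigr),
  && \lambda_i \ge 0.
\end{aligned}
```

The summary does not say how selection uses this; one natural guess is that large multipliers $\lambda_i$ flag hard-to-satisfy constraints, i.e. regions where additional labels would help most.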
- Minority Class Oriented Active Learning for Imbalanced Datasets [6.009262446889319]
We introduce a new active learning method designed for imbalanced datasets.
It favors samples likely to be in minority classes so as to reduce the imbalance of the labeled subset.
We also compare two training schemes for active learning.
arXiv Detail & Related papers (2022-02-01T13:13:41Z)
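A hedged sketch of the selection rule described in the entry above: score each unlabeled sample by its predicted probability of belonging to the currently under-represented classes, and query the highest scorers. The exact scoring in the paper may differ.

```python
# Hedged sketch: favor pool samples whose predicted class distribution puts
# mass on classes under-represented in the labeled set.
import numpy as np

def minority_oriented_query(proba: np.ndarray, labeled_y: np.ndarray, k: int):
    n_classes = proba.shape[1]
    counts = np.bincount(labeled_y, minlength=n_classes).astype(float)
    weights = counts.sum() / (counts + 1.0)   # inverse to labeled-set share
    scores = proba @ weights                  # expected "minority mass"
    return np.argsort(scores)[-k:]            # top-k most minority-like

proba = np.random.dirichlet(np.ones(4), size=500)  # stand-in pool outputs
labeled_y = np.random.choice(4, size=100, p=[0.7, 0.2, 0.05, 0.05])
queries = minority_oriented_query(proba, labeled_y, k=20)
```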
- Uncertainty-aware Self-training for Text Classification with Few Labels [54.13279574908808]
We study self-training as one of the earliest semi-supervised learning approaches to reduce the annotation bottleneck.
We propose an approach to improve self-training by incorporating uncertainty estimates of the underlying neural network.
We show that our methods, using only 20-30 labeled samples per class per task for training and validation, perform within 3% of fully supervised pre-trained language models.
arXiv Detail & Related papers (2020-06-27T08:13:58Z)
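The entry above says uncertainty estimates of the network gate the self-training; one common estimator for this, assumed here (the paper may use another), is Monte-Carlo dropout. A minimal PyTorch sketch, where `model` is any classifier containing dropout layers:

```python
# Hedged sketch: keep only low-variance pseudo-labels, with predictive
# uncertainty estimated by Monte-Carlo dropout (multiple stochastic passes).
import torch

@torch.no_grad()
def mc_dropout_select(model: torch.nn.Module, x: torch.Tensor,
                      passes: int = 10, max_std: float = 0.05):
    model.train()  # keep dropout active at inference time
    probs = torch.stack([model(x).softmax(dim=-1) for _ in range(passes)])
    mean, std = probs.mean(dim=0), probs.std(dim=0)
    conf, pseudo = mean.max(dim=-1)
    # Accept a pseudo-label only if its class probability is stable
    # across the stochastic passes.
    keep = std.gather(-1, pseudo.unsqueeze(-1)).squeeze(-1) < max_std
    return pseudo[keep], keep
```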
- Improving Semantic Segmentation via Self-Training [75.07114899941095]
We show that we can obtain state-of-the-art results using a semi-supervised approach, specifically a self-training paradigm.
We first train a teacher model on labeled data, and then generate pseudo labels on a large set of unlabeled data.
Our robust training framework can digest human-annotated and pseudo labels jointly and achieve top performances on Cityscapes, CamVid and KITTI datasets.
arXiv Detail & Related papers (2020-04-30T17:09:17Z)
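The last entry spells out its recipe clearly enough to sketch: train a teacher on human labels, pseudo-label the unlabeled pool, then train a student on both jointly. A minimal PyTorch sketch under those assumptions; the model and loader names are placeholders, not the authors' code.

```python
# Hedged sketch of the teacher-student self-training recipe: any
# segmentation network would slot in for `teacher` and `student`.
import torch

def self_train_segmentation(teacher, student, opt, labeled_loader,
                            unlabeled_loader, epochs: int = 1):
    criterion = torch.nn.CrossEntropyLoss(ignore_index=255)

    # 1) Pseudo-label the unlabeled pool with the (frozen) teacher.
    teacher.eval()
    pseudo = []
    with torch.no_grad():
        for imgs in unlabeled_loader:
            pseudo.append((imgs, teacher(imgs).argmax(dim=1)))

    # 2) Train the student on human labels and pseudo labels jointly.
    student.train()
    for _ in range(epochs):
        for imgs, masks in list(labeled_loader) + pseudo:
            opt.zero_grad()
            loss = criterion(student(imgs), masks)
            loss.backward()
            opt.step()
    return student
```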