Augmentation Matters: A Simple-yet-Effective Approach to Semi-supervised
Semantic Segmentation
- URL: http://arxiv.org/abs/2212.04976v1
- Date: Fri, 9 Dec 2022 16:36:52 GMT
- Title: Augmentation Matters: A Simple-yet-Effective Approach to Semi-supervised
Semantic Segmentation
- Authors: Zhen Zhao, Lihe Yang, Sifan Long, Jimin Pi, Luping Zhou, Jingdong Wang
- Abstract summary: We propose a simple and clean approach that focuses mainly on data perturbations to boost the SSS performance.
We adopt a simplified intensity-based augmentation that selects a random number of data transformations.
We also randomly inject labelled information to augment the unlabeled samples in an adaptive manner.
- Score: 46.441263436298996
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent studies on semi-supervised semantic segmentation (SSS) have seen fast
progress. Despite their promising performance, current state-of-the-art methods
tend toward increasingly complex designs, at the cost of introducing more network
components and additional training procedures. In contrast, in this work, we
follow a standard teacher-student framework and propose AugSeg, a simple and
clean approach that focuses mainly on data perturbations to boost the SSS
performance. We argue that various data augmentations should be adjusted to
better adapt to the semi-supervised scenarios instead of directly applying
these techniques from supervised learning. Specifically, we adopt a simplified
intensity-based augmentation that selects a random number of data
transformations, with distortion strengths sampled uniformly from a continuous
space. Based on the estimated confidence of the model on different unlabeled
samples, we also randomly inject labelled information to augment the unlabeled
samples in an adaptive manner. Without bells and whistles, our simple AugSeg
can readily achieve new state-of-the-art performance on SSS benchmarks under
different partition protocols.
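The two data-perturbation ideas described in the abstract can be sketched roughly as follows. This is an illustrative Python sketch, not AugSeg's exact configuration: the transform pool, strength range, and confidence threshold are all hypothetical, and images are flattened lists of floats in [0, 1] for simplicity.

```python
import random

# Two illustrative intensity transforms; the real AugSeg pool and
# strength ranges differ.
def adjust_brightness(img, strength):
    # Scale pixel values, clamping back into [0, 1].
    return [min(1.0, max(0.0, p * (1.0 + strength))) for p in img]

def adjust_contrast(img, strength):
    # Stretch or compress pixels around the image mean, clamped to [0, 1].
    mean = sum(img) / len(img)
    return [min(1.0, max(0.0, mean + (p - mean) * (1.0 + strength))) for p in img]

TRANSFORMS = [adjust_brightness, adjust_contrast]

def random_intensity_aug(img, max_ops=2):
    """Apply a *random number* of transforms (0..max_ops), each with a
    distortion strength sampled uniformly from a continuous range."""
    k = random.randint(0, max_ops)
    for op in random.sample(TRANSFORMS, k):
        strength = random.uniform(-0.5, 0.5)  # continuous strength space
        img = op(img, strength)
    return img

def adaptive_label_inject(unlabeled_img, confidence, labeled_img, tau=0.8):
    """Paste a region from a labeled image into a *low-confidence* unlabeled
    sample; confident samples are left unchanged. A simplified stand-in for
    AugSeg's confidence-adaptive injection of labeled information."""
    if confidence >= tau:
        return unlabeled_img
    half = len(unlabeled_img) // 2
    return labeled_img[:half] + unlabeled_img[half:]
```

The key points the sketch captures are that the number of applied transforms is itself random, strengths come from a continuous rather than discrete space, and labeled content is injected only where the model is uncertain.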
Related papers
- Task Consistent Prototype Learning for Incremental Few-shot Semantic Segmentation [20.49085411104439]
Incremental Few-Shot Semantic Segmentation (iFSS) tackles a task that requires a model to continually expand its segmentation capability to novel classes.
This study introduces a meta-learning-based prototype approach that encourages the model to learn how to adapt quickly while preserving previous knowledge.
Experiments on iFSS datasets built upon the PASCAL and COCO benchmarks demonstrate the strong performance of the proposed approach.
arXiv Detail & Related papers (2024-10-16T23:42:27Z)
- T-JEPA: Augmentation-Free Self-Supervised Learning for Tabular Data [0.0]
Self-supervised learning (SSL) generally involves generating different views of the same sample and thus requires data augmentations.
In the present work, we propose a novel augmentation-free SSL method for structured data.
Our approach, T-JEPA, relies on a Joint Embedding Predictive Architecture (JEPA) and is akin to mask reconstruction in the latent space.
arXiv Detail & Related papers (2024-10-07T13:15:07Z)
- Take the Bull by the Horns: Hard Sample-Reweighted Continual Training Improves LLM Generalization [165.98557106089777]
A key challenge is to enhance the capabilities of large language models (LLMs) amid a looming shortage of high-quality training data.
Our study starts from an empirical strategy for the light continual training of LLMs using their original pre-training data sets.
We then formalize this strategy into a principled framework of Instance-Reweighted Distributionally Robust Optimization.
arXiv Detail & Related papers (2024-02-22T04:10:57Z)
- Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting the data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z)
- Implicit Counterfactual Data Augmentation for Robust Learning [24.795542869249154]
This study proposes an Implicit Counterfactual Data Augmentation method to remove spurious correlations and make stable predictions.
Experiments have been conducted across various biased learning scenarios covering both image and text datasets.
arXiv Detail & Related papers (2023-04-26T10:36:40Z)
- Revisiting Consistency Regularization for Semi-Supervised Learning [80.28461584135967]
We propose an improved consistency regularization framework by a simple yet effective technique, FeatDistLoss.
Experimental results show that our model defines a new state of the art for various datasets and settings.
arXiv Detail & Related papers (2021-12-10T20:46:13Z)
- A Simple Baseline for Semi-supervised Semantic Segmentation with Strong Data Augmentation [74.8791451327354]
We propose a simple yet effective semi-supervised learning framework for semantic segmentation.
A set of simple design and training techniques can collectively improve the performance of semi-supervised semantic segmentation significantly.
Our method achieves state-of-the-art results in the semi-supervised settings on the Cityscapes and Pascal VOC datasets.
arXiv Detail & Related papers (2021-04-15T06:01:39Z)
- Squared $\ell_2$ Norm as Consistency Loss for Leveraging Augmented Data to Learn Robust and Invariant Representations [76.85274970052762]
Regularizing distance between embeddings/representations of original samples and augmented counterparts is a popular technique for improving robustness of neural networks.
In this paper, we explore these various regularization choices, seeking to provide a general understanding of how we should regularize the embeddings.
We show that the generic approach we identified (squared $\ell_2$ regularized augmentation) outperforms several recent methods, which are each specially designed for one task.
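The consistency objective in the title, the squared $\ell_2$ distance between the embedding of an original sample and that of its augmented counterpart, can be sketched in plain Python (embeddings are lists of floats; the function name is illustrative):

```python
def squared_l2_consistency(z_orig, z_aug):
    """Squared Euclidean distance between two embedding vectors: the
    embedding of an original sample and that of its augmented version.
    Minimizing this pulls the two representations together."""
    return sum((a - b) ** 2 for a, b in zip(z_orig, z_aug))
```

In training, this term would be added to the task loss, penalizing embeddings that shift under augmentation.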
arXiv Detail & Related papers (2020-11-25T22:40:09Z)
- ClassMix: Segmentation-Based Data Augmentation for Semi-Supervised Learning [4.205692673448206]
We propose a novel data augmentation mechanism called ClassMix, which generates augmentations by mixing unlabelled samples.
We evaluate this augmentation technique on two common semi-supervised semantic segmentation benchmarks, showing that it attains state-of-the-art results.
arXiv Detail & Related papers (2020-07-15T18:21:17Z)
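The ClassMix mechanism, mixing two unlabelled samples by pasting the pixels of half the predicted classes from one image onto another, can be sketched roughly as follows. This is a pure-Python simplification: per-pixel lists stand in for images and predicted label maps, and the function name and signature are illustrative.

```python
import random

def classmix(img_a, pred_a, img_b, pred_b, seed=None):
    """Pick half of the classes predicted in image A and paste those pixels
    (and their pseudo-labels) onto image B, yielding a mixed sample and a
    mixed pseudo-label map for consistency training."""
    rng = random.Random(seed)
    classes = sorted(set(pred_a))
    chosen = set(rng.sample(classes, len(classes) // 2))
    mixed_img = [pa if la in chosen else pb
                 for pa, la, pb in zip(img_a, pred_a, img_b)]
    mixed_lbl = [la if la in chosen else lb
                 for la, lb in zip(pred_a, pred_b)]
    return mixed_img, mixed_lbl
```

Because the paste mask follows predicted object boundaries rather than rectangles, the mixed images respect semantic structure better than CutMix-style box pasting.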
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.