PRCL: Probabilistic Representation Contrastive Learning for
Semi-Supervised Semantic Segmentation
- URL: http://arxiv.org/abs/2402.18117v1
- Date: Wed, 28 Feb 2024 07:10:37 GMT
- Title: PRCL: Probabilistic Representation Contrastive Learning for
Semi-Supervised Semantic Segmentation
- Authors: Haoyu Xie, Changqi Wang, Jian Zhao, Yang Liu, Jun Dan, Chong Fu,
Baigui Sun
- Abstract summary: We propose a robust contrastive-based S4 framework, termed the Probabilistic Representation Contrastive Learning (PRCL) framework, to enhance the robustness of the unsupervised training process.
- Score: 15.869077228828303
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tremendous breakthroughs have been achieved in Semi-Supervised Semantic
Segmentation (S4) through contrastive learning. However, due to limited
annotations, the guidance on unlabeled images is generated by the model itself,
and therefore inevitably contains noise that disturbs the unsupervised training
process. To address this issue, we propose a robust contrastive-based S4
framework, termed the Probabilistic Representation Contrastive Learning (PRCL)
framework, to enhance the robustness of the unsupervised training process. We
model each pixel-wise representation as a Probabilistic Representation (PR) via
a multivariate Gaussian distribution and tune the contribution of ambiguous
representations to tolerate the risk of inaccurate guidance in contrastive
learning. Furthermore, we introduce Global Distribution Prototypes (GDP) by
gathering all PRs throughout the whole training process. Since a GDP aggregates
the information of all representations of the same class, it is robust to
transient noise in individual representations and accommodates the intra-class
variance of representations. In addition, we generate Virtual Negatives (VNs)
based on GDP to enrich the contrastive learning process. Extensive experiments
on two public benchmarks demonstrate the superiority of our PRCL framework.
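The idea of down-weighting ambiguous representations falls out naturally when each representation is a Gaussian rather than a point: a similarity that accounts for variance shrinks for uncertain pixels. The paper's exact loss is not reproduced here; the following is a minimal sketch, assuming a mutual-likelihood-style similarity between diagonal Gaussians plugged into an InfoNCE-style objective (the function names and the temperature parameter are illustrative, not the authors' API).

```python
import numpy as np

def mls_similarity(mu1, var1, mu2, var2):
    """Mutual-likelihood-style similarity between two diagonal Gaussians.

    It is highest when the means are close AND both variances are small,
    so ambiguous (high-variance) representations contribute less to the
    contrastive signal.
    """
    v = var1 + var2
    return -0.5 * np.sum((mu1 - mu2) ** 2 / v + np.log(v))

def prob_contrastive_loss(mu, var, pos_mu, pos_var, neg_mus, neg_vars, temp=1.0):
    """InfoNCE-style loss where each representation is a Gaussian (mu, var)."""
    pos = mls_similarity(mu, var, pos_mu, pos_var) / temp
    negs = [mls_similarity(mu, var, nm, nv) / temp
            for nm, nv in zip(neg_mus, neg_vars)]
    logits = np.array([pos] + negs)
    logits -= logits.max()  # subtract max for numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

# Toy example: an anchor, one close positive, one distant negative.
mu, var = np.zeros(4), np.ones(4)
pos_mu, pos_var = np.full(4, 0.1), np.ones(4)
neg_mus, neg_vars = [np.full(4, 2.0)], [np.ones(4)]
loss = prob_contrastive_loss(mu, var, pos_mu, pos_var, neg_mus, neg_vars)
```

Because the log-variance term penalizes uncertainty, inflating the variance of a noisy anchor flattens its logits, which is one way to "tune the contribution of the ambiguous representations" as the abstract describes.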
Related papers
- On Discriminative Probabilistic Modeling for Self-Supervised Representation Learning [85.75164588939185]
We study the discriminative probabilistic modeling problem on a continuous domain for (multimodal) self-supervised representation learning.
We conduct generalization error analysis to reveal the limitation of current InfoNCE-based contrastive loss for self-supervised representation learning.
arXiv Detail & Related papers (2024-10-11T18:02:46Z)
- Unsupervised Representation Learning by Balanced Self Attention Matching [2.3020018305241337]
We present a self-supervised method for embedding image features called BAM.
We obtain rich representations and avoid feature collapse by minimizing a loss that matches these distributions to their globally balanced and entropy regularized version.
We show competitive performance with leading methods on both semi-supervised and transfer-learning benchmarks.
arXiv Detail & Related papers (2024-08-04T12:52:44Z)
- The Common Stability Mechanism behind most Self-Supervised Learning Approaches [64.40701218561921]
We provide a framework to explain the stability mechanism of different self-supervised learning techniques.
We discuss the working mechanism of contrastive techniques like SimCLR, non-contrastive techniques like BYOL, SWAV, SimSiam, Barlow Twins, and DINO.
We formulate different hypotheses and test them using the Imagenet100 dataset.
arXiv Detail & Related papers (2024-02-22T20:36:24Z)
- A Distributional Analogue to the Successor Representation [54.99439648059807]
This paper contributes a new approach for distributional reinforcement learning.
It elucidates a clean separation of transition structure and reward in the learning process.
As an illustration, we show that it enables zero-shot risk-sensitive policy evaluation.
arXiv Detail & Related papers (2024-02-13T15:35:24Z)
- Class Distribution Shifts in Zero-Shot Learning: Learning Robust Representations [3.8980564330208662]
We propose a model that assumes the attribute responsible for the shift is unknown in advance.
We show that our approach improves generalization on diverse class distributions in both simulations and real-world datasets.
arXiv Detail & Related papers (2023-11-30T14:14:31Z)
- Boosting Semi-Supervised Semantic Segmentation with Probabilistic Representations [30.672426195148496]
We propose a Probabilistic Representation Contrastive Learning framework to improve representation quality.
We define pixel-wise representations from a new perspective of probability theory.
We also propose to regularize the distribution variance to enhance the reliability of representations.
arXiv Detail & Related papers (2022-10-26T12:47:29Z)
- Dense Contrastive Visual-Linguistic Pretraining [53.61233531733243]
Several multimodal representation learning approaches have been proposed that jointly represent image and text.
These approaches achieve superior performance by capturing high-level semantic information from large-scale multimodal pretraining.
We propose unbiased Dense Contrastive Visual-Linguistic Pretraining to replace the region regression and classification with cross-modality region contrastive learning.
arXiv Detail & Related papers (2021-09-24T07:20:13Z)
- Contrastive Learning for Fair Representations [50.95604482330149]
Trained classification models can unintentionally lead to biased representations and predictions.
Existing debiasing methods for classification models, such as adversarial training, are often expensive to train and difficult to optimise.
We propose a method for mitigating bias by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations.
arXiv Detail & Related papers (2021-09-22T10:47:51Z)
- Generalized Zero-Shot Learning Via Over-Complete Distribution [79.5140590952889]
We propose to generate an Over-Complete Distribution (OCD) using Conditional Variational Autoencoder (CVAE) of both seen and unseen classes.
The effectiveness of the framework is evaluated using both Zero-Shot Learning and Generalized Zero-Shot Learning protocols.
arXiv Detail & Related papers (2020-04-01T19:05:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.