Contrastive pretraining for semantic segmentation is robust to noisy
positive pairs
- URL: http://arxiv.org/abs/2211.13756v1
- Date: Thu, 24 Nov 2022 18:59:01 GMT
- Title: Contrastive pretraining for semantic segmentation is robust to noisy
positive pairs
- Authors: Sebastian Gerard (KTH Royal Institute of Technology, Stockholm,
Sweden), Josephine Sullivan (KTH Royal Institute of Technology, Stockholm,
Sweden)
- Abstract summary: Domain-specific variants of contrastive learning can construct positive pairs from two distinct images.
We find that downstream semantic segmentation is either robust to the noisy pairs or even benefits from them.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Domain-specific variants of contrastive learning can construct positive pairs
from two distinct images, as opposed to augmenting the same image twice. Unlike
in traditional contrastive methods, this can result in positive pairs not
matching perfectly. Similar to false negative pairs, this could impede model
performance. Surprisingly, we find that downstream semantic segmentation is
either robust to the noisy pairs or even benefits from them. The experiments
are conducted on the remote sensing dataset xBD, and a synthetic segmentation
dataset, on which we have full control over the noise parameters. As a result,
practitioners should be able to use such domain-specific contrastive methods
without having to filter their positive pairs beforehand.
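To make the setup concrete, the following is a minimal NumPy sketch of an InfoNCE-style objective in which each positive pair is built from two distinct images of the same location (for example, two acquisitions of the same site, as in xBD), so positives may not match perfectly. Variable names and the batch construction are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: InfoNCE where each positive pair consists of two *distinct*
# images of the same location, rather than two augmentations of one image.
# Illustrative only; names and shapes are assumptions, not the paper's code.
import numpy as np

def info_nce(z_a: np.ndarray, z_b: np.ndarray, temperature: float = 0.1) -> float:
    """z_a[i], z_b[i]: embeddings of a (possibly noisy) positive pair; all
    other rows of z_b serve as negatives. Shapes: (batch, dim)."""
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)  # cosine similarity
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature            # (batch, batch)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))     # positives on the diagonal

rng = np.random.default_rng(0)
z_pre, z_post = rng.normal(size=(8, 128)), rng.normal(size=(8, 128))
print(info_nce(z_pre, z_post))
```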
Related papers
- Rethinking Positive Pairs in Contrastive Learning [19.149235307036324]
We present Hydra, a universal contrastive learning framework for visual representations that extends conventional contrastive learning to accommodate arbitrary pairs.
Our approach is validated using IN1K, where 1K diverse classes compose 500,500 pairs, most of them being distinct.
Our work highlights the value of learning common features of arbitrary pairs and potentially broadens the applicability of contrastive learning techniques to sample pairs with weak relationships.
arXiv Detail & Related papers (2024-10-23T18:07:18Z)
- Semantic-aware Contrastive Learning for More Accurate Semantic Parsing [32.74456368167872]
We propose a semantic-aware contrastive learning algorithm, which can learn to distinguish fine-grained meaning representations.
Experiments on two standard datasets show that our approach achieves significant improvements over MLE baselines.
arXiv Detail & Related papers (2023-01-19T07:04:32Z)
- Learning by Sorting: Self-supervised Learning with Group Ordering Constraints [75.89238437237445]
This paper proposes a new variation of the contrastive learning objective, Group Ordering Constraints (GroCo).
It exploits the idea of sorting the distances of positive and negative pairs and computing the loss based on how many positive pairs have a larger distance than the negative pairs and are thus not ordered correctly.
We evaluate the proposed formulation on various self-supervised learning benchmarks and show that it not only improves on vanilla contrastive learning but is also competitive with comparable methods in linear probing and outperforms current methods in k-NN performance.
arXiv Detail & Related papers (2023-01-05T11:17:55Z)
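The entry above sorts positive against negative distances; below is a minimal sketch of that idea using a sigmoid relaxation of the ordering-violation count. GroCo itself relies on differentiable sorting networks, so this is an illustrative stand-in, not the paper's formulation.

```python
# Sketch of a group-ordering-style loss: penalise positive pairs whose
# distance exceeds that of negative pairs. GroCo uses differentiable sorting
# networks; here a sigmoid relaxation of the violation count stands in,
# purely for illustration.
import numpy as np

def soft_ordering_violations(pos_dist: np.ndarray, neg_dist: np.ndarray,
                             sharpness: float = 10.0) -> float:
    # violations[i, j] ~ 1 when positive pair i is farther than negative j
    diff = pos_dist[:, None] - neg_dist[None, :]
    violations = 1.0 / (1.0 + np.exp(-sharpness * diff))  # soft step function
    return float(violations.mean())

pos = np.array([0.2, 0.9, 0.4])       # distances of positive pairs
neg = np.array([0.8, 1.1, 0.5, 1.4])  # distances of negative pairs
print(soft_ordering_violations(pos, neg))  # low when positives sort first
```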
- MarginNCE: Robust Sound Localization with a Negative Margin [23.908770938403503]
The goal of this work is to localize sound sources in visual scenes with a self-supervised approach.
We show that using a less strict decision boundary in contrastive learning can alleviate the effect of noisy correspondences in sound source localization.
arXiv Detail & Related papers (2022-11-03T16:44:14Z)
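A hedged sketch of the idea above: adding a negative margin to the positive logit inside an InfoNCE softmax loosens the decision boundary, so slightly-off (noisy) positives are penalised less. Function names and default values are illustrative assumptions.

```python
# Sketch of InfoNCE with a negative margin on the positive logit, in the
# spirit of MarginNCE. Names and defaults are illustrative, not the paper's.
import numpy as np

def margin_nce(pos_sim: np.ndarray, neg_sim: np.ndarray,
               margin: float = -0.2, temperature: float = 0.07) -> float:
    """pos_sim: (batch,) similarity of each positive pair.
    neg_sim: (batch, n_neg) similarities of the negatives."""
    pos_logit = (pos_sim + margin) / temperature   # margin < 0 relaxes the boundary
    neg_logits = neg_sim / temperature
    all_logits = np.concatenate([pos_logit[:, None], neg_logits], axis=1)
    log_prob = pos_logit - np.log(np.exp(all_logits).sum(axis=1))
    return float(-np.mean(log_prob))

rng = np.random.default_rng(0)
print(margin_nce(rng.uniform(0.5, 1.0, 16), rng.uniform(-1.0, 0.5, (16, 63))))
```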
- Non-contrastive representation learning for intervals from well logs [58.70164460091879]
The representation learning problem in the oil & gas industry aims to construct a model that provides a representation of a well interval based on its logging data.
One of the possible approaches is self-supervised learning (SSL).
We are the first to introduce non-contrastive SSL for well-logging data.
arXiv Detail & Related papers (2022-09-28T13:27:10Z)
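The summary above does not name the exact non-contrastive objective, so the following is a generic SimSiam-style sketch (negative cosine similarity with a stop-gradient branch), shown only as a representative example of non-contrastive SSL, not as the paper's method.

```python
# Generic SimSiam-style non-contrastive loss: negative cosine similarity
# between an online prediction and a stop-gradient target. Shown as a
# representative non-contrastive objective, not the paper's definition.
import numpy as np

def simsiam_loss(p: np.ndarray, z: np.ndarray) -> float:
    """p: predictions from the online branch; z: targets from the
    stop-gradient branch. Shapes: (batch, dim). In plain NumPy the
    stop-gradient is implicit, since nothing is backpropagated."""
    p = p / np.linalg.norm(p, axis=1, keepdims=True)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    return float(-np.mean(np.sum(p * z, axis=1)))  # minimise => align views

rng = np.random.default_rng(0)
print(simsiam_loss(rng.normal(size=(4, 32)), rng.normal(size=(4, 32))))
```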
- Contrasting quadratic assignments for set-based representation learning [5.142415132534397]
The standard approach to contrastive learning is to maximize the agreement between different views of the data.
In this work, we note that the approach of considering individual pairs cannot account for both intra-set and inter-set similarities.
We propose to go beyond contrasting individual pairs of objects by focusing on contrasting objects as sets.
arXiv Detail & Related papers (2022-05-31T14:14:36Z) - Modulated Contrast for Versatile Image Synthesis [60.304183493234376]
MoNCE introduces image contrast to learn a calibrated metric for the perception of multifaceted inter-image distances.
We introduce optimal transport in MoNCE to modulate the pushing force of negative samples collaboratively across multiple contrastive objectives.
arXiv Detail & Related papers (2022-03-17T14:03:46Z) - Learning Sound Localization Better From Semantically Similar Samples [79.47083330766002]
Existing audio-visual works employ contrastive learning by assigning corresponding audio-visual pairs from the same source as positives and randomly mismatched pairs as negatives.
Our key contribution is showing that hard positives can give similar response maps to the corresponding pairs.
We demonstrate the effectiveness of our approach on VGG-SS and SoundNet-Flickr test sets.
arXiv Detail & Related papers (2022-02-07T08:53:55Z) - Robust Contrastive Learning against Noisy Views [79.71880076439297]
We propose a new contrastive loss function that is robust against noisy views.
We show that our approach provides consistent improvements over the state of the art on image, video, and graph contrastive learning benchmarks.
arXiv Detail & Related papers (2022-01-12T05:24:29Z)
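As a rough illustration of a noise-robust contrastive loss in the spirit of the entry above: the sketch below interpolates, via an exponent q, between InfoNCE-like behaviour (q near 0) and a form that down-weights suspicious positives (q = 1). The exact functional form and default values are assumptions from a reading of the literature, not the paper's definitive equation.

```python
# Hedged sketch of a noise-robust contrastive objective: small q approaches
# InfoNCE behaviour, q = 1 down-weights hard (possibly mislabelled) positives.
# Treat the exact form and the defaults q, lam as illustrative assumptions.
import numpy as np

def robust_nce_style_loss(pos_sim: np.ndarray, neg_sim: np.ndarray,
                          q: float = 0.5, lam: float = 0.025,
                          temperature: float = 0.1) -> float:
    pos = np.exp(pos_sim / temperature)              # (batch,)
    neg = np.exp(neg_sim / temperature).sum(axis=1)  # (batch,)
    loss = -(pos ** q) / q + (lam * (pos + neg)) ** q / q
    return float(np.mean(loss))

rng = np.random.default_rng(0)
print(robust_nce_style_loss(rng.uniform(0.5, 1.0, 16),
                            rng.uniform(-1.0, 0.5, (16, 63))))
```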
- Provable Guarantees for Self-Supervised Deep Learning with Spectral Contrastive Loss [72.62029620566925]
Recent works in self-supervised learning have advanced the state-of-the-art by relying on the contrastive learning paradigm.
Our work analyzes contrastive learning without assuming conditional independence of positive pairs.
We propose a loss that performs spectral decomposition on the population augmentation graph and can be succinctly written as a contrastive learning objective.
arXiv Detail & Related papers (2021-06-08T07:41:02Z)
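The spectral contrastive loss of the entry above has a compact population form, roughly -2 E[f(x)^T f(x+)] + E[(f(x)^T f(x'))^2], where (x, x+) is a positive pair and x' is an independent sample. Below is a minimal minibatch estimate; using the off-diagonal pairs as independent samples is a common implementation choice, stated here as an assumption.

```python
# Minibatch sketch of the spectral contrastive loss:
#   -2 E[f(x)^T f(x+)] + E[(f(x)^T f(x'))^2]
# Diagonal terms attract positive pairs; off-diagonal terms decorrelate
# (approximately) independent pairs. Names are illustrative.
import numpy as np

def spectral_contrastive_loss(z: np.ndarray, z_pos: np.ndarray) -> float:
    """z, z_pos: (batch, dim) embeddings of anchors and their positives."""
    batch = z.shape[0]
    attract = -2.0 * np.mean(np.sum(z * z_pos, axis=1))
    gram = z @ z_pos.T                          # pairwise inner products
    off_diag = gram[~np.eye(batch, dtype=bool)]
    repel = np.mean(off_diag ** 2)
    return float(attract + repel)

rng = np.random.default_rng(0)
a = rng.normal(size=(8, 64))
b = a + 0.1 * rng.normal(size=(8, 64))          # noisy positive views
print(spectral_contrastive_loss(a, b))
```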
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.