Information-guided pixel augmentation for pixel-wise contrastive learning
        - URL: http://arxiv.org/abs/2211.07118v1
 - Date: Mon, 14 Nov 2022 05:12:23 GMT
 - Title: Information-guided pixel augmentation for pixel-wise contrastive learning
 - Authors: Quan Quan and Qingsong Yao and Jun Li and S. Kevin Zhou
 - Abstract summary: Pixel-wise contrastive learning helps with pixel-wise tasks such as medical landmark detection.
We propose a pixel augmentation method with a pixel granularity for enhancing unsupervised pixel-wise contrastive learning.
 - Score: 22.00687816406677
 - License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 - Abstract:   Contrastive learning (CL) is a form of self-supervised learning and has been
widely used for various tasks. Different from widely studied instance-level
contrastive learning, pixel-wise contrastive learning mainly helps with
pixel-wise tasks such as medical landmark detection. The counterpart to an
instance in instance-level CL is a pixel, along with its neighboring context,
in pixel-wise CL. Aiming to build better feature representation, there is a
vast literature about designing instance augmentation strategies for
instance-level CL; but there is little similar work on pixel augmentation for
pixel-wise CL with a pixel granularity. In this paper, we attempt to bridge
this gap. We first classify a pixel into three categories, namely low-,
medium-, and high-informative, based on the information quantity the pixel
contains. Inspired by the "InfoMin" principle, we then design separate
augmentation strategies for each category in terms of augmentation intensity
and sampling ratio. Extensive experiments validate that our information-guided
pixel augmentation strategy succeeds in encoding more discriminative
representations and surpassing other competitive approaches in unsupervised
local feature matching. Furthermore, our pretrained model improves the
performance of both one-shot and fully supervised models. To the best of our
knowledge, we are the first to propose a pixel augmentation method with a pixel
granularity for enhancing unsupervised pixel-wise contrastive learning.
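The core idea above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it assumes local variance as a stand-in for the information-quantity measure, quantile thresholds for the three categories, and category-specific Gaussian noise as the augmentation; the paper's actual measure and its InfoMin-guided intensity/sampling-ratio choices are not reproduced here.

```python
import numpy as np

def pixel_information(img, patch=3):
    # Approximate per-pixel information with local variance over a
    # small neighborhood (a stand-in for the paper's measure).
    pad = patch // 2
    padded = np.pad(img, pad, mode="reflect")
    h, w = img.shape
    info = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            info[i, j] = padded[i:i + patch, j:j + patch].var()
    return info

def classify_pixels(info, low_q=0.33, high_q=0.66):
    # Split pixels into low- (0), medium- (1), and high-informative (2)
    # categories by quantiles of the information map (thresholds are
    # illustrative, not taken from the paper).
    lo, hi = np.quantile(info, [low_q, high_q])
    cats = np.zeros(info.shape, dtype=int)
    cats[info > lo] = 1
    cats[info > hi] = 2
    return cats

def augment(img, cats, rng, intensity=(0.05, 0.15, 0.30)):
    # Apply a different augmentation intensity per category, echoing
    # the idea of separate strategies per information level.
    out = img.astype(float).copy()
    for c, sigma in enumerate(intensity):
        mask = cats == c
        out[mask] += rng.normal(0.0, sigma, mask.sum())
    return out

rng = np.random.default_rng(0)
img = rng.random((16, 16))
info = pixel_information(img)
cats = classify_pixels(info)
aug = augment(img, cats, rng)
```

In a real pixel-wise CL pipeline, the augmented view would be paired with the original and fed to a contrastive loss; here the three functions only demonstrate the classify-then-augment structure.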
 
       
      
        Related papers
        - SuperCL: Superpixel Guided Contrastive Learning for Medical Image Segmentation Pre-training [17.920724846400585]
We propose a novel contrastive learning approach named SuperCL for medical image segmentation pre-training.
Our SuperCL exploits the structural prior and pixel correlation of images by introducing two novel contrastive pairs generation strategies.
Experiments on 8 medical image datasets indicate that our SuperCL outperforms 12 existing methods.
arXiv  Detail & Related papers  (2025-04-20T20:57:03Z)
        - Towards Open-Vocabulary Semantic Segmentation Without Semantic Labels [53.8817160001038]
We propose a novel method, PixelCLIP, to adapt the CLIP image encoder for pixel-level understanding.
To address the challenges of leveraging masks without semantic labels, we devise an online clustering algorithm.
 PixelCLIP shows significant performance improvements over CLIP and competitive results compared to caption-supervised methods.
arXiv  Detail & Related papers  (2024-09-30T01:13:03Z)
        - Exploring Multi-view Pixel Contrast for General and Robust Image Forgery Localization [4.8454936010479335]
We propose a Multi-view Pixel-wise Contrastive algorithm (MPC) for image forgery localization.
Specifically, we first pre-train the backbone network with the supervised contrastive loss.
Then the localization head is fine-tuned using the cross-entropy loss, resulting in a better pixel localizer.
arXiv  Detail & Related papers  (2024-06-19T13:51:52Z)
        - Superpixel Graph Contrastive Clustering with Semantic-Invariant Augmentations for Hyperspectral Images [64.72242126879503]
Hyperspectral image (HSI) clustering is an important but challenging task.
We first use 3-D and 2-D hybrid convolutional neural networks to extract the high-order spatial and spectral features of HSI.
We then design a superpixel graph contrastive clustering model to learn discriminative superpixel representations.
arXiv  Detail & Related papers  (2024-03-04T07:40:55Z)
        - Pixel-Superpixel Contrastive Learning and Pseudo-Label Correction for Hyperspectral Image Clustering [15.366312862496226]
Contrastive learning methods excel at existing pixel-level and superpixel-level HSI clustering tasks.
The superpixel-level contrastive learning method utilizes the homogeneity of HSI and reduces computing resources.
This paper proposes a pseudo-label correction module that aligns the clustering pseudo-labels of pixels and superpixels.
arXiv  Detail & Related papers  (2023-12-15T09:19:00Z)
        - In-N-Out Generative Learning for Dense Unsupervised Video Segmentation [89.21483504654282]
In this paper, we focus on the unsupervised Video Object Segmentation (VOS) task, which learns visual correspondence from unlabeled videos.
We propose the In-aNd-Out (INO) generative learning from a purely generative perspective, which captures both high-level and fine-grained semantics.
Our INO outperforms previous state-of-the-art methods by significant margins.
arXiv  Detail & Related papers  (2022-03-29T07:56:21Z)
        - CRIS: CLIP-Driven Referring Image Segmentation [71.56466057776086]
We propose an end-to-end CLIP-Driven Referring Image Segmentation framework (CRIS).
CRIS resorts to vision-language decoding and contrastive learning for achieving the text-to-pixel alignment.
Our proposed framework significantly outperforms state-of-the-art methods without any post-processing.
arXiv  Detail & Related papers  (2021-11-30T07:29:08Z)
        - Mining Contextual Information Beyond Image for Semantic Segmentation [37.783233906684444]
The paper studies the context aggregation problem in semantic image segmentation.
It proposes to mine the contextual information beyond individual images to further augment the pixel representations.
The proposed method could be effortlessly incorporated into existing segmentation frameworks.
arXiv  Detail & Related papers  (2021-08-26T14:34:23Z)
        - HERS Superpixels: Deep Affinity Learning for Hierarchical Entropy Rate Segmentation [0.0]
We propose a two-stage graph-based framework for superpixel segmentation.
In the first stage, we introduce an efficient Deep Affinity Learning network that learns pairwise pixel affinities.
In the second stage, we propose a highly efficient superpixel method called Hierarchical Entropy Rate Segmentation (HERS).
arXiv  Detail & Related papers  (2021-06-07T16:20:04Z)
        - Exploring Cross-Image Pixel Contrast for Semantic Segmentation [130.22216825377618]
We propose a pixel-wise contrastive framework for semantic segmentation in the fully supervised setting.
The core idea is to enforce pixel embeddings belonging to the same semantic class to be more similar than embeddings from different classes.
Our method can be effortlessly incorporated into existing segmentation frameworks without extra overhead during testing.
arXiv  Detail & Related papers  (2021-01-28T11:35:32Z)
        - Propagate Yourself: Exploring Pixel-Level Consistency for Unsupervised Visual Representation Learning [60.75687261314962]
We introduce pixel-level pretext tasks for learning dense feature representations.
A pixel-to-propagation consistency task produces better results than state-of-the-art approaches.
Results demonstrate the strong potential of defining pretext tasks at the pixel level.
arXiv  Detail & Related papers  (2020-11-19T18:59:45Z)
        This list is automatically generated from the titles and abstracts of the papers in this site.
       
     
           This site does not guarantee the quality of the information provided (including all content) and is not responsible for any consequences of its use.