Learning Semantic Correspondence with Sparse Annotations
- URL: http://arxiv.org/abs/2208.06974v2
- Date: Wed, 17 Aug 2022 17:59:18 GMT
- Title: Learning Semantic Correspondence with Sparse Annotations
- Authors: Shuaiyi Huang, Luyu Yang, Bo He, Songyang Zhang, Xuming He, Abhinav
Shrivastava
- Abstract summary: Finding dense semantic correspondence is a fundamental problem in computer vision.
We propose a teacher-student learning paradigm for generating dense pseudo-labels.
We also develop two novel strategies for denoising pseudo-labels.
- Score: 66.37298464505261
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Finding dense semantic correspondence is a fundamental problem in computer
vision, which remains challenging in complex scenes due to background clutter,
extreme intra-class variation, and a severe lack of ground truth. In this
paper, we aim to address the challenge of label sparsity in semantic
correspondence by enriching supervision signals from sparse keypoint
annotations. To this end, we first propose a teacher-student learning paradigm
for generating dense pseudo-labels and then develop two novel strategies for
denoising pseudo-labels. In particular, we use spatial priors around the sparse
annotations to suppress the noisy pseudo-labels. In addition, we introduce a
loss-driven dynamic label selection strategy for label denoising. We
instantiate our paradigm with two variants of learning strategies: a single
offline teacher setting, and mutual online teachers setting. Our approach
achieves notable improvements on three challenging benchmarks for semantic
correspondence and establishes the new state-of-the-art. Project page:
https://shuaiyihuang.github.io/publications/SCorrSAN.
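As a rough illustration of the two denoising ideas in the abstract, the sketch below weights a teacher's dense pseudo-labels with a Gaussian spatial prior centered on the sparse keypoint annotations and keeps only the lowest-loss fraction of pixels. The Gaussian prior, the quantile-based selection, and the flow-field representation are illustrative assumptions, not the paper's exact formulation.
```python
# Hedged sketch: dense pseudo-label denoising for semantic correspondence,
# assuming a teacher that predicts a dense flow field (H, W, 2) and sparse
# keypoint annotations. The Gaussian spatial prior and the loss-quantile
# selection rule are illustrative choices, not the paper's exact recipe.
import torch

def denoise_pseudo_labels(teacher_flow, student_flow, keypoints, sigma=8.0, keep_ratio=0.5):
    """teacher_flow, student_flow: (H, W, 2); keypoints: (K, 2) annotated (x, y) positions."""
    H, W, _ = teacher_flow.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).float()                      # (H, W, 2)

    # Spatial prior: trust pseudo-labels near the sparse annotations.
    d2 = ((grid[None] - keypoints[:, None, None, :]) ** 2).sum(-1)    # (K, H, W)
    prior = torch.exp(-d2 / (2 * sigma ** 2)).amax(dim=0)             # (H, W) in [0, 1]

    # Loss-driven dynamic selection: keep the lowest-loss fraction of pixels.
    per_pixel_loss = (student_flow - teacher_flow).norm(dim=-1)       # (H, W)
    thresh = torch.quantile(per_pixel_loss.flatten(), keep_ratio)
    keep = (per_pixel_loss <= thresh).float()

    weight = prior * keep                                             # final per-pixel weight
    return (weight * per_pixel_loss).sum() / weight.sum().clamp(min=1e-6)

loss = denoise_pseudo_labels(torch.randn(64, 64, 2), torch.randn(64, 64, 2),
                             keypoints=torch.tensor([[10., 20.], [40., 50.]]))
```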
Related papers
- Dual-level Adaptive Self-Labeling for Novel Class Discovery in Point Cloud Segmentation [15.000460515557211]
We tackle novel class discovery in point cloud segmentation, which aims to discover novel classes based on the semantic knowledge of seen classes.
Existing work proposes an online point-wise clustering method with a simplified equal class-size constraint on the novel classes to avoid degenerate solutions.
We propose a novel self-labeling strategy that adaptively generates high-quality pseudo-labels for imbalanced classes during model training.
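One way to picture adaptive pseudo-label generation under class imbalance is a Sinkhorn-Knopp assignment with a non-uniform class-size prior; the sketch below is a generic illustration under that assumption, not the paper's actual strategy.
```python
# Hedged sketch: pseudo-label assignment for novel classes via Sinkhorn-Knopp,
# with a non-uniform class-size prior instead of a strict equal-size constraint.
# The prior `class_marginal` and the iteration count are illustrative assumptions.
import torch

def sinkhorn_pseudo_labels(logits, class_marginal, n_iters=3, eps=0.05):
    """logits: (N, C) point-to-prototype scores; class_marginal: (C,) desired class proportions."""
    Q = torch.exp(logits / eps)                  # (N, C) unnormalized transport plan
    Q = Q / Q.sum()
    N, C = Q.shape
    for _ in range(n_iters):
        Q = Q * (class_marginal / Q.sum(dim=0))  # match column sums to the class prior
        Q = Q / (N * Q.sum(dim=1, keepdim=True)) # each point keeps total mass 1/N
    return Q.argmax(dim=1)                       # hard pseudo-labels

labels = sinkhorn_pseudo_labels(torch.randn(100, 5), torch.tensor([0.4, 0.3, 0.1, 0.1, 0.1]))
```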
arXiv Detail & Related papers (2024-07-17T11:14:46Z)
- Scribble Hides Class: Promoting Scribble-Based Weakly-Supervised Semantic Segmentation with Its Class Label [16.745019028033518]
We propose a class-driven scribble promotion network, which utilizes both scribble annotations and pseudo-labels informed by image-level classes and global semantics for supervision.
In experiments on the ScribbleSup dataset with scribble annotations of varying quality, our method outperforms all previous methods, demonstrating its superiority and robustness.
arXiv Detail & Related papers (2024-02-27T14:51:56Z)
- DualCoOp++: Fast and Effective Adaptation to Multi-Label Recognition with Limited Annotations [79.433122872973]
Multi-label image recognition in the low-label regime is a challenging task of great practical significance.
We leverage the powerful alignment between textual and visual features pretrained with millions of auxiliary image-text pairs.
We introduce an efficient and effective framework called Evidence-guided Dual Context Optimization (DualCoOp++).
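As a simplified picture of prompt-based multi-label scoring, the sketch below compares an image feature against paired "class present" / "class absent" text embeddings; the random placeholder features and the temperature value are assumptions, and DualCoOp++'s evidence-guided design is not modeled.
```python
# Hedged sketch: multi-label scoring with paired positive/negative text
# embeddings per class, in the spirit of dual context optimization. The
# embeddings here are random placeholders standing in for CLIP text features.
import torch
import torch.nn.functional as F

def dual_prompt_scores(image_feat, pos_text, neg_text, temperature=0.07):
    """image_feat: (D,); pos_text, neg_text: (C, D). Returns per-class presence probability."""
    img = F.normalize(image_feat, dim=-1)
    pos = F.normalize(pos_text, dim=-1)
    neg = F.normalize(neg_text, dim=-1)
    s_pos = img @ pos.T / temperature           # (C,) similarity to "class present" prompt
    s_neg = img @ neg.T / temperature           # (C,) similarity to "class absent" prompt
    return torch.softmax(torch.stack([s_pos, s_neg], dim=-1), dim=-1)[..., 0]

probs = dual_prompt_scores(torch.randn(512), torch.randn(20, 512), torch.randn(20, 512))
```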
arXiv Detail & Related papers (2023-08-03T17:33:20Z)
- Edge Guided GANs with Multi-Scale Contrastive Learning for Semantic Image Synthesis [139.2216271759332]
We propose a novel ECGAN for the challenging semantic image synthesis task.
The semantic labels do not provide detailed structural information, making it challenging to synthesize local details and structures.
The widely adopted CNN operations such as convolution, down-sampling, and normalization usually cause spatial resolution loss.
We propose a novel contrastive learning method, which aims to enforce pixel embeddings belonging to the same semantic class to generate more similar image content.
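A minimal version of such a pixel-level objective is a supervised contrastive loss over sampled pixel embeddings, as sketched below; the sampling, temperature, and masking details are illustrative and not ECGAN's exact loss.
```python
# Hedged sketch: a supervised pixel-wise contrastive loss that pulls together
# embeddings of pixels sharing a semantic label and pushes apart the rest.
# Sampling and temperature are illustrative; this is not ECGAN's exact loss.
import torch
import torch.nn.functional as F

def pixel_contrastive_loss(embeddings, labels, temperature=0.1):
    """embeddings: (N, D) sampled pixel features; labels: (N,) semantic class ids."""
    z = F.normalize(embeddings, dim=-1)
    sim = z @ z.T / temperature                               # (N, N) pairwise similarities
    same = (labels[:, None] == labels[None, :]).float()
    eye = torch.eye(len(labels))
    pos_mask = same - same * eye                              # positives, excluding self
    logits = sim - 1e9 * eye                                  # mask out self-similarity
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    return -(pos_mask * log_prob).sum(dim=1).div(pos_count).mean()

loss = pixel_contrastive_loss(torch.randn(32, 64), torch.randint(0, 4, (32,)))
```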
arXiv Detail & Related papers (2023-07-22T14:17:19Z)
- Semantic Contrastive Bootstrapping for Single-positive Multi-label Recognition [36.3636416735057]
We present a semantic contrastive bootstrapping (Scob) approach to gradually recover the cross-object relationships.
We then propose a recurrent semantic masked transformer to extract iconic object-level representations.
Extensive experimental results demonstrate that the proposed joint learning framework surpasses the state-of-the-art models.
arXiv Detail & Related papers (2023-07-15T01:59:53Z)
- Transductive CLIP with Class-Conditional Contrastive Learning [68.51078382124331]
We propose Transductive CLIP, a novel framework for learning a classification network with noisy labels from scratch.
A class-conditional contrastive learning mechanism is proposed to mitigate the reliance on pseudo labels.
An ensemble-label strategy is adopted for pseudo-label updating to stabilize the training of deep neural networks with noisy labels.
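One plausible reading of the ensemble-label update is an exponential moving average of per-sample predictions across epochs, as sketched below; the EMA form and momentum value are assumptions rather than the paper's exact rule.
```python
# Hedged sketch: stabilizing pseudo-labels by ensembling predictions over
# epochs with an exponential moving average. The EMA form is an assumption
# about what "ensemble labels" could look like, not the paper's exact rule.
import torch

class EnsembleLabeler:
    def __init__(self, num_samples, num_classes, momentum=0.9):
        self.avg = torch.full((num_samples, num_classes), 1.0 / num_classes)
        self.momentum = momentum

    def update(self, indices, probs):
        """indices: (B,) sample ids; probs: (B, C) current softmax predictions."""
        self.avg[indices] = self.momentum * self.avg[indices] + (1 - self.momentum) * probs

    def pseudo_labels(self, indices):
        return self.avg[indices].argmax(dim=1)

labeler = EnsembleLabeler(num_samples=1000, num_classes=10)
labeler.update(torch.tensor([0, 1, 2]), torch.softmax(torch.randn(3, 10), dim=1))
print(labeler.pseudo_labels(torch.tensor([0, 1, 2])))
```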
arXiv Detail & Related papers (2022-06-13T14:04:57Z)
- Semi-Supervised Learning of Semantic Correspondence with Pseudo-Labels [26.542718087103665]
SemiMatch is a semi-supervised solution for establishing dense correspondences across semantically similar images.
Our framework generates pseudo-labels from the model's own predictions between the source and a weakly-augmented target, and then uses these pseudo-labels to train the model between the source and a strongly-augmented target.
In experiments, SemiMatch achieves state-of-the-art performance on various benchmarks, especially on PF-Willow by a large margin.
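The weak-to-strong scheme can be pictured as confidence-thresholded consistency over matching scores, as in the hedged sketch below; the matching representation, the threshold, and the cross-entropy form are simplified assumptions, and SemiMatch's actual pipeline is richer.
```python
# Hedged sketch: weak-to-strong consistency training over matching scores.
# The confidence threshold and loss form are simplified assumptions.
import torch
import torch.nn.functional as F

def weak_to_strong_loss(match_logits_weak, match_logits_strong, conf_thresh=0.7):
    """match_logits_*: (N_src, N_tgt) matching scores from source keypoints to
    target candidates under weak / strong augmentation of the target image."""
    with torch.no_grad():
        probs = torch.softmax(match_logits_weak, dim=1)
        conf, pseudo = probs.max(dim=1)                 # pseudo-matches and their confidence
        mask = (conf >= conf_thresh).float()
    ce = F.cross_entropy(match_logits_strong, pseudo, reduction="none")
    return (mask * ce).sum() / mask.sum().clamp(min=1.0)

loss = weak_to_strong_loss(torch.randn(50, 100), torch.randn(50, 100))
```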
arXiv Detail & Related papers (2022-03-30T03:52:50Z)
- Beyond Semantic to Instance Segmentation: Weakly-Supervised Instance Segmentation via Semantic Knowledge Transfer and Self-Refinement [31.42799434158569]
Weakly-supervised instance segmentation (WSIS) is a more challenging task because instance-wise localization using only image-level labels is difficult.
We propose a novel approach that consists of two innovative components.
First, we design a semantic knowledge transfer to obtain pseudo instance labels by transferring the knowledge of WSSS to WSIS.
Second, we propose a self-refinement method that refines the pseudo instance labels in a self-supervised scheme and uses them for training in an online manner.
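A deliberately naive illustration of semantic-to-instance knowledge transfer is to split each class's semantic mask into connected components and treat each blob as a pseudo instance, as sketched below; this only conveys the general idea and is not the paper's transfer mechanism.
```python
# Hedged sketch: a naive form of semantic-to-instance knowledge transfer that
# turns each class's semantic mask into pseudo instances via connected
# components. This is only an illustration of the idea, not the paper's method.
import numpy as np
from scipy import ndimage

def pseudo_instances_from_semantic(sem_mask):
    """sem_mask: (H, W) int array of class ids, 0 = background.
    Returns (H, W) instance id map and a list of (instance_id, class_id)."""
    inst_map = np.zeros_like(sem_mask)
    records, next_id = [], 1
    for cls in np.unique(sem_mask):
        if cls == 0:
            continue
        components, n = ndimage.label(sem_mask == cls)   # split this class into blobs
        for comp in range(1, n + 1):
            inst_map[components == comp] = next_id
            records.append((next_id, int(cls)))
            next_id += 1
    return inst_map, records

sem = np.zeros((8, 8), dtype=int); sem[1:3, 1:3] = 1; sem[5:7, 5:7] = 1
inst, recs = pseudo_instances_from_semantic(sem)
```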
arXiv Detail & Related papers (2021-09-20T12:31:44Z)
- Refining Pseudo Labels with Clustering Consensus over Generations for Unsupervised Object Re-identification [84.72303377833732]
Unsupervised object re-identification aims to learn discriminative representations for object retrieval without any annotations.
We propose to estimate pseudo label similarities between consecutive training generations with clustering consensus and refine pseudo labels with temporally propagated and ensembled pseudo labels.
The proposed pseudo label refinery strategy is simple yet effective and can be seamlessly integrated into existing clustering-based unsupervised re-identification methods.
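A simple way to picture clustering consensus is the IoU between clusters from consecutive training generations, as in the sketch below; the label-propagation step that would use these scores is omitted, and the IoU measure is an assumption about the general idea rather than the paper's exact estimator.
```python
# Hedged sketch: clustering consensus between two consecutive generations of
# pseudo-labels, measured as IoU between clusters. The label-refinement step
# that would follow is omitted; this only shows the consensus estimate.
import numpy as np

def cluster_consensus(labels_prev, labels_curr):
    """labels_*: (N,) cluster assignments from consecutive generations.
    Returns a dict mapping (prev_cluster, curr_cluster) -> IoU of their members."""
    consensus = {}
    for cp in np.unique(labels_prev):
        prev_set = set(np.where(labels_prev == cp)[0])
        for cc in np.unique(labels_curr):
            curr_set = set(np.where(labels_curr == cc)[0])
            inter = len(prev_set & curr_set)
            if inter:
                consensus[(int(cp), int(cc))] = inter / len(prev_set | curr_set)
    return consensus

c = cluster_consensus(np.array([0, 0, 1, 1, 2]), np.array([1, 1, 0, 2, 2]))
```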
arXiv Detail & Related papers (2021-06-11T02:42:42Z)
- Learning Not to Learn in the Presence of Noisy Labels [104.7655376309784]
We show that a new class of loss functions called the gambler's loss provides strong robustness to label noise across various levels of corruption.
We show that training with this loss function encourages the model to "abstain" from learning on the data points with noisy labels.
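The gambler's loss is commonly written with an extra abstention output and a payoff parameter o, roughly -log(p_y + p_abstain / o); the sketch below uses that common form, which should be treated as an assumption about the paper's exact variant.
```python
# Hedged sketch: a gambler's loss with an extra "abstain" output, written in
# the commonly cited form -log(p_y + p_abstain / o) with payoff o > 1. Treat
# the exact formulation as an assumption; see the paper for its variant.
import torch

def gamblers_loss(logits, targets, payoff=2.6):
    """logits: (N, C+1), last column is the abstention score; targets: (N,) in [0, C)."""
    probs = torch.softmax(logits, dim=1)
    p_true = probs.gather(1, targets[:, None]).squeeze(1)   # probability of the given label
    p_abstain = probs[:, -1]                                 # probability of abstaining
    return -torch.log(p_true + p_abstain / payoff + 1e-12).mean()

loss = gamblers_loss(torch.randn(8, 11), torch.randint(0, 10, (8,)))
```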
arXiv Detail & Related papers (2020-02-16T09:12:27Z)