MaskCon: Masked Contrastive Learning for Coarse-Labelled Dataset
- URL: http://arxiv.org/abs/2303.12756v1
- Date: Wed, 22 Mar 2023 17:08:31 GMT
- Title: MaskCon: Masked Contrastive Learning for Coarse-Labelled Dataset
- Authors: Chen Feng, Ioannis Patras
- Abstract summary: We propose a contrastive learning method, called $\textbf{Mask}$ed $\textbf{Con}$trastive learning ($\textbf{MaskCon}$).
For each sample, our method generates soft labels against other samples and against another augmented view of the sample in question, with the aid of coarse labels.
Our method achieves significant improvements over the current state-of-the-art on various datasets.
- Score: 19.45520684918576
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning has achieved great success in recent years with the aid of
advanced neural network structures and large-scale human-annotated datasets.
However, it is often costly and difficult to accurately and efficiently
annotate large-scale datasets, especially for some specialized domains where
fine-grained labels are required. In this setting, coarse labels are much
easier to acquire as they do not require expert knowledge. In this work, we
propose a contrastive learning method, called $\textbf{Mask}$ed
$\textbf{Con}$trastive learning~($\textbf{MaskCon}$) to address the
under-explored problem setting, where we learn with a coarse-labelled dataset
in order to address a finer labelling problem. More specifically, within the
contrastive learning framework, for each sample our method generates soft
labels against other samples and against another augmented view of the sample
in question, with the aid of coarse labels. In contrast to self-supervised
contrastive learning, where only a sample's own augmentations are considered
hard positives, and to supervised contrastive learning, where only samples with
the same coarse labels are considered hard positives, we propose soft labels
based on sample distances that are masked by the coarse labels. This allows us
to utilize both inter-sample relations and coarse labels. We demonstrate that our
method can obtain as special cases many existing state-of-the-art works and
that it provides tighter bounds on the generalization error. Experimentally,
our method achieves significant improvement over the current state-of-the-art
on various datasets, including CIFAR10, CIFAR100, ImageNet-1K, Stanford Online
Products and Stanford Cars196 datasets. Code and annotations are available at
https://github.com/MrChenFeng/MaskCon_CVPR2023.
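To make the masking mechanism concrete, below is a minimal PyTorch-style sketch of soft-label generation in the spirit of MaskCon. It is not the authors' implementation (see the repository above for that); the function name, the memory-bank interface, the temperature `tau`, and the unit weight on the hard positive are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def maskcon_soft_labels(z_q, bank, coarse_q, coarse_bank, tau=0.1):
    """Illustrative sketch of MaskCon-style soft labels (not the official code).

    z_q:         (B, D) query embeddings, one augmented view per sample
    bank:        (N, D) embeddings of other samples (e.g. a memory bank)
    coarse_q:    (B,)   coarse labels of the queries
    coarse_bank: (N,)   coarse labels of the bank entries
    Returns:     (B, 1 + N) soft labels over [own augmented view, bank entries]
    """
    # Cosine similarities between each query and every bank entry.
    z_q = F.normalize(z_q, dim=1)
    bank = F.normalize(bank, dim=1)
    sims = z_q @ bank.t() / tau                                 # (B, N)

    # Coarse-label mask: a bank entry may receive soft-positive weight
    # only if it shares the query's coarse label.
    mask = coarse_q.unsqueeze(1).eq(coarse_bank.unsqueeze(0))   # (B, N), bool

    # Distance-based soft labels, masked by the coarse labels.
    soft = torch.softmax(sims.masked_fill(~mask, float('-inf')), dim=1)
    soft = torch.nan_to_num(soft, nan=0.0)  # rows with no coarse match -> 0

    # The sample's other augmented view is always a hard positive (weight 1,
    # an assumed convention here); prepend it and renormalise.
    hard = torch.ones(z_q.size(0), 1, device=z_q.device)
    labels = torch.cat([hard, soft], dim=1)
    return labels / labels.sum(dim=1, keepdim=True)
```

These targets would then be paired with the model's contrastive logits over the same candidates (the other augmented view plus the bank), e.g. via a cross-entropy between the two distributions. Note how the abstract's special cases fall out of this sketch: masking everything except the augmented view recovers self-supervised contrastive learning, while replacing the softmax weights with uniform weights over same-coarse-label entries recovers supervised contrastive learning.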
Related papers
- MixSup: Mixed-grained Supervision for Label-efficient LiDAR-based 3D
Object Detection [59.1417156002086]
MixSup is a more practical paradigm that simultaneously utilizes massive cheap coarse labels and a limited number of accurate labels for Mixed-grained Supervision.
MixSup achieves up to 97.31% of fully supervised performance, using cheap cluster annotations and only 10% box annotations.
arXiv Detail & Related papers (2024-01-29T17:05:19Z) - Inconsistency Masks: Removing the Uncertainty from Input-Pseudo-Label Pairs [0.0]
Inconsistency Masks (IM) is a novel approach that filters uncertainty in image-pseudo-label pairs to substantially enhance segmentation quality.
We achieve strong segmentation results with as little as 10% labeled data, across four diverse datasets.
Three of our hybrid approaches even outperform models trained on the fully labeled dataset.
arXiv Detail & Related papers (2024-01-25T18:46:35Z) - Virtual Category Learning: A Semi-Supervised Learning Method for Dense
Prediction with Extremely Limited Labels [63.16824565919966]
This paper proposes to use confusing samples proactively without label correction.
A Virtual Category (VC) is assigned to each confusing sample in such a way that it can safely contribute to the model optimisation.
Our intriguing findings highlight the usage of VC learning in dense vision tasks.
arXiv Detail & Related papers (2023-12-02T16:23:52Z) - Robust Assignment of Labels for Active Learning with Sparse and Noisy
Annotations [0.17188280334580192]
Supervised classification algorithms are used to solve a growing number of real-life problems around the globe.
Unfortunately, acquiring good-quality annotations for many tasks is infeasible or too expensive to be done in practice.
We propose two novel annotation unification algorithms that utilize unlabeled parts of the sample space.
arXiv Detail & Related papers (2023-07-25T19:40:41Z) - Trustable Co-label Learning from Multiple Noisy Annotators [68.59187658490804]
Supervised deep learning depends on massive accurately annotated examples.
A typical alternative is learning from multiple noisy annotators.
This paper proposes a data-efficient approach called Trustable Co-label Learning (TCL).
arXiv Detail & Related papers (2022-03-08T16:57:00Z) - GuidedMix-Net: Semi-supervised Semantic Segmentation by Using Labeled
Images as Reference [90.5402652758316]
We propose a novel method for semi-supervised semantic segmentation named GuidedMix-Net.
It uses labeled information to guide the learning of unlabeled instances.
It achieves competitive segmentation accuracy and significantly improves the mIoU by +7% compared to previous approaches.
arXiv Detail & Related papers (2021-12-28T06:48:03Z) - GuidedMix-Net: Learning to Improve Pseudo Masks Using Labeled Images as
Reference [153.354332374204]
We propose a novel method for semi-supervised semantic segmentation named GuidedMix-Net.
We first introduce a feature alignment objective between labeled and unlabeled data to capture potentially similar image pairs.
MITrans is shown to be a powerful knowledge module for progressively refining the features of unlabeled data.
Along with supervised learning for labeled data, the prediction of unlabeled data is jointly learned with the generated pseudo masks.
arXiv Detail & Related papers (2021-06-29T02:48:45Z) - OpenMix: Reviving Known Knowledge for Discovering Novel Visual
Categories in An Open World [127.64076228829606]
We introduce OpenMix to mix the unlabeled examples from an open set and the labeled examples from known classes.
OpenMix helps to prevent the model from overfitting on unlabeled samples that may be assigned with wrong pseudo-labels.
arXiv Detail & Related papers (2020-04-12T05:52:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.