OpenMix: Reviving Known Knowledge for Discovering Novel Visual
Categories in An Open World
- URL: http://arxiv.org/abs/2004.05551v1
- Date: Sun, 12 Apr 2020 05:52:39 GMT
- Title: OpenMix: Reviving Known Knowledge for Discovering Novel Visual
Categories in An Open World
- Authors: Zhun Zhong, Linchao Zhu, Zhiming Luo, Shaozi Li, Yi Yang, Nicu Sebe
- Abstract summary: We introduce OpenMix to mix the unlabeled examples from an open set and the labeled examples from known classes.
OpenMix helps to prevent the model from overfitting on unlabeled samples that may be assigned wrong pseudo-labels.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we tackle the problem of discovering new classes in unlabeled
visual data given labeled data from disjoint classes. Existing methods
typically first pre-train a model with labeled data, and then identify new
classes in unlabeled data via unsupervised clustering. However, the labeled
data that provide essential knowledge are often underexplored in the second
step. The challenge is that the labeled and unlabeled examples are from
non-overlapping classes, which makes it difficult to build the learning
relationship between them. In this work, we introduce OpenMix to mix the
unlabeled examples from an open set and the labeled examples from known
classes, where their non-overlapping labels and pseudo-labels are
simultaneously mixed into a joint label distribution. OpenMix dynamically
compounds examples in two ways. First, we produce mixed training images by
combining labeled examples with unlabeled examples. Benefiting from the unique
prior knowledge available in novel class discovery, the resulting pseudo-labels
are more credible than the original predictions on the unlabeled data. As a
result, OpenMix helps to prevent the model from overfitting on unlabeled
samples that may be assigned wrong pseudo-labels. Second, the first mixing
strategy encourages unlabeled examples with high class probabilities to be
predicted with considerable accuracy. We treat these examples as reliable
anchors and further mix them with the remaining unlabeled samples. This enables
us to generate more combinations among unlabeled examples and to exploit finer
object relations among the new classes. Experiments
on three classification datasets demonstrate the effectiveness of the proposed
OpenMix, which is superior to state-of-the-art methods in novel class
discovery.
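
To make the mixing step concrete, below is a minimal PyTorch-style sketch of how labeled examples and pseudo-labeled open-set examples could be combined into mixed images and joint label distributions. The function name, the Beta-distributed mixing coefficient, and the anchor-selection threshold are illustrative assumptions for this sketch, not the authors' released implementation.

```python
# Illustrative sketch of OpenMix-style mixing (not the authors' code).
# Assumptions: labeled images carry one-hot labels over the known classes,
# unlabeled images carry soft pseudo-labels over the novel classes, the two
# batches have equal size, and the joint label space is [known | novel].
import torch

def openmix(labeled_x, labeled_y_onehot, unlabeled_x, pseudo_y,
            num_known, num_novel, alpha=1.0, anchor_thresh=0.9):
    """Mix labeled and unlabeled examples into joint images and joint labels."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()

    # Embed both label sets into the joint (known + novel) label space.
    joint_labeled = torch.cat(
        [labeled_y_onehot,
         torch.zeros(labeled_y_onehot.size(0), num_novel)], dim=1)
    joint_unlabeled = torch.cat(
        [torch.zeros(pseudo_y.size(0), num_known), pseudo_y], dim=1)

    # First mixing: labeled <-> unlabeled. The labeled part of each mixed
    # label is known to be correct, which tempers noisy pseudo-labels.
    mixed_x = lam * labeled_x + (1 - lam) * unlabeled_x
    mixed_y = lam * joint_labeled + (1 - lam) * joint_unlabeled

    # Second mixing: treat confidently predicted unlabeled examples as anchors
    # (anchor_thresh is an illustrative choice) and mix them with the rest of
    # the unlabeled batch to generate more combinations among novel classes.
    conf, _ = pseudo_y.max(dim=1)
    anchors = conf > anchor_thresh
    if anchors.any():
        idx = torch.randint(0, int(anchors.sum()), (unlabeled_x.size(0),))
        anchor_x = unlabeled_x[anchors][idx]
        anchor_y = joint_unlabeled[anchors][idx]
        mixed_ux = lam * unlabeled_x + (1 - lam) * anchor_x
        mixed_uy = lam * joint_unlabeled + (1 - lam) * anchor_y
        mixed_x = torch.cat([mixed_x, mixed_ux], dim=0)
        mixed_y = torch.cat([mixed_y, mixed_uy], dim=0)

    return mixed_x, mixed_y

# Hypothetical usage: 4 known classes, 3 novel classes, batches of 8 images.
# x_l, x_u: (8, 3, 32, 32); y_l: (8, 4) one-hot; p_u: (8, 3) soft pseudo-labels.
# mixed_x, mixed_y = openmix(x_l, y_l, x_u, p_u, num_known=4, num_novel=3)
```

In training, such a mixed batch would be scored by a classifier over the joint (known + novel) label space and optimized against the soft mixed labels, alongside the usual supervised and clustering objectives; the exact losses and schedules are those described in the paper.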
Related papers
- Towards Imbalanced Large Scale Multi-label Classification with Partially Annotated Labels
Multi-label classification is a widely encountered problem in daily life, where an instance can be associated with multiple classes.
In this work, we address the issue of label imbalance and investigate how to train neural networks using partial labels.
arXiv Detail & Related papers (2023-07-31T21:50:48Z)
- SLaM: Student-Label Mixing for Distillation with Unlabeled Examples
We present a principled method for knowledge distillation with unlabeled examples that we call Student-Label Mixing (SLaM).
Evaluated on several standard benchmarks, SLaM consistently improves over prior approaches.
We also give an algorithm improving the best-known sample complexity for learning halfspaces with margin under random classification noise.
arXiv Detail & Related papers (2023-02-08T00:14:44Z)
- Dist-PU: Positive-Unlabeled Learning from a Label Distribution Perspective
We propose a label distribution perspective for PU learning in this paper.
Motivated by this, we propose to pursue consistency between the predicted and ground-truth label distributions.
Experiments on three benchmark datasets validate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-12-06T07:38:29Z)
- GuidedMix-Net: Semi-supervised Semantic Segmentation by Using Labeled Images as Reference
We propose a novel method for semi-supervised semantic segmentation named GuidedMix-Net.
It uses labeled information to guide the learning of unlabeled instances.
It achieves competitive segmentation accuracy and significantly improves the mIoU by +7% compared to previous approaches.
arXiv Detail & Related papers (2021-12-28T06:48:03Z)
- OpenCoS: Contrastive Semi-supervised Learning for Handling Open-set Unlabeled Data
Unlabeled data may include out-of-class samples in practice.
OpenCoS is a method for handling this realistic semi-supervised learning scenario.
arXiv Detail & Related papers (2021-06-29T06:10:05Z)
- GuidedMix-Net: Learning to Improve Pseudo Masks Using Labeled Images as Reference
We propose a novel method for semi-supervised semantic segmentation named GuidedMix-Net.
We first introduce a feature alignment objective between labeled and unlabeled data to capture potentially similar image pairs.
MITrans is shown to be a powerful knowledge module for further progressively refining the features of unlabeled data.
Along with supervised learning for labeled data, the prediction of unlabeled data is jointly learned with the generated pseudo masks.
arXiv Detail & Related papers (2021-06-29T02:48:45Z)
- Automatically Discovering and Learning New Visual Categories with Ranking Statistics
We tackle the problem of discovering novel classes in an image collection given labelled examples of other classes.
We learn a general-purpose clustering model and use the latter to identify the new classes in the unlabelled data.
We evaluate our approach on standard classification benchmarks and outperform current methods for novel category discovery by a significant margin.
arXiv Detail & Related papers (2020-02-13T18:53:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.