PMAL: Open Set Recognition via Robust Prototype Mining
- URL: http://arxiv.org/abs/2203.08569v1
- Date: Wed, 16 Mar 2022 11:58:27 GMT
- Title: PMAL: Open Set Recognition via Robust Prototype Mining
- Authors: Jing Lu, Yunxu Xu, Hao Li, Zhanzhan Cheng and Yi Niu
- Abstract summary: We propose a novel Prototype Mining And Learning (PMAL) framework.
It introduces a prototype mining mechanism before the phase of optimizing the embedding space.
We show the remarkable performance of the proposed framework compared to state-of-the-art methods.
- Score: 31.326630023828187
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Open Set Recognition (OSR) is an emerging topic. Besides recognizing
predefined classes, the system needs to reject unknowns. Prototype learning
is a promising way to handle the problem, as its ability to improve
intra-class compactness of representations is much needed for discriminating
between knowns and unknowns. In this work, we propose a novel Prototype
Mining And Learning (PMAL) framework. It has a prototype mining mechanism
before the phase of optimizing the embedding space, explicitly considering two
crucial properties, namely the high quality and diversity of the prototype set.
Concretely, a set of high-quality candidates is first extracted from
training samples based on data uncertainty learning, avoiding interference
from unexpected noise. Considering the multifarious appearance of objects even
within a single category, a diversity-based strategy for prototype set filtering is
proposed. Accordingly, the embedding space can be better optimized to
discriminate among the predefined classes and between knowns and unknowns.
Extensive experiments verify the two good characteristics (i.e., high quality
and diversity) embraced in prototype mining, and show the remarkable
performance of the proposed framework compared to state-of-the-art methods.
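The abstract describes the mining pipeline only at a high level. As a rough illustration of the two properties it names (high quality via uncertainty, diversity via filtering) plus distance-based open-set rejection, here is a minimal sketch; the function names, the greedy farthest-point heuristic, and the rejection threshold are assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

def mine_prototypes(feats, uncertainties, k_candidates=10, k_prototypes=3):
    """Select low-uncertainty candidates, then greedily keep a diverse subset."""
    # High-quality candidates: the k_candidates lowest-uncertainty embeddings.
    order = np.argsort(uncertainties)
    candidates = feats[order[:k_candidates]]

    # Diversity filtering: greedy farthest-point selection in embedding space.
    prototypes = [candidates[0]]
    while len(prototypes) < k_prototypes:
        # Distance from each candidate to its nearest already-chosen prototype.
        dists = np.min(
            [np.linalg.norm(candidates - p, axis=1) for p in prototypes], axis=0
        )
        prototypes.append(candidates[np.argmax(dists)])
    return np.stack(prototypes)

def predict_open_set(x, prototypes_by_class, threshold):
    """Assign the nearest class prototype, or reject the sample as unknown."""
    dists = {c: np.min(np.linalg.norm(P - x, axis=1))
             for c, P in prototypes_by_class.items()}
    best = min(dists, key=dists.get)
    return best if dists[best] <= threshold else "unknown"
```

In this toy version, a sample far from every class's prototype set is rejected; the actual framework instead optimizes the embedding space around the mined prototypes.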
Related papers
- Learning Common Rationale to Improve Self-Supervised Representation for Fine-Grained Visual Recognition Problems [61.11799513362704]
We propose learning an additional screening mechanism to identify discriminative clues commonly seen across instances and classes.
We show that a common rationale detector can be learned by simply exploiting the GradCAM induced from the SSL objective.
arXiv Detail & Related papers (2023-03-03T02:07:40Z)
- Learning Classifiers of Prototypes and Reciprocal Points for Universal Domain Adaptation [79.62038105814658]
Universal Domain Adaptation aims to transfer knowledge between datasets by handling two shifts: domain shift and category shift.
The main challenge is correctly distinguishing unknown target samples while adapting the distribution of known-class knowledge from source to target.
Most existing methods approach this problem by first training on the target-adapted known classes and then relying on a single threshold to distinguish unknown target samples.
arXiv Detail & Related papers (2022-12-16T09:01:57Z)
- Automatically Discovering Novel Visual Categories with Self-supervised Prototype Learning [68.63910949916209]
This paper tackles the problem of novel category discovery (NCD), which aims to discriminate unknown categories in large-scale image collections.
We propose a novel adaptive prototype learning method consisting of two main stages: prototypical representation learning and prototypical self-training.
We conduct extensive experiments on four benchmark datasets and demonstrate the effectiveness and robustness of the proposed method with state-of-the-art performance.
arXiv Detail & Related papers (2022-08-01T16:34:33Z)
- Dual Prototypical Contrastive Learning for Few-shot Semantic Segmentation [55.339405417090084]
We propose a dual prototypical contrastive learning approach tailored to the few-shot semantic segmentation (FSS) task.
The main idea is to make the prototypes more discriminative by increasing inter-class distance while reducing intra-class distance in the prototype feature space.
We demonstrate that the proposed dual contrastive learning approach outperforms state-of-the-art FSS methods on PASCAL-5i and COCO-20i datasets.
arXiv Detail & Related papers (2021-11-09T08:14:50Z)
- Prototype Completion for Few-Shot Learning [13.63424509914303]
Few-shot learning aims to recognize novel classes with few examples.
Pre-training based methods effectively tackle the problem by pre-training a feature extractor and then fine-tuning it through nearest-centroid-based meta-learning.
We propose a novel prototype completion based meta-learning framework.
arXiv Detail & Related papers (2021-08-11T03:44:00Z)
- Few-shot Action Recognition with Prototype-centered Attentive Learning [88.10852114988829]
The Prototype-centered Attentive Learning (PAL) model is composed of two novel components.
First, a prototype-centered contrastive learning loss is introduced to complement the conventional query-centered learning objective.
Second, PAL integrates an attentive hybrid learning mechanism that can minimize the negative impacts of outliers.
arXiv Detail & Related papers (2021-01-20T11:48:12Z)
- Prototype Completion with Primitive Knowledge for Few-Shot Learning [20.449056536438658]
Few-shot learning is a challenging task, which aims to learn a classifier for novel classes with few examples.
Pre-training based meta-learning methods effectively tackle the problem by pre-training a feature extractor and then fine-tuning it through nearest-centroid-based meta-learning.
We propose a novel prototype completion based meta-learning framework.
arXiv Detail & Related papers (2020-09-10T16:09:34Z)
- Open Set Recognition with Conditional Probabilistic Generative Models [51.40872765917125]
We propose Conditional Probabilistic Generative Models (CPGM) for open set recognition.
CPGM can not only detect unknown samples but also classify known classes, by forcing different latent features to approximate conditional Gaussian distributions.
Experiment results on multiple benchmark datasets reveal that the proposed method significantly outperforms the baselines.
arXiv Detail & Related papers (2020-08-12T06:23:49Z)
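CPGM's core idea, that class-conditional latent features approximate Gaussian distributions, lends itself to a simple rejection rule: score a sample under each class's Gaussian and reject it when even the best log-likelihood is low. The sketch below assumes diagonal Gaussians, and its function names and threshold are illustrative, not the paper's exact model.

```python
import numpy as np

def fit_class_gaussians(latents, labels):
    """Fit a diagonal Gaussian (mean, variance) to each class's latent features."""
    params = {}
    for c in np.unique(labels):
        z = latents[labels == c]
        params[c] = (z.mean(axis=0), z.var(axis=0) + 1e-6)  # variance floor
    return params

def log_likelihood(z, mean, var):
    # Diagonal-Gaussian log-density, dropping the constant term.
    return -0.5 * np.sum(np.log(var) + (z - mean) ** 2 / var)

def classify_or_reject(z, params, threshold):
    """Return the most likely known class, or 'unknown' if all scores are low."""
    scores = {c: log_likelihood(z, m, v) for c, (m, v) in params.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else "unknown"
```

In CPGM the latent distributions are shaped during training rather than fit post hoc, but the detect-and-classify decision follows this same likelihood-thresholding pattern.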
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.