Spacing Loss for Discovering Novel Categories
- URL: http://arxiv.org/abs/2204.10595v1
- Date: Fri, 22 Apr 2022 09:37:11 GMT
- Title: Spacing Loss for Discovering Novel Categories
- Authors: K J Joseph, Sujoy Paul, Gaurav Aggarwal, Soma Biswas, Piyush Rai, Kai
Han, Vineeth N Balasubramanian
- Abstract summary: Novel Class Discovery (NCD) is a learning paradigm, where a machine learning model is tasked to semantically group instances from unlabeled data.
We first characterize existing NCD approaches into single-stage and two-stage methods based on whether they require access to labeled and unlabeled data together.
We devise a simple yet powerful loss function that enforces separability in the latent space using cues from multi-dimensional scaling.
- Score: 72.52222295216062
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Novel Class Discovery (NCD) is a learning paradigm, where a machine learning
model is tasked to semantically group instances from unlabeled data, by
utilizing labeled instances from a disjoint set of classes. In this work, we
first characterize existing NCD approaches into single-stage and two-stage
methods based on whether they require access to labeled and unlabeled data
together while discovering new classes. Next, we devise a simple yet powerful
loss function that enforces separability in the latent space using cues from
multi-dimensional scaling, which we refer to as Spacing Loss. Our proposed
formulation can either operate as a standalone method or can be plugged into
existing methods to enhance them. We validate the efficacy of Spacing Loss with
thorough experimental evaluation across multiple settings on CIFAR-10 and
CIFAR-100 datasets.
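The abstract does not give the exact formulation of Spacing Loss, and the paper derives its cues from multi-dimensional scaling; the sketch below substitutes a simpler stand-in, a center-margin hinge, purely to illustrate what "enforcing separability in the latent space" can look like. All names and the margin parameter are hypothetical.

```python
import numpy as np

def spacing_loss(z, assignments, margin=2.0):
    """Hypothetical separability ('spacing') loss sketch.

    z           : (n, d) array of latent embeddings
    assignments : (n,) integer cluster assignments (pseudo-labels)
    margin      : minimum desired distance between cluster centers

    Pulls samples toward their cluster center and hinges pairwise
    center distances up to `margin`, so clusters occupy
    well-separated regions of the latent space.
    """
    labels = np.unique(assignments)
    centers = np.stack([z[assignments == k].mean(axis=0) for k in labels])
    # Map each sample to the row index of its own cluster center.
    idx = np.searchsorted(labels, assignments)
    # Compactness term: mean squared distance of samples to their center.
    pull = np.mean(np.sum((z - centers[idx]) ** 2, axis=1))
    # Spacing term: penalize center pairs closer than `margin`.
    diff = centers[:, None, :] - centers[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    iu = np.triu_indices(len(labels), k=1)
    push = np.mean(np.maximum(0.0, margin - dist[iu]) ** 2) if iu[0].size else 0.0
    return pull + push
```

Well-spaced clusters incur only the small compactness term, while overlapping clusters pay the hinge penalty, which is the qualitative behavior a separability loss is meant to enforce.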
Related papers
- Exclusive Style Removal for Cross Domain Novel Class Discovery [15.868889486516306]
Novel Class Discovery (NCD) is a promising field in open-world learning.
We introduce an exclusive style removal module for extracting style information that is distinctive from the baseline features.
This module is easy to integrate with other NCD methods, acting as a plug-in to improve performance on novel classes with different distributions.
arXiv Detail & Related papers (2024-06-26T07:44:27Z)
- Semi-Supervised End-To-End Contrastive Learning For Time Series Classification [10.635321868623883]
Time series classification is a critical task in various domains, such as finance, healthcare, and sensor data analysis.
We propose an end-to-end model called SLOTS (Semi-supervised Learning fOr Time clasSification).
arXiv Detail & Related papers (2023-10-13T04:22:21Z)
- PromptCAL: Contrastive Affinity Learning via Auxiliary Prompts for Generalized Novel Category Discovery [39.03732147384566]
Generalized Novel Category Discovery (GNCD) setting aims to categorize unlabeled training data coming from known and novel classes.
We propose Contrastive Affinity Learning method with auxiliary visual Prompts, dubbed PromptCAL, to address this challenging problem.
Our approach discovers reliable pairwise sample affinities to learn better semantic clustering of both known and novel classes for the class token and visual prompts.
arXiv Detail & Related papers (2022-12-11T20:06:14Z)
- An Embarrassingly Simple Approach to Semi-Supervised Few-Shot Learning [58.59343434538218]
We propose a simple but quite effective approach to predict accurate negative pseudo-labels of unlabeled data from an indirect learning perspective.
Our approach can be implemented in just few lines of code by only using off-the-shelf operations.
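This summary does not spell out the paper's exact operations, but the generic negative pseudo-labeling idea it builds on can be sketched in a few lines with off-the-shelf operations: treat each unlabeled sample's lowest-probability classes as classes it is *not*, and penalize the model for assigning them mass. Function names and the top-k choice here are illustrative assumptions.

```python
import numpy as np

def negative_pseudo_labels(probs, k=1):
    """For each row of class probabilities, pick the k classes with the
    lowest predicted probability as negative pseudo-labels
    (classes the sample most likely does NOT belong to)."""
    return np.argsort(probs, axis=1)[:, :k]

def complementary_loss(probs, neg_labels, eps=1e-12):
    """Mean of -log(1 - p) over the chosen negative classes: pushes the
    model's probability for those classes toward zero."""
    rows = np.arange(len(probs))[:, None]
    p_neg = probs[rows, neg_labels]
    return -np.mean(np.log(1.0 - p_neg + eps))
```

Because predicting which classes a sample does *not* belong to is much easier than predicting its true class, these negative labels tend to be accurate even when positive pseudo-labels are noisy.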
arXiv Detail & Related papers (2022-09-28T02:11:34Z)
- Novel Class Discovery without Forgetting [72.52222295216062]
We identify and formulate a new, pragmatic problem setting of NCDwF: Novel Class Discovery without Forgetting.
We propose a machine learning model to incrementally discover novel categories of instances from unlabeled data.
We introduce experimental protocols based on CIFAR-10, CIFAR-100 and ImageNet-1000 to measure the trade-off between knowledge retention and novel class discovery.
arXiv Detail & Related papers (2022-07-21T17:54:36Z)
- Unsupervised Space Partitioning for Nearest Neighbor Search [6.516813715425121]
We propose an end-to-end learning framework that couples the partitioning and learning-to-search steps using a custom loss function.
A key advantage of our proposed solution is that it does not require any expensive pre-processing of the dataset.
We show that our method beats the state-of-the-art space partitioning method and the ubiquitous K-means clustering method.
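The paper's learned partitioner is not described in this summary; the sketch below shows only the K-means baseline it compares against, in the usual inverted-file style: cluster the dataset, then answer queries by scanning only the few nearest partitions. Names and parameters (`nprobe`, iteration count) are illustrative.

```python
import numpy as np

def kmeans(x, k, iters=20, seed=0):
    """Plain Lloyd's k-means: the ubiquitous baseline space partitioner."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute means.
        assign = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = x[assign == j].mean(axis=0)
    return centers, assign

def search(query, x, centers, assign, nprobe=1):
    """Approximate nearest neighbor: visit only the `nprobe` partitions
    whose centers are closest to the query, and scan their members."""
    order = np.argsort(((centers - query) ** 2).sum(-1))[:nprobe]
    cand = np.where(np.isin(assign, order))[0]
    return cand[np.argmin(((x[cand] - query) ** 2).sum(-1))]
```

With `nprobe` equal to the number of partitions the search is exhaustive and exact; shrinking `nprobe` trades recall for speed, which is the knob a learned partitioning aims to improve on.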
arXiv Detail & Related papers (2022-06-16T11:17:03Z)
- Neighborhood Contrastive Learning for Novel Class Discovery [79.14767688903028]
We build a new framework, named Neighborhood Contrastive Learning, to learn discriminative representations that are important to clustering performance.
We experimentally demonstrate that these two ingredients significantly contribute to clustering performance and lead our model to outperform state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2021-06-20T17:34:55Z)
- Few-shot Action Recognition with Prototype-centered Attentive Learning [88.10852114988829]
We propose a Prototype-centered Attentive Learning (PAL) model composed of two novel components.
First, a prototype-centered contrastive learning loss is introduced to complement the conventional query-centered learning objective.
Second, PAL integrates an attentive hybrid learning mechanism that can minimize the negative impacts of outliers.
arXiv Detail & Related papers (2021-01-20T11:48:12Z)
- Dual-Refinement: Joint Label and Feature Refinement for Unsupervised Domain Adaptive Person Re-Identification [51.98150752331922]
Unsupervised domain adaptive (UDA) person re-identification (re-ID) is a challenging task due to the absence of labels for the target-domain data.
We propose a novel approach, called Dual-Refinement, that jointly refines pseudo labels at the off-line clustering phase and features at the on-line training phase.
Our method outperforms the state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-12-26T07:35:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.