Weakly Supervised Representation Learning with Coarse Labels
- URL: http://arxiv.org/abs/2005.09681v3
- Date: Tue, 24 Aug 2021 04:33:45 GMT
- Title: Weakly Supervised Representation Learning with Coarse Labels
- Authors: Yuanhong Xu, Qi Qian, Hao Li, Rong Jin, Juhua Hu
- Abstract summary: Deep learning can learn discriminative patterns from raw materials directly in a task-dependent manner.
For some real-world applications, it is too expensive to collect the task-specific labels, such as visual search in online shopping.
We propose an algorithm to learn the fine-grained patterns for the target task when only its coarse-class labels are available.
- Score: 29.67549798642795
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the development of computational power and techniques for data
collection, deep learning demonstrates superior performance over most
existing algorithms on visual benchmark data sets. Many efforts have been
devoted to studying the mechanism of deep learning. One important observation
is that deep learning can learn the discriminative patterns from raw materials
directly in a task-dependent manner. Therefore, the representations obtained by
deep learning outperform hand-crafted features significantly. However, for some
real-world applications, it is too expensive to collect the task-specific
labels, such as visual search in online shopping. Compared to the limited
availability of these task-specific labels, their coarse-class labels are much
more affordable, but representations learned from them can be suboptimal for
the target task. To mitigate this challenge, we propose an algorithm to learn
the fine-grained patterns for the target task, when only its coarse-class
labels are available. More importantly, we provide a theoretical guarantee for
this approach. Extensive experiments on real-world data sets demonstrate that the
proposed method can significantly improve the performance of learned
representations on the target task, when only coarse-class information is
available for training. Code is available at
https://github.com/idstcv/CoIns.
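The high-level recipe, retaining fine-grained structure by pairing coarse-label supervision with an instance-level objective, can be pictured with a short sketch. This is a minimal illustration assuming a SimCLR-style instance-discrimination term; the loss weight `lam`, temperature `tau`, and the use of a single shared embedding are illustrative assumptions, not the authors' exact formulation:

```python
import torch
import torch.nn.functional as F

def combined_loss(backbone, classifier, view1, view2, coarse_labels,
                  lam=1.0, tau=0.1):
    """Coarse-label cross-entropy plus an instance-level contrastive term.

    view1, view2: two augmentations of the same image batch (B, C, H, W).
    coarse_labels: coarse-class targets (B,). lam and tau are assumed values.
    """
    z1 = F.normalize(backbone(view1), dim=1)   # (B, D) embeddings
    z2 = F.normalize(backbone(view2), dim=1)

    # Supervised term: classify coarse classes from one view's embedding.
    ce = F.cross_entropy(classifier(z1), coarse_labels)

    # Instance term: each image must match its own second view (SimCLR-style),
    # which keeps instances within a coarse class separable.
    logits = z1 @ z2.t() / tau                 # (B, B) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    inst = F.cross_entropy(logits, targets)

    return ce + lam * inst
```

The intuition is that the cross-entropy term groups examples by coarse class while the instance term keeps individual examples separable within each coarse class, preserving fine-grained neighbourhood structure for the target task.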
Related papers
- Improving Deep Representation Learning via Auxiliary Learnable Target Coding [69.79343510578877]
This paper introduces a novel learnable target coding as an auxiliary regularization of deep representation learning.
Specifically, a margin-based triplet loss and a correlation consistency loss on the proposed target codes are designed to encourage more discriminative representations.
arXiv Detail & Related papers (2023-05-30T01:38:54Z)
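For reference, a margin-based triplet loss over class-wise target codes, as mentioned in the summary above, can be sketched as follows. Treating the codes as learnable class embeddings and the margin value are assumptions, not the paper's exact design:

```python
import torch
import torch.nn.functional as F

def target_code_triplet_loss(features, codes, labels, margin=0.5):
    """Pull each feature toward its class's target code and push it away
    from the nearest other class's code by at least `margin`.

    features: (B, D) network outputs; codes: (C, D) learnable target codes;
    labels: (B,) class indices. The margin is an illustrative assumption.
    """
    features = F.normalize(features, dim=1)
    codes = F.normalize(codes, dim=1)
    dists = torch.cdist(features, codes)        # (B, C) pairwise distances
    pos = dists.gather(1, labels[:, None])      # distance to own class code
    # Mask out the positive class, then take the hardest negative code.
    dists = dists.scatter(1, labels[:, None], float("inf"))
    neg = dists.min(dim=1, keepdim=True).values
    return F.relu(pos - neg + margin).mean()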
- Improving Model Training via Self-learned Label Representations [5.969349640156469]
We show that more sophisticated label representations are better for classification than the usual one-hot encoding.
We propose the Learning with Adaptive Labels (LwAL) algorithm, which simultaneously learns the label representation while training for the classification task.
Our algorithm introduces negligible additional parameters and has a minimal computational overhead.
arXiv Detail & Related papers (2022-09-09T21:10:43Z)
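The core idea above, learning label representations jointly with the classifier, can be sketched as follows; the cosine-similarity head and the dimensions are illustrative assumptions rather than LwAL's exact formulation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveLabelHead(nn.Module):
    """Classify by similarity to learnable label embeddings instead of
    fixed one-hot targets. Dimensions here are illustrative."""

    def __init__(self, feat_dim=128, num_classes=10):
        super().__init__()
        self.label_embs = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features):
        # Cosine similarity between features and each label embedding.
        f = F.normalize(features, dim=1)
        w = F.normalize(self.label_embs, dim=1)
        return f @ w.t()  # (B, num_classes) logits

# The label embeddings receive gradients from the usual cross-entropy,
# so they adapt during training:
head = AdaptiveLabelHead()
logits = head(torch.randn(4, 128))
loss = F.cross_entropy(logits, torch.tensor([0, 1, 2, 3]))
loss.backward()
```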
- What Makes Good Contrastive Learning on Small-Scale Wearable-based Tasks? [59.51457877578138]
We study contrastive learning on the wearable-based activity recognition task.
This paper presents an open-source PyTorch library, CL-HAR, which can serve as a practical tool for researchers.
arXiv Detail & Related papers (2022-02-12T06:10:15Z)
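Contrastive pipelines of the kind benchmarked in CL-HAR are typically built on an NT-Xent objective over two augmented views of each sensor window; the sketch below shows that generic loss, not code from the library itself:

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent (SimCLR) loss over two views z1, z2 of shape (B, D).
    The temperature is an illustrative default."""
    B = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2B, D)
    sim = z @ z.t() / tau                                # (2B, 2B)
    sim.fill_diagonal_(float("-inf"))                    # exclude self-pairs
    # Each sample's positive is its other view: i <-> i + B (mod 2B).
    targets = torch.cat([torch.arange(B) + B, torch.arange(B)])
    return F.cross_entropy(sim, targets.to(z.device))
```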
- Investigating Power laws in Deep Representation Learning [4.996066540156903]
We propose a framework to evaluate the quality of representations in unlabelled datasets.
We estimate the coefficient of the power law, $\alpha$, across three key attributes which influence representation learning.
Notably, $\alpha$ is computable from the representations without knowledge of any labels, thereby offering a framework to evaluate the quality of representations in unlabelled datasets.
arXiv Detail & Related papers (2022-02-11T18:11:32Z)
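A standard way to obtain such a coefficient is to fit a line to the log-log eigenspectrum of the feature covariance; the sketch below follows that generic recipe and is an assumption about the estimator, not the paper's exact procedure:

```python
import numpy as np

def powerlaw_alpha(features):
    """Estimate the decay exponent alpha of the eigenspectrum of the
    feature covariance, assuming lambda_i ~ i^(-alpha).

    features: (N, D) array of representations; no labels required.
    """
    x = features - features.mean(axis=0)            # center
    cov = x.T @ x / x.shape[0]                      # (D, D) covariance
    eig = np.linalg.eigvalsh(cov)[::-1]             # descending eigenvalues
    eig = eig[eig > 1e-12]                          # drop numerical zeros
    ranks = np.arange(1, eig.size + 1)
    # Slope of log(eigenvalue) vs. log(rank) gives -alpha.
    slope, _ = np.polyfit(np.log(ranks), np.log(eig), deg=1)
    return -slope
```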
- Learning Representations for Pixel-based Control: What Matters and Why? [22.177382138487566]
We present a simple baseline approach that can learn meaningful representations with no metric-based learning, no data augmentations, no world-model learning, and no contrastive learning.
Our results show that finer categorization of benchmarks on the basis of characteristics like density of reward, planning horizon of the problem, presence of task-irrelevant components, etc., is crucial in evaluating algorithms.
arXiv Detail & Related papers (2021-11-15T14:16:28Z)
- Understanding the World Through Action [91.3755431537592]
I will argue that a general, principled, and powerful framework for utilizing unlabeled data can be derived from reinforcement learning.
I will discuss how such a procedure is more closely aligned with potential downstream tasks.
arXiv Detail & Related papers (2021-10-24T22:33:52Z)
- Automated Self-Supervised Learning for Graphs [37.14382990139527]
This work aims to investigate how to automatically leverage multiple pretext tasks effectively.
We make use of a key property of many real-world graphs, i.e., homophily, as guidance to effectively search over various self-supervised pretext tasks.
We propose the AutoSSL framework which can automatically search over combinations of various self-supervised tasks.
arXiv Detail & Related papers (2021-06-10T03:09:20Z)
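Combining multiple pretext tasks can be pictured as a learnable weighting over their losses; the sketch below shows only that generic structure. In AutoSSL the weights are tuned against a pseudo-homophily proxy rather than by backpropagation, so the softmax weighting here is an illustrative assumption:

```python
import torch
import torch.nn as nn

class WeightedPretextLoss(nn.Module):
    """Combine several self-supervised pretext losses with learnable
    weights. A generic structure, not AutoSSL's search strategy."""

    def __init__(self, num_tasks):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses):
        # task_losses: list of scalar losses, one per pretext task.
        w = torch.softmax(self.logits, dim=0)
        return sum(wi * li for wi, li in zip(w, task_losses))
```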
- Low-Regret Active Learning [64.36270166907788]
We develop an online learning algorithm for identifying unlabeled data points that are most informative for training.
At the core of our work is an efficient algorithm for sleeping experts that is tailored to achieve low regret on predictable (easy) instances.
arXiv Detail & Related papers (2021-04-06T22:53:45Z)
- Learning Purified Feature Representations from Task-irrelevant Labels [18.967445416679624]
We propose a novel learning framework called PurifiedLearning to exploit task-irrelevant features extracted from task-irrelevant labels.
Our work is built on solid theoretical analysis and extensive experiments, which demonstrate the effectiveness of PurifiedLearning.
arXiv Detail & Related papers (2021-02-22T12:50:49Z)
- Grafit: Learning fine-grained image representations with coarse labels [114.17782143848315]
This paper tackles the problem of learning a finer representation than the one provided by training labels.
By jointly leveraging the coarse labels and the underlying fine-grained latent space, it significantly improves the accuracy of category-level retrieval methods.
arXiv Detail & Related papers (2020-11-25T19:06:26Z)
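Category-level retrieval, the evaluation improved by Grafit, amounts to a cosine nearest-neighbour search in the learned embedding space; the following generic sketch illustrates top-1 retrieval accuracy and is not Grafit's code:

```python
import torch
import torch.nn.functional as F

def retrieval_at_1(query_feats, query_labels, gallery_feats, gallery_labels):
    """Top-1 category-level retrieval accuracy via cosine similarity.

    query_feats: (Q, D), gallery_feats: (G, D); labels are fine-grained
    class indices. A generic evaluation sketch.
    """
    q = F.normalize(query_feats, dim=1)
    g = F.normalize(gallery_feats, dim=1)
    nn_idx = (q @ g.t()).argmax(dim=1)          # nearest gallery item
    return (gallery_labels[nn_idx] == query_labels).float().mean().item()
```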
- Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need? [72.00712736992618]
We show that a simple baseline, learning a supervised or self-supervised representation on the meta-training set, outperforms state-of-the-art few-shot learning methods.
An additional boost can be achieved through the use of self-distillation.
We believe that our findings motivate a rethinking of few-shot image classification benchmarks and the associated role of meta-learning algorithms.
arXiv Detail & Related papers (2020-03-25T17:58:42Z)
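The baseline reduces to freezing a pretrained embedding and fitting a linear classifier on the labelled support examples; below is a minimal sketch where `embed` is an assumed feature extractor returning NumPy arrays. The reported self-distillation boost is omitted:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def few_shot_predict(embed, support_x, support_y, query_x):
    """Fit a linear classifier on frozen embeddings of the support set,
    then classify the query set. `embed` is an assumed function mapping
    raw inputs to feature vectors (NumPy arrays)."""
    zs = embed(support_x)                     # (n_support, D)
    zq = embed(query_x)                       # (n_query, D)
    clf = LogisticRegression(max_iter=1000).fit(zs, support_y)
    return clf.predict(zq)
```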
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.