SSDL: Self-Supervised Dictionary Learning
- URL: http://arxiv.org/abs/2112.01790v1
- Date: Fri, 3 Dec 2021 08:55:08 GMT
- Title: SSDL: Self-Supervised Dictionary Learning
- Authors: Shuai Shao, Lei Xing, Wei Yu, Rui Xu, Yanjiang Wang, Baodi Liu
- Abstract summary: We propose a Self-Supervised Dictionary Learning (SSDL) framework to address this challenge.
Specifically, we first design a $p$-Laplacian Attention Hypergraph Learning block as the pretext task to generate pseudo soft labels for DL.
Then, we adopt the pseudo labels to train a dictionary with a primary label-embedded DL method.
- Score: 20.925371262076744
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The label-embedded dictionary learning (DL) algorithms generate influential
dictionaries by introducing discriminative information. However, a limitation
remains: all label-embedded DL methods depend on annotated labels, so they
achieve ideal performance only in supervised learning; in semi-supervised and
unsupervised settings, labels are too scarce for these methods to remain
effective. Inspired by the concept of self-supervised learning (e.g., setting
the pretext task to generate a universal model for the downstream task), we
propose a Self-Supervised Dictionary Learning (SSDL) framework to address this
challenge. Specifically, we first design a $p$-Laplacian Attention Hypergraph
Learning (pAHL) block as the pretext task to generate pseudo soft labels for
DL. Then, we adopt the pseudo labels to train a dictionary with a primary
label-embedded DL method. We evaluate our SSDL on two human activity
recognition datasets. Comparisons with other state-of-the-art methods
demonstrate the effectiveness of SSDL.
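To make the two-stage idea concrete, here is a minimal numerical sketch (hypothetical names throughout, not the authors' code): stage one stands in for the pAHL pretext with plain graph-based label propagation rather than a $p$-Laplacian attention hypergraph, and stage two solves a generic label-embedded DL objective $\min \|X-DA\|_F^2 + \alpha\|Y-WA\|_F^2 + \beta\|A\|_F^2$ by alternating least squares.

```python
# Sketch of an SSDL-style pipeline under the assumptions stated above.
import numpy as np

def pseudo_soft_labels(X, Y0, k=10, gamma=0.9):
    """Stand-in pretext: propagate the few known labels Y0 (n x c, zero rows
    = unlabeled) over a kNN similarity graph; returns soft labels for all."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (d2.mean() + 1e-12))
    idx = np.argsort(-W, axis=1)[:, 1:k + 1]          # k nearest neighbours
    A = np.zeros_like(W)
    rows = np.arange(n)[:, None]
    A[rows, idx] = W[rows, idx]
    A = 0.5 * (A + A.T)                               # symmetrise the graph
    Dm = np.diag(1.0 / np.sqrt(A.sum(1) + 1e-12))
    S = Dm @ A @ Dm                                   # normalized adjacency
    F = np.linalg.solve(np.eye(n) - gamma * S, Y0)    # closed-form propagation
    F = np.clip(F, 1e-12, None)
    return F / F.sum(1, keepdims=True)                # rows sum to one

def label_embedded_dl(X, Y, n_atoms=32, alpha=1.0, beta=0.1, iters=50):
    """Downstream task: alternate over codes A, dictionary D and label
    projector W. X is d x n (columns = samples), Y is c x n soft labels."""
    rng = np.random.default_rng(0)
    d = X.shape[0]
    c = Y.shape[0]
    D = rng.standard_normal((d, n_atoms))
    W = rng.standard_normal((c, n_atoms))
    for _ in range(iters):
        G = D.T @ D + alpha * (W.T @ W) + beta * np.eye(n_atoms)
        A = np.linalg.solve(G, D.T @ X + alpha * (W.T @ Y))
        AAt = A @ A.T + 1e-6 * np.eye(n_atoms)
        D = np.linalg.solve(AAt, A @ X.T).T           # D = X A^T (AA^T)^-1
        W = np.linalg.solve(AAt, A @ Y.T).T           # W = Y A^T (AA^T)^-1
    return D, W, A
```

With samples as rows of X and a partially filled one-hot matrix Y0, `D, W, A = label_embedded_dl(X.T, pseudo_soft_labels(X, Y0).T)` trains a dictionary entirely from propagated soft labels; swapping the propagation step for the pAHL block would recover the paper's pipeline.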
Related papers
- Robust Representation Learning for Unreliable Partial Label Learning [86.909511808373]
Partial Label Learning (PLL) is a type of weakly supervised learning where each training instance is assigned a set of candidate labels, but only one label is the ground-truth.
When the candidate set itself may be noisy or unreliable, the problem is known as Unreliable Partial Label Learning (UPLL), which introduces additional complexity due to the inherent unreliability and ambiguity of partial labels.
We propose the Unreliability-Robust Representation Learning framework (URRL) that leverages unreliability-robust contrastive learning to help the model fortify against unreliable partial labels effectively.
arXiv Detail & Related papers (2023-08-31T13:37:28Z)
- Description-Enhanced Label Embedding Contrastive Learning for Text Classification [65.01077813330559]
The method incorporates Self-Supervised Learning (SSL) into the model learning process and designs a novel self-supervised Relation of Relation (R2) classification task.
It trains a Relation of Relation Learning Network (R2-Net) for text classification, in which text classification and R2 classification are treated as joint optimization targets.
It also exploits external knowledge from WordNet to obtain multi-aspect descriptions for label semantic learning.
arXiv Detail & Related papers (2023-06-15T02:19:34Z)
- Decoupling Pseudo Label Disambiguation and Representation Learning for Generalized Intent Discovery [24.45800271294178]
Key challenges lie in pseudo label disambiguation and representation learning.
We propose a decoupled prototype learning framework (DPL) to decouple pseudo label disambiguation and representation learning.
Experiments and analysis on three benchmark datasets show the effectiveness of our method; a generic sketch of prototype-based pseudo-labeling follows this entry.
arXiv Detail & Related papers (2023-05-28T12:01:34Z)
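As a rough illustration of what prototype-based pseudo-label disambiguation can look like (a generic sketch under our own assumptions, not the authors' DPL): pseudo labels are read off from similarities to class prototypes, and the prototypes are refreshed by an exponential moving average that is decoupled from whatever loss currently trains the encoder.

```python
# Generic prototype-based pseudo-labeling sketch (hypothetical, not DPL).
import numpy as np

def disambiguate(features, prototypes, temp=0.1):
    """Soft pseudo labels from cosine similarity to class prototypes."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    logits = f @ p.T / temp
    logits -= logits.max(1, keepdims=True)            # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum(1, keepdims=True)

def update_prototypes(prototypes, features, pseudo, momentum=0.99):
    """EMA update: each prototype drifts toward its assigned samples."""
    hard = pseudo.argmax(1)
    for c in range(prototypes.shape[0]):
        mask = hard == c
        if mask.any():
            prototypes[c] = (momentum * prototypes[c]
                             + (1 - momentum) * features[mask].mean(0))
    return prototypes
```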
- PIEClass: Weakly-Supervised Text Classification with Prompting and Noise-Robust Iterative Ensemble Training [42.013879670590214]
Weakly-supervised text classification trains a classifier using the label name of each target class as the only supervision.
We propose a new method, PIEClass, consisting of two modules: pseudo-label acquisition via zero-shot prompting of pre-trained language models, and noise-robust iterative ensemble training.
PIEClass achieves overall better performance than existing strong baselines on seven benchmark datasets.
arXiv Detail & Related papers (2023-05-23T06:19:14Z)
- Exploring Structured Semantic Prior for Multi Label Recognition with Incomplete Labels [60.675714333081466]
Multi-label recognition (MLR) with incomplete labels is very challenging.
Recent works strive to explore the image-to-label correspondence in the vision-language model, i.e., CLIP, to compensate for insufficient annotations.
We advocate remedying the deficiency of label supervision for the MLR with incomplete labels by deriving a structured semantic prior.
arXiv Detail & Related papers (2023-03-23T12:39:20Z)
- Disambiguation of Company names via Deep Recurrent Networks [101.90357454833845]
We propose a Siamese LSTM Network approach to extract -- via supervised learning -- an embedding of company name strings.
We analyse how an Active Learning approach to prioritise the samples to be labelled leads to a more efficient overall learning pipeline; a minimal Siamese-embedding sketch follows this entry.
arXiv Detail & Related papers (2023-03-07T15:07:57Z)
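A minimal character-level Siamese encoder with a contrastive loss, in the spirit of the entry above; the vocabulary, hidden sizes, and margin are our assumptions, not the paper's.

```python
# Hypothetical Siamese LSTM sketch for company-name matching.
import torch
import torch.nn as nn

class NameEncoder(nn.Module):
    def __init__(self, vocab_size=128, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def forward(self, char_ids):                 # (batch, seq_len) int64
        _, (h, _) = self.lstm(self.embed(char_ids))
        return h[-1]                             # (batch, hidden_dim)

def contrastive_loss(z1, z2, same, margin=1.0):
    """Pull embeddings of the same company together, push others apart."""
    d = torch.nn.functional.pairwise_distance(z1, z2)
    return (same * d.pow(2)
            + (1 - same) * (margin - d).clamp(min=0).pow(2)).mean()

encoder = NameEncoder()
a = torch.randint(0, 128, (4, 20))               # stand-in char-id batches
b = torch.randint(0, 128, (4, 20))
same = torch.tensor([1., 0., 1., 0.])            # pair labels: same entity?
loss = contrastive_loss(encoder(a), encoder(b), same)
loss.backward()
```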
- Learning with Partial Labels from Semi-supervised Perspective [28.735185883881172]
Partial Label (PL) learning refers to the task of learning from partially labeled data.
We propose a novel PL learning method, namely Partial Label learning with Semi-Supervised Perspective (PLSP).
PLSP significantly outperforms the existing PL baseline methods, especially on high ambiguity levels.
arXiv Detail & Related papers (2022-11-24T15:12:16Z)
- Robotic Skill Acquisition via Instruction Augmentation with Vision-Language Models [70.82705830137708]
We introduce Data-driven Instruction Augmentation for Language-conditioned control (DIAL).
We utilize semi-language labels, leveraging the semantic understanding of CLIP to propagate knowledge onto large datasets of unlabelled demonstration data.
DIAL enables imitation learning policies to acquire new capabilities and generalize to 60 novel instructions unseen in the original dataset; a hedged sketch of the CLIP relabeling step follows this entry.
arXiv Detail & Related papers (2022-11-21T18:56:00Z)
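The CLIP-based relabeling step described in the DIAL entry can be sketched as follows; the instruction pool and single-frame scoring are illustrative assumptions, and the real system operates on demonstration trajectories rather than lone images.

```python
# Hedged sketch of CLIP-based instruction relabeling (not the DIAL code).
import torch
import clip                       # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

candidate_instructions = [        # hypothetical instruction pool
    "pick up the red block",
    "open the top drawer",
    "push the mug to the left",
]

def relabel(frame: Image.Image) -> str:
    """Return the candidate instruction CLIP scores highest for this frame."""
    image = preprocess(frame).unsqueeze(0).to(device)
    text = clip.tokenize(candidate_instructions).to(device)
    with torch.no_grad():
        logits_per_image, _ = model(image, text)
    return candidate_instructions[logits_per_image.argmax().item()]
```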
- LST: Lexicon-Guided Self-Training for Few-Shot Text Classification [3.7277082975620806]
We introduce LST, a simple self-training method that uses a lexicon to guide the pseudo-labeling mechanism.
We demonstrate that this simple yet well-crafted lexical knowledge achieves 1.0-2.0% better performance with 30 labeled samples per class across five benchmark datasets; a toy sketch of the idea follows this entry.
arXiv Detail & Related papers (2022-02-05T14:33:12Z)
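A toy sketch of lexicon-guided pseudo-labeling in the spirit of LST; the lexicon, the confidence threshold, and the abstention rule are all illustrative assumptions.

```python
# Toy lexicon-guided pseudo-labeling (hypothetical seed words and threshold).
lexicon = {
    "sports": {"game", "team", "score"},
    "politics": {"election", "senate", "policy"},
}

def guided_pseudo_label(text, model_probs, threshold=0.8):
    """model_probs: dict class -> probability from the current classifier.
    Accept the prediction only when it is confident AND lexicon-supported."""
    label = max(model_probs, key=model_probs.get)
    tokens = set(text.lower().split())
    if model_probs[label] >= threshold and tokens & lexicon[label]:
        return label
    return None                  # abstain; sample stays unlabeled this round
```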
- SLADE: A Self-Training Framework For Distance Metric Learning [75.54078592084217]
We present a self-training framework, SLADE, to improve retrieval performance by leveraging additional unlabeled data.
We first train a teacher model on the labeled data and use it to generate pseudo labels for the unlabeled data.
We then train a student model on both the labels and the pseudo labels to generate final feature embeddings; a minimal sketch of this loop follows the entry.
arXiv Detail & Related papers (2020-11-20T08:26:10Z)
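The teacher/student loop in the SLADE entry reduces to a short sketch (hypothetical `train_fn` and datasets; SLADE itself contains components beyond this loop):

```python
# Minimal teacher/student pseudo-labeling loop (not the SLADE code).
import torch

def self_train(teacher, student, labeled, unlabeled, train_fn):
    """labeled: list of (tensor, int); unlabeled: list of tensors.
    train_fn(model, pairs) runs one supervised training stage."""
    train_fn(teacher, labeled)                     # 1. fit teacher on labels
    with torch.no_grad():                          # 2. pseudo-label the rest
        pseudo = [(x, teacher(x.unsqueeze(0)).argmax(1).item())
                  for x in unlabeled]
    train_fn(student, labeled + pseudo)            # 3. fit student on both
    return student
```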
- DLDL: Dynamic Label Dictionary Learning via Hypergraph Regularization [17.34373273007931]
We propose a Dynamic Label Dictionary Learning (DLDL) algorithm to generate the soft label matrix for unlabeled data.
Specifically, we employ hypergraph manifold regularization to keep the relations among original data, transformed data, and soft labels consistent; a sketch of the standard hypergraph Laplacian follows this entry.
arXiv Detail & Related papers (2020-10-23T14:07:07Z)
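Both DLDL and the SSDL paper above lean on hypergraph regularization, so a sketch of the standard normalized hypergraph Laplacian may help; building hyperedges from kNN neighborhoods and using unit weights are assumptions, not either paper's exact recipe.

```python
# Standard Zhou-style hypergraph Laplacian with kNN hyperedges (assumed setup).
import numpy as np

def knn_hypergraph_laplacian(X, k=8):
    """Each sample spawns one hyperedge containing itself and its k nearest
    neighbours. Returns L = I - Dv^-1/2 H W De^-1 H^T Dv^-1/2."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    nbrs = np.argsort(d2, axis=1)[:, :k + 1]          # self + k nearest
    H = np.zeros((n, n))                              # vertices x hyperedges
    for e in range(n):
        H[nbrs[e], e] = 1.0
    w = np.ones(n)                                    # unit hyperedge weights
    Dv = (H * w).sum(1)                               # vertex degrees
    De = H.sum(0)                                     # hyperedge degrees
    Dv_inv = np.diag(1.0 / np.sqrt(Dv))
    Theta = Dv_inv @ H @ np.diag(w / De) @ H.T @ Dv_inv
    return np.eye(n) - Theta
```

The regularizer trace($F^T L F$) then penalizes soft-label matrices $F$ that vary across samples sharing many hyperedges, which is how the "keep relations consistent" constraint is enforced in practice.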
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.