A Comprehensive Approach to Unsupervised Embedding Learning based on AND Algorithm
- URL: http://arxiv.org/abs/2002.12158v1
- Date: Wed, 26 Feb 2020 13:22:04 GMT
- Title: A Comprehensive Approach to Unsupervised Embedding Learning based on AND Algorithm
- Authors: Sungwon Han, Yizhan Xu, Sungwon Park, Meeyoung Cha, Cheng-Te Li
- Abstract summary: Unsupervised embedding learning aims to extract good representations from data without the need for any manual labels.
This paper proposes a new unsupervised embedding approach, called Super-AND, which extends the current state-of-the-art model.
Super-AND outperforms all existing approaches and achieves an accuracy of 89.2% on the image classification task for CIFAR-10.
- Score: 18.670975246545208
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised embedding learning aims to extract good representations from data without any manual labels, whose collection remains a critical bottleneck for many supervised learning tasks. This paper proposes a new unsupervised embedding approach, called Super-AND, which extends the current state-of-the-art model. Super-AND introduces a set of losses that gather similar samples together in low-density regions of the embedding space while keeping the learned features invariant to data augmentation. Super-AND outperforms all existing approaches and achieves 89.2% accuracy on the CIFAR-10 image classification task. We discuss the practical implications of this method in assisting semi-supervised tasks.
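
To make the described objective concrete, below is a minimal PyTorch sketch of how losses of this kind could be combined: an instance-discrimination term standing in for AND's neighborhood discovery, an entropy term that sharpens each sample's similarity distribution over its neighbors, and an augmentation-invariance term tying an augmented view to its original anchor. The function name super_and_style_loss, the loss weights, and the in-batch formulation are illustrative assumptions; the actual Super-AND implementation relies on a memory bank and a progressive neighborhood-discovery schedule that this sketch omits.

import torch
import torch.nn.functional as F

def super_and_style_loss(z, z_aug, temperature=0.1, w_entropy=1.0, w_aug=1.0):
    # z, z_aug: (batch, dim) embeddings of the original and augmented views.
    z = F.normalize(z, dim=1)
    z_aug = F.normalize(z_aug, dim=1)
    batch = z.size(0)
    targets = torch.arange(batch, device=z.device)

    # (1) Instance-discrimination term: each sample should match itself among
    #     all samples in the batch (a stand-in for AND's neighborhood loss).
    logits = z @ z.t() / temperature
    loss_instance = F.cross_entropy(logits, targets)

    # (2) Entropy term: sharpen each sample's similarity distribution over the
    #     *other* samples, pulling it towards its closest neighbors.
    eye = torch.eye(batch, dtype=torch.bool, device=z.device)
    p = logits.masked_fill(eye, float('-inf')).softmax(dim=1)
    loss_entropy = -(p * p.clamp_min(1e-12).log()).sum(dim=1).mean()

    # (3) Augmentation-invariance term: the augmented view should select its
    #     own original as the most similar anchor.
    logits_aug = z_aug @ z.t() / temperature
    loss_aug = F.cross_entropy(logits_aug, targets)

    return loss_instance + w_entropy * loss_entropy + w_aug * loss_aug

In training, z and z_aug would come from the same encoder applied to an image and an augmented copy of it, and the summed loss would be minimized as usual.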
Related papers
- Exploiting Fine-Grained Prototype Distribution for Boosting Unsupervised Class Incremental Learning [13.17775851211893]
This paper explores the more challenging problem of unsupervised class incremental learning (UCIL).
The essence of addressing this problem lies in effectively capturing comprehensive feature representations and discovering unknown novel classes.
We propose a strategy to minimize overlap between novel and existing classes, thereby preserving historical knowledge and mitigating the phenomenon of catastrophic forgetting.
arXiv Detail & Related papers (2024-08-19T14:38:27Z)
- TACLE: Task and Class-aware Exemplar-free Semi-supervised Class Incremental Learning [16.734025446561695]
We propose a novel TACLE framework to address the problem of exemplar-free semi-supervised class incremental learning.
In this scenario, at each new task, the model has to learn new classes from both labeled and unlabeled data.
In addition to leveraging the capabilities of pre-trained models, TACLE proposes a novel task-adaptive threshold.
arXiv Detail & Related papers (2024-07-10T20:46:35Z)
- Label-Agnostic Forgetting: A Supervision-Free Unlearning in Deep Models [7.742594744641462]
Machine unlearning aims to remove information derived from forgotten data while preserving that of the remaining dataset in a well-trained model.
We propose a supervision-free unlearning approach that operates without the need for labels during the unlearning process.
arXiv Detail & Related papers (2024-03-31T00:29:00Z)
- Few-Shot Point Cloud Semantic Segmentation via Contrastive Self-Supervision and Multi-Resolution Attention [6.350163959194903]
We propose a contrastive self-supervision framework for few-shot learning pre-training.
Specifically, we implement a novel contrastive learning approach with a learnable augmentor for a 3D point cloud.
We develop a multi-resolution attention module using both the nearest and farthest points to extract the local and global point information more effectively.
arXiv Detail & Related papers (2023-02-21T07:59:31Z)
- Pixel is All You Need: Adversarial Trajectory-Ensemble Active Learning for Salient Object Detection [40.97103355628434]
It is unclear whether a saliency model trained with weakly-supervised data can achieve performance equivalent to that of its fully-supervised version.
We propose a novel yet effective adversarial trajectory-ensemble active learning (ATAL) approach.
Experimental results show that ATAL can find such a point-labeled dataset, on which a saliency model achieves 97%-99% of the performance of its fully-supervised version with only ten annotated points per image.
arXiv Detail & Related papers (2022-12-13T11:18:08Z)
- An Embarrassingly Simple Approach to Semi-Supervised Few-Shot Learning [58.59343434538218]
We propose a simple but quite effective approach that predicts accurate negative pseudo-labels for unlabeled data from an indirect learning perspective (a minimal sketch of this negative pseudo-labeling idea appears after this list).
Our approach can be implemented in just a few lines of code using only off-the-shelf operations.
arXiv Detail & Related papers (2022-09-28T02:11:34Z)
- Rethinking Clustering-Based Pseudo-Labeling for Unsupervised Meta-Learning [146.11600461034746]
CACTUs, a method for unsupervised meta-learning, is a clustering-based approach with pseudo-labeling (a minimal sketch of clustering-based pseudo-labeling appears after this list).
This approach is model-agnostic and can be combined with supervised algorithms to learn from unlabeled data.
We prove that the core reason for this is the lack of a clustering-friendly property in the embedding space.
arXiv Detail & Related papers (2022-09-27T19:04:36Z)
- Unsupervised Domain Adaptive Salient Object Detection Through Uncertainty-Aware Pseudo-Label Learning [104.00026716576546]
We propose to learn saliency from synthetic but clean labels, which naturally has higher pixel-labeling quality without the effort of manual annotations.
We show that our proposed method outperforms the existing state-of-the-art deep unsupervised SOD methods on several benchmark datasets.
arXiv Detail & Related papers (2022-02-26T16:03:55Z)
- Towards Reducing Labeling Cost in Deep Object Detection [61.010693873330446]
We propose a unified framework for active learning that considers both the uncertainty and the robustness of the detector.
Our method is able to pseudo-label the very confident predictions, suppressing a potential distribution drift.
arXiv Detail & Related papers (2021-06-22T16:53:09Z)
- WSSOD: A New Pipeline for Weakly- and Semi-Supervised Object Detection [75.80075054706079]
We propose a weakly- and semi-supervised object detection framework (WSSOD).
An agent detector is first trained on a joint dataset and then used to predict pseudo bounding boxes on weakly-annotated images.
The proposed framework demonstrates remarkable performance on the PASCAL-VOC and MSCOCO benchmarks, comparable to that obtained in fully-supervised settings.
arXiv Detail & Related papers (2021-05-21T11:58:50Z)
- Can Semantic Labels Assist Self-Supervised Visual Representation Learning? [194.1681088693248]
We present a new algorithm named Supervised Contrastive Adjustment in Neighborhood (SCAN).
In a series of downstream tasks, SCAN achieves superior performance compared to previous fully-supervised and self-supervised methods.
Our study reveals that semantic labels are useful in assisting self-supervised methods, opening a new direction for the community.
arXiv Detail & Related papers (2020-11-17T13:25:00Z)
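
As noted in the semi-supervised few-shot learning entry above, here is a minimal sketch of the negative pseudo-labeling idea: for each unlabeled sample, the classes the model rates least likely are treated as negative labels and their predicted probabilities are pushed towards zero. The function name, the choice of k, and the exact loss form are assumptions made for illustration and are not taken from that paper.

import torch

def negative_pseudo_label_loss(logits, k=3, eps=1e-6):
    # logits: (batch, num_classes) predictions on unlabeled data.
    probs = logits.softmax(dim=1)
    # Indices of the k classes the model considers least likely per sample.
    neg_idx = probs.topk(k, dim=1, largest=False).indices
    neg_probs = probs.gather(1, neg_idx)
    # Negative-learning objective: -log(1 - p) for each negative class.
    return -(1.0 - neg_probs).clamp_min(eps).log().mean()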
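
Likewise, the clustering-based pseudo-labeling referenced in the CACTUs entry can be summarized in a few lines: cluster unsupervised embeddings and treat the cluster assignments as pseudo-labels from which meta-learning tasks are then built. The helper below is a minimal sketch assuming scikit-learn's KMeans; the number of clusters and the embedding source are illustrative choices, not details from that paper.

from sklearn.cluster import KMeans

def cluster_pseudo_labels(embeddings, n_clusters=50, seed=0):
    # embeddings: (n_samples, dim) features from an unsupervised encoder.
    km = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10)
    return km.fit_predict(embeddings)  # one pseudo-label per sample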