LAMDA-SSL: Semi-Supervised Learning in Python
- URL: http://arxiv.org/abs/2208.04610v2
- Date: Mon, 22 May 2023 08:19:32 GMT
- Title: LAMDA-SSL: Semi-Supervised Learning in Python
- Authors: Lin-Han Jia, Lan-Zhe Guo, Zhi Zhou, Yu-Feng Li
- Abstract summary: LAMDA-SSL is open-sourced on GitHub and its detailed usage documentation is available at https://ygzwqzd.github.io/LAMDA-SSL/.
This documentation greatly reduces the cost of familiarizing users with the LAMDA-SSL toolkit and SSL algorithms.
- Score: 56.14115592683035
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: LAMDA-SSL is open-sourced on GitHub and its detailed usage documentation is
available at https://ygzwqzd.github.io/LAMDA-SSL/. This documentation
introduces LAMDA-SSL in detail from various aspects and can be divided into
four parts. The first part introduces the design idea, features and functions
of LAMDA-SSL. The second part shows the usage of LAMDA-SSL by abundant examples
in detail. The third part introduces all algorithms implemented by LAMDA-SSL to
help users quickly understand and choose SSL algorithms. The fourth part shows
the APIs of LAMDA-SSL. This detailed documentation greatly reduces the cost of
familiarizing users with the LAMDA-SSL toolkit and SSL algorithms.
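The abstract describes LAMDA-SSL as a toolkit for semi-supervised learning (SSL), where a small labeled set is combined with abundant unlabeled data. As an illustration of that workflow only, here is a minimal sketch using scikit-learn's `SelfTrainingClassifier` rather than LAMDA-SSL's own API (which is documented at the link above); the dataset and split are synthetic.

```python
# Sketch of a generic semi-supervised workflow (NOT LAMDA-SSL's API):
# train with mostly-unlabeled data via self-training in scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

# Synthetic binary classification problem.
X, y = make_classification(n_samples=200, random_state=0)

# Simulate the SSL setting: hide 80% of the labels (-1 means "unlabeled").
rng = np.random.RandomState(0)
y_train = y.copy()
y_train[rng.rand(len(y)) < 0.8] = -1

# Self-training: the base classifier pseudo-labels confident unlabeled points.
clf = SelfTrainingClassifier(LogisticRegression(), threshold=0.8)
clf.fit(X, y_train)

# Evaluate against the full ground truth.
acc = clf.score(X, y)
print(f"accuracy: {acc:.2f}")
```

LAMDA-SSL's documentation states it follows a scikit-learn-style fit/predict interface, so the shape of this example carries over even though the class names differ.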
Related papers
- A Survey on Self-supervised Learning: Algorithms, Applications, and Future Trends [82.64268080902742]
Self-supervised learning (SSL) aims to learn discriminative features from unlabeled data without relying on human-annotated labels.
SSL has garnered significant attention recently, leading to the development of numerous related algorithms.
This paper presents a review of diverse SSL methods, encompassing algorithmic aspects, application domains, three key trends, and open research questions.
arXiv Detail & Related papers (2023-01-13T14:41:05Z) - OpenLDN: Learning to Discover Novel Classes for Open-World Semi-Supervised Learning [110.40285771431687]
Semi-supervised learning (SSL) is one of the dominant approaches to address the annotation bottleneck of supervised learning.
Recent SSL methods can effectively leverage a large repository of unlabeled data to improve performance while relying on a small set of labeled data.
This work introduces OpenLDN that utilizes a pairwise similarity loss to discover novel classes.
arXiv Detail & Related papers (2022-07-05T18:51:05Z) - Open-Domain Sign Language Translation Learned from Online Video [32.89182994277633]
We introduce OpenASL, a large-scale ASL-English dataset collected from online video sites.
OpenASL contains 288 hours of ASL videos in various domains from over 200 signers.
We propose a set of techniques including sign search as a pretext task for pre-training and fusion of mouthing and handshape features.
arXiv Detail & Related papers (2022-05-25T15:43:31Z) - Sound and Visual Representation Learning with Multiple Pretraining Tasks [104.11800812671953]
Different self-supervised learning (SSL) tasks reveal different features from the data.
This work aims to combine multiple SSL tasks (Multi-SSL) so that the learned representation generalizes well across all downstream tasks.
Experiments on sound representations demonstrate that Multi-SSL via incremental learning (IL) of SSL tasks outperforms single SSL task models.
arXiv Detail & Related papers (2022-01-04T09:09:38Z) - How Self-Supervised Learning Can be Used for Fine-Grained Head Pose Estimation? [2.0625936401496237]
We try to answer the question: how can SSL be used for head pose estimation?
Modified versions of jigsaw puzzling and rotation are used as SSL pretext tasks.
The HTML method reduces the error rate by up to 11% compared to supervised learning (SL).
arXiv Detail & Related papers (2021-08-10T19:34:45Z) - End-to-end Generative Zero-shot Learning via Few-shot Learning [76.9964261884635]
State-of-the-art approaches to Zero-Shot Learning (ZSL) train generative nets to synthesize examples conditioned on the provided metadata.
We introduce an end-to-end generative ZSL framework that uses such an approach as a backbone and feeds its synthesized output to a Few-Shot Learning algorithm.
arXiv Detail & Related papers (2021-02-08T17:35:37Z) - Interventional Few-Shot Learning [88.31112565383457]
We propose a novel Few-Shot Learning paradigm: Interventional Few-Shot Learning.
Code is released at https://github.com/yue-zhongqi/ifsl.
arXiv Detail & Related papers (2020-09-28T01:16:54Z)
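The OpenLDN entry above mentions a pairwise similarity loss for discovering novel classes. The sketch below is a hypothetical NumPy illustration of the general idea (pairs whose confident predictions agree are treated as positives, disagreeing pairs as negatives); the function name, confidence threshold, and details are assumptions, not OpenLDN's actual implementation.

```python
# Hypothetical sketch of a pairwise similarity loss on unlabeled data.
# Not OpenLDN's code: names and the confidence threshold are illustrative.
import numpy as np

def pairwise_similarity_loss(probs: np.ndarray, threshold: float = 0.9) -> float:
    """Binary cross-entropy between predicted pair similarity and pseudo
    pair labels derived from confident predictions.

    probs: (n, c) softmax outputs for a batch of unlabeled examples.
    """
    # Predicted similarity of each pair: inner product of class posteriors.
    sim = probs @ probs.T                           # (n, n), values in [0, 1]
    # Pseudo pair labels: a pair is "same class" if the argmaxes agree.
    hard = probs.argmax(axis=1)
    target = (hard[:, None] == hard[None, :]).astype(float)
    # Only trust pairs where both predictions are confident.
    conf = probs.max(axis=1) >= threshold
    mask = conf[:, None] & conf[None, :]
    np.fill_diagonal(mask, False)                   # ignore self-pairs
    if not mask.any():
        return 0.0
    eps = 1e-12
    bce = -(target * np.log(sim + eps) + (1 - target) * np.log(1 - sim + eps))
    return float(bce[mask].mean())
```

Minimizing such a loss pulls together examples the model already groups into the same (possibly novel) class and pushes apart the rest, without requiring any ground-truth labels for those classes.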
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.