Solo-learn: A Library of Self-supervised Methods for Visual
Representation Learning
- URL: http://arxiv.org/abs/2108.01775v1
- Date: Tue, 3 Aug 2021 22:19:55 GMT
- Title: Solo-learn: A Library of Self-supervised Methods for Visual
Representation Learning
- Authors: Victor G. Turrisi da Costa and Enrico Fini and Moin Nabi and Nicu Sebe
and Elisa Ricci
- Abstract summary: solo-learn is a library of self-supervised methods for visual representation learning.
Implemented in Python using PyTorch and PyTorch Lightning, the library fits both research and industry needs.
- Score: 83.02597612195966
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents solo-learn, a library of self-supervised methods for
visual representation learning. Implemented in Python using PyTorch and
PyTorch Lightning, the library fits both research and industry needs by
featuring distributed training pipelines with mixed precision, faster data
loading via NVIDIA DALI, online linear evaluation for better prototyping, and
many additional training tricks. Our goal is to provide an easy-to-use library
comprising a large number of self-supervised learning (SSL) methods that can
be easily extended and fine-tuned by the community. solo-learn opens up avenues
for exploiting large-budget SSL solutions on inexpensive, smaller
infrastructures and seeks to democratize SSL by making it accessible to all.
The source code is available at https://github.com/vturrisi/solo-learn.
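To make the online-linear-evaluation idea concrete, here is a minimal, self-contained PyTorch sketch, not solo-learn's actual API: the SimCLR-style loss, the 10-class probe, and all hyperparameters are illustrative assumptions. A contrastive objective trains the backbone while a detached linear probe is fit on the same features, all under mixed precision.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

def nt_xent(z1, z2, temperature=0.2):
    # SimCLR-style contrastive loss over two batches of projected views.
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / temperature
    eye = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float("-inf"))  # a view is never its own positive
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)

device = "cuda" if torch.cuda.is_available() else "cpu"
backbone = torchvision.models.resnet18()
backbone.fc = nn.Identity()  # expose the 512-d features
projector = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 128))
probe = nn.Linear(512, 10)   # online linear classifier (10 classes assumed)
for m in (backbone, projector, probe):
    m.to(device).train()

params = (list(backbone.parameters()) + list(projector.parameters())
          + list(probe.parameters()))
opt = torch.optim.SGD(params, lr=0.1, momentum=0.9, weight_decay=1e-4)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

# Dummy tensors standing in for two augmented views and their labels.
x1 = torch.randn(32, 3, 32, 32, device=device)
x2 = torch.randn(32, 3, 32, 32, device=device)
y = torch.randint(0, 10, (32,), device=device)

with torch.cuda.amp.autocast(enabled=(device == "cuda")):
    f1, f2 = backbone(x1), backbone(x2)
    ssl_loss = nt_xent(projector(f1), projector(f2))
    # detach(): the probe is trained online but never shapes the backbone.
    probe_loss = F.cross_entropy(probe(f1.detach()), y)
    loss = ssl_loss + probe_loss

opt.zero_grad()
scaler.scale(loss).backward()
scaler.step(opt)
scaler.update()
print(f"ssl={ssl_loss.item():.3f}  probe={probe_loss.item():.3f}")
```

Because the probe only ever sees detached features, its accuracy can be logged at every step as a cheap running proxy for a full offline linear evaluation.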
Related papers
- BackboneLearn: A Library for Scaling Mixed-Integer Optimization-Based Machine Learning [0.0]
BackboneLearn is a framework for scaling mixed-integer optimization problems with indicator variables to high-dimensional settings.
It is built in Python and is user-friendly and easy to extend.
The source code of BackboneLearn is available on GitHub.
arXiv Detail & Related papers (2023-11-22T21:07:45Z)
- Joint Prediction and Denoising for Large-scale Multilingual Self-supervised Learning [69.77973092264338]
We show that more powerful techniques can lead to more efficient pre-training, opening SSL to more research groups.
We propose WavLabLM, which extends WavLM's joint prediction and denoising to 40k hours of data across 136 languages.
We show that further efficiency can be achieved with a vanilla HuBERT Base model, which can maintain 94% of XLS-R's performance with only 3% of the data.
arXiv Detail & Related papers (2023-09-26T23:55:57Z)
- torchgfn: A PyTorch GFlowNet library [56.071033896777784]
torchgfn is a PyTorch library that aims to address the need for a unified GFlowNet codebase.
It provides users with a simple API for environments and useful abstractions for samplers and losses.
arXiv Detail & Related papers (2023-05-24T00:20:59Z)
- SequeL: A Continual Learning Library in PyTorch and JAX [50.33956216274694]
SequeL is a library for Continual Learning that supports both PyTorch and JAX frameworks.
It provides a unified interface for a wide range of Continual Learning algorithms, including regularization-based approaches, replay-based approaches, and hybrid approaches.
We release SequeL as an open-source library, enabling researchers and developers to easily experiment and extend the library for their own purposes.
arXiv Detail & Related papers (2023-04-21T10:00:22Z)
- RvS: What is Essential for Offline RL via Supervised Learning? [77.91045677562802]
Recent work has shown that supervised learning alone, without temporal difference (TD) learning, can be remarkably effective for offline RL.
In every environment suite considered, simply maximizing the likelihood of a two-layer feedforward policy is competitive.
The authors also probe the limits of existing RvS methods, which are comparatively weak on random data.
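As a hedged sketch of the RvS recipe (the dimensions and the unit-variance Gaussian likelihood are illustrative assumptions, not the paper's exact setup), the policy below conditions a small feedforward network on the state plus an outcome variable such as a return-to-go or goal, and fits it to logged actions by maximum likelihood:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical dimensions; real values depend on the benchmark.
STATE_DIM, COND_DIM, ACTION_DIM, HIDDEN = 17, 1, 6, 256

# Two hidden layers, matching the "two-layer feedforward" recipe.
policy = nn.Sequential(
    nn.Linear(STATE_DIM + COND_DIM, HIDDEN), nn.ReLU(),
    nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
    nn.Linear(HIDDEN, ACTION_DIM),
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Dummy offline batch: states, an outcome to condition on (e.g. a
# normalized return-to-go or a goal), and the logged actions.
s = torch.randn(256, STATE_DIM)
w = torch.rand(256, COND_DIM)
a = torch.randn(256, ACTION_DIM)

# With a unit-variance Gaussian policy, maximizing the log-likelihood of
# the logged actions reduces to minimizing mean squared error.
pred = policy(torch.cat([s, w], dim=1))
loss = F.mse_loss(pred, a)
opt.zero_grad()
loss.backward()
opt.step()
print(f"nll-equivalent loss: {loss.item():.3f}")
```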
arXiv Detail & Related papers (2021-12-20T18:55:16Z)
- IMBENS: Ensemble Class-imbalanced Learning in Python [26.007498723608155]
imbens is an open-source Python toolbox for implementing and deploying ensemble learning algorithms on class-imbalanced data.
imbens is released under the MIT open-source license and can be installed from the Python Package Index (PyPI).
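imbens's own API is not reproduced here; instead, the following self-contained scikit-learn sketch illustrates the general idea behind undersampling-based ensembles (an EasyEnsemble-style committee; all names and parameters are ours, not the library's):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fit_undersampled_ensemble(X, y, n_members=10, seed=0):
    # EasyEnsemble-style idea: each member sees every minority sample plus
    # an equally sized random draw of majority samples.
    rng = np.random.default_rng(seed)
    minority = np.flatnonzero(y == 1)
    majority = np.flatnonzero(y == 0)
    members = []
    for _ in range(n_members):
        draw = rng.choice(majority, size=minority.size, replace=False)
        idx = np.concatenate([minority, draw])
        members.append(DecisionTreeClassifier(max_depth=4).fit(X[idx], y[idx]))
    return members

def ensemble_proba(members, X):
    # Average the members' positive-class probabilities.
    return np.mean([m.predict_proba(X)[:, 1] for m in members], axis=0)

# Toy imbalanced data: roughly 5% positives, shifted so there is signal.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
y = (rng.random(1000) < 0.05).astype(int)
X[y == 1] += 1.5
members = fit_undersampled_ensemble(X, y)
print(ensemble_proba(members, X[:5]).round(3))
```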
arXiv Detail & Related papers (2021-11-24T20:14:20Z)
- Rethinking Self-Supervised Learning: Small is Beautiful [30.809693803413445]
We propose scaled-down self-supervised learning (S3L), which comprises three components: small resolution, small architecture, and small data.
On a diverse set of datasets, S3L consistently achieves higher accuracy at much lower training cost than previous SSL paradigms.
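A rough sketch of what the three reductions could look like in a PyTorch setup (the concrete numbers are illustrative guesses, not the paper's configuration):

```python
import torch
import torchvision
import torchvision.transforms as T

# Small resolution: train on 32x32 crops instead of full-size images.
small_res = T.Compose([T.Resize(32), T.CenterCrop(32), T.ToTensor()])

# Small data: keep only a 10% subset of the training set.
full = torchvision.datasets.FakeData(size=1000, image_size=(3, 96, 96),
                                     transform=small_res)
subset = torch.utils.data.Subset(full, range(0, len(full), 10))

# Small architecture: a compact ResNet-18 rather than the wide backbones
# common in large-budget SSL setups.
model = torchvision.models.resnet18()

loader = torch.utils.data.DataLoader(subset, batch_size=64, shuffle=True)
images, _ = next(iter(loader))
print(images.shape, sum(p.numel() for p in model.parameters()))
```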
arXiv Detail & Related papers (2021-03-25T01:48:52Z)
- Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z)
- fastai: A Layered API for Deep Learning [1.7223564681760164]
fastai is a deep learning library that provides practitioners with high-level components.
It also provides researchers with low-level components that can be mixed and matched to build new approaches.
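To illustrate the high-level end of that layered API, here is the pets classifier from fastai's introductory tutorial, approximately (verify names against the current docs; `cnn_learner` became `vision_learner` in later releases):

```python
from fastai.vision.all import *

def is_cat(fname):
    # Oxford-IIIT Pets convention: cat breeds have capitalized file names.
    return fname[0].isupper()

path = untar_data(URLs.PETS) / "images"
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)
```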
arXiv Detail & Related papers (2020-02-11T21:16:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.