FasterAI: A Lightweight Library for Creating Sparse Neural Networks
- URL: http://arxiv.org/abs/2207.01088v1
- Date: Sun, 3 Jul 2022 18:13:47 GMT
- Title: FasterAI: A Lightweight Library for Creating Sparse Neural Networks
- Authors: Nathan Hubens
- Abstract summary: FasterAI is a PyTorch-based library aiming to facilitate the use of deep neural network compression techniques.
In this paper, we focus on the sparsifying capabilities of FasterAI, which represent the core of the library.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: FasterAI is a PyTorch-based library aiming to facilitate the use of
deep neural network compression techniques such as sparsification, pruning,
knowledge distillation, or regularization. The library is built to enable
quick implementation and experimentation. In particular, its compression
techniques leverage the Callback systems of libraries such as fastai and
PyTorch Lightning to offer a user-friendly, high-level API. The main asset of
FasterAI is its lightweight yet powerful simplicity of use. Because it was
developed in a very granular way, users can create thousands of unique
experiments from different combinations of parameters. In this paper, we focus
on the sparsifying capabilities of FasterAI, which represent the core of the
library. Performing sparsification of a neural network in FasterAI requires
only a single additional line of code in the traditional training loop, yet
allows state-of-the-art techniques such as Lottery Ticket Hypothesis
experiments to be performed.
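Concretely, the one-line integration looks like the sketch below. This is a minimal illustration assuming a fastai Learner and the SparsifyCallback described in the FasterAI documentation; the dataset, model, and the particular criteria (large_final) and schedule (one_cycle) values are illustrative assumptions rather than the only supported options.

    from fastai.vision.all import *
    from fasterai.sparse.all import *

    # Standard fastai training setup, unchanged by FasterAI.
    path = untar_data(URLs.MNIST_SAMPLE)
    dls = ImageDataLoaders.from_folder(path)
    learn = vision_learner(dls, resnet18, metrics=accuracy)

    # The single additional line: a callback that progressively removes
    # individual weights ('weight' granularity, per-layer 'local' context)
    # up to 50% sparsity, scoring weights with the large_final criterion
    # and following a one_cycle pruning schedule.
    sp_cb = SparsifyCallback(sparsity=50, granularity='weight', context='local',
                             criteria=large_final, schedule=one_cycle)

    learn.fit_one_cycle(5, cbs=sp_cb)

Per the documentation, the same callback is also the entry point for Lottery Ticket Hypothesis experiments through additional arguments (e.g. lth and rewind_epoch); exact argument names may vary between versions.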
Related papers
- torchgfn: A PyTorch GFlowNet library [56.071033896777784]
torchgfn is a PyTorch library that aims to address the need for a standardized GFlowNet implementation.
It provides users with a simple API for environments and useful abstractions for samplers and losses.
arXiv Detail & Related papers (2023-05-24T00:20:59Z)
- TorchNTK: A Library for Calculation of Neural Tangent Kernels of PyTorch Models [16.30276204466139]
We introduce torchNTK, a python library to calculate the empirical neural tangent kernel (NTK) of neural network models in the PyTorch framework.
A feature of the library is that we expose the user to layerwise NTK components, and show that in some regimes a layerwise calculation is more memory efficient.
arXiv Detail & Related papers (2022-05-24T21:27:58Z)
- Opacus: User-Friendly Differential Privacy Library in PyTorch [54.8720687562153]
We introduce Opacus, a free, open-source PyTorch library for training deep learning models with differential privacy.
It provides a simple and user-friendly API, and enables machine learning practitioners to make a training pipeline private by adding as little as two lines to their code.
arXiv Detail & Related papers (2021-09-25T07:10:54Z)
- Solo-learn: A Library of Self-supervised Methods for Visual Representation Learning [83.02597612195966]
solo-learn is a library of self-supervised methods for visual representation learning.
Implemented in Python, using PyTorch and PyTorch Lightning, the library fits both research and industry needs.
arXiv Detail & Related papers (2021-08-03T22:19:55Z)
- Podracer architectures for scalable Reinforcement Learning [23.369001500657028]
How to best train reinforcement learning (RL) agents at scale is still an active research area.
In this report we argue that TPUs are particularly well suited for training RL agents in a scalable, efficient and reproducible way.
arXiv Detail & Related papers (2021-04-13T15:05:35Z)
- TorchRadon: Fast Differentiable Routines for Computed Tomography [0.0]
The TorchRadon library is designed to help researchers working on CT problems to combine deep learning and model-based approaches.
Compared to the existing Astra Toolbox, TorchRadon is up to 125 times faster.
Because of its speed and GPU support, TorchRadon can also be effectively used as a fast backend for the implementation of iterative algorithms.
arXiv Detail & Related papers (2020-09-29T09:20:22Z)
- Collaborative Learning for Faster StyleGAN Embedding [127.84690280196597]
We propose a novel collaborative learning framework that consists of an efficient embedding network and an optimization-based iterator.
High-quality latent code can be obtained efficiently with a single forward pass through our embedding network.
arXiv Detail & Related papers (2020-07-03T15:27:37Z)
- Neural Network Compression Framework for fast model inference [59.65531492759006]
We present a new framework for neural network compression with fine-tuning, which we call the Neural Network Compression Framework (NNCF).
It leverages recent advances in various network compression methods and implements some of them, such as sparsity, quantization, and binarization.
The framework can be used with the training samples supplied with it, or as a standalone package that can be seamlessly integrated into existing training code.
arXiv Detail & Related papers (2020-02-20T11:24:01Z)
- fastai: A Layered API for Deep Learning [1.7223564681760164]
fastai is a deep learning library which provides practitioners with high-level components.
It provides researchers with low-level components that can be mixed and matched to build new approaches.
arXiv Detail & Related papers (2020-02-11T21:16:48Z)
- Torch-Struct: Deep Structured Prediction Library [138.5262350501951]
We introduce Torch-Struct, a library for structured prediction.
Torch-Struct includes a broad collection of probabilistic structures accessed through a simple and flexible distribution-based API.
arXiv Detail & Related papers (2020-02-03T16:43:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.