HypLL: The Hyperbolic Learning Library
- URL: http://arxiv.org/abs/2306.06154v3
- Date: Tue, 19 Dec 2023 15:37:22 GMT
- Title: HypLL: The Hyperbolic Learning Library
- Authors: Max van Spengler, Philipp Wirth, Pascal Mettes
- Abstract summary: We present HypLL, the Hyperbolic Learning Library, to bring together the progress on hyperbolic deep learning.
HypLL is built on top of PyTorch, with an emphasis on ease-of-use to attract a broad audience towards this new and open-ended research direction.
- Score: 14.760891078342166
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning in hyperbolic space is quickly gaining traction in the fields
of machine learning, multimedia, and computer vision. Deep networks commonly
operate in Euclidean space, implicitly assuming that data lies on regular
grids. Recent advances have shown that hyperbolic geometry provides a viable
alternative foundation for deep learning, especially when data is hierarchical
in nature and when working with few embedding dimensions. Currently, however, no
accessible open-source library exists for building hyperbolic network modules akin
to well-known deep learning libraries. We present HypLL, the Hyperbolic
Learning Library, to bring together the progress on hyperbolic deep learning.
HypLL is built on top of PyTorch, with its design emphasizing ease of use
in order to attract a broad audience towards this new and
open-ended research direction. The code is available at:
https://github.com/maxvanspengler/hyperbolic_learning_library.
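As a point of reference for the kind of operations such a library has to provide, the sketch below implements two standard Poincaré-ball primitives, Möbius addition and the exponential map at the origin, in plain PyTorch. This is a minimal illustration of the underlying geometry under the usual curvature convention (curvature -c with c > 0), not HypLL's actual API; the function names are chosen here for illustration.

```python
import torch

def mobius_add(x: torch.Tensor, y: torch.Tensor, c: float = 1.0, eps: float = 1e-7) -> torch.Tensor:
    """Mobius addition x (+)_c y on the Poincare ball of curvature -c."""
    xy = (x * y).sum(dim=-1, keepdim=True)   # <x, y>
    x2 = (x * x).sum(dim=-1, keepdim=True)   # ||x||^2
    y2 = (y * y).sum(dim=-1, keepdim=True)   # ||y||^2
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    den = 1 + 2 * c * xy + c ** 2 * x2 * y2
    return num / den.clamp_min(eps)

def expmap0(v: torch.Tensor, c: float = 1.0, eps: float = 1e-7) -> torch.Tensor:
    """Exponential map at the origin: lifts a Euclidean (tangent) vector onto the ball."""
    sqrt_c = c ** 0.5
    norm = v.norm(dim=-1, keepdim=True).clamp_min(eps)
    return torch.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

# Map Euclidean features onto the ball and "translate" them there.
feats = torch.randn(4, 8) * 0.1
bias = torch.randn(8) * 0.1
on_ball = expmap0(feats)                      # points now lie strictly inside the unit ball
shifted = mobius_add(on_ball, expmap0(bias))  # hyperbolic analogue of adding a bias
print(shifted.norm(dim=-1))                   # all norms stay below 1
```

Hyperbolic layers are typically built from exactly such pieces, e.g. mapping Euclidean parameters onto the ball with the exponential map and combining them with Möbius operations; a library like HypLL packages such operations behind familiar PyTorch module interfaces.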
Related papers
- Hyperbolic Convolutional Neural Networks [14.35618845900589]
Using non-Euclidean space for embedding data might result in more robust and explainable models.
We hypothesize that the ability of hyperbolic space to capture hierarchy in the data would lead to better performance.
arXiv Detail & Related papers (2023-08-29T21:20:16Z)
- SequeL: A Continual Learning Library in PyTorch and JAX [50.33956216274694]
SequeL is a library for Continual Learning that supports both PyTorch and JAX frameworks.
It provides a unified interface for a wide range of Continual Learning algorithms, including regularization-based approaches, replay-based approaches, and hybrid approaches.
We release SequeL as an open-source library, enabling researchers and developers to easily experiment and extend the library for their own purposes.
arXiv Detail & Related papers (2023-04-21T10:00:22Z)
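To make the "regularization-based approaches" mentioned in the SequeL entry above concrete, here is a minimal, generic sketch of an Elastic Weight Consolidation (EWC) style penalty in PyTorch. This is not SequeL's API; the function and argument names are illustrative, and the diagonal Fisher estimate is assumed to have been computed after the previous task.

```python
import torch

def ewc_penalty(model: torch.nn.Module, old_params: dict, fisher: dict, strength: float = 1.0):
    """Quadratic penalty keeping parameters close to their values after the previous task,
    weighted by a diagonal Fisher-information estimate (Elastic Weight Consolidation)."""
    loss = 0.0
    for name, param in model.named_parameters():
        if name in old_params:
            loss = loss + (fisher[name] * (param - old_params[name]) ** 2).sum()
    return strength * loss

# Usage sketch inside a training loop on the current task (names are illustrative):
# total_loss = task_loss + ewc_penalty(model, old_params, fisher, strength=0.4)
```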
- Avalanche: A PyTorch Library for Deep Continual Learning [12.238684710313168]
Continual learning is the problem of learning from a nonstationary stream of data.
Avalanche is an open source library maintained by the ContinualAI non-profit organization.
arXiv Detail & Related papers (2023-02-02T10:45:20Z)
- NASiam: Efficient Representation Learning using Neural Architecture Search for Siamese Networks [76.8112416450677]
Siamese networks are among the most popular methods for self-supervised visual representation learning (SSL).
NASiam is a novel approach that, for the first time, uses differentiable NAS to improve the multilayer perceptron projector and predictor (encoder/predictor pair).
NASiam reaches competitive performance in both small-scale (i.e., CIFAR-10/CIFAR-100) and large-scale (i.e., ImageNet) image classification datasets while costing only a few GPU hours.
arXiv Detail & Related papers (2023-01-31T19:48:37Z)
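For context on the "multilayer perceptron projector and predictor" that NASiam searches over, below is a hand-designed SimSiam-style projector/predictor pair with the usual stop-gradient negative cosine loss, in plain PyTorch. This is a generic baseline sketch with illustrative layer sizes, not NASiam's searched architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectorPredictor(nn.Module):
    """Hand-designed MLP projector and predictor as used in SimSiam-style Siamese SSL;
    differentiable NAS methods such as NASiam search this part instead of fixing it by hand."""

    def __init__(self, dim: int = 2048, pred_hidden: int = 512):
        super().__init__()
        self.projector = nn.Sequential(
            nn.Linear(dim, dim), nn.BatchNorm1d(dim), nn.ReLU(inplace=True),
            nn.Linear(dim, dim), nn.BatchNorm1d(dim),
        )
        self.predictor = nn.Sequential(
            nn.Linear(dim, pred_hidden), nn.BatchNorm1d(pred_hidden), nn.ReLU(inplace=True),
            nn.Linear(pred_hidden, dim),
        )

    def forward(self, h1: torch.Tensor, h2: torch.Tensor) -> torch.Tensor:
        # h1, h2: backbone features of two augmented views of the same images.
        z1, z2 = self.projector(h1), self.projector(h2)
        p1, p2 = self.predictor(z1), self.predictor(z2)
        # Symmetric negative cosine similarity with stop-gradient on the target branch.
        return -(F.cosine_similarity(p1, z2.detach(), dim=-1).mean()
                 + F.cosine_similarity(p2, z1.detach(), dim=-1).mean()) / 2
```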
- APP: Anytime Progressive Pruning [104.36308667437397]
We propose a novel way of training a neural network with a target sparsity in a particular case of online learning: the anytime learning at macroscale (ALMA) paradigm.
The proposed approach significantly outperforms the baseline dense and Anytime OSP models across multiple architectures and datasets under short, moderate, and long-sequence training.
arXiv Detail & Related papers (2022-04-04T16:38:55Z)
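As a rough illustration of what "training a neural network with a target sparsity" involves, the snippet below performs one-shot global magnitude pruning to a given sparsity level in PyTorch. APP's actual anytime progressive schedule is more involved; this sketch, with illustrative names, only shows the basic pruning step.

```python
import torch

def magnitude_prune(model: torch.nn.Module, sparsity: float) -> dict:
    """One-shot global magnitude pruning: zero out the smallest-magnitude entries of all
    weight matrices so that roughly `sparsity` (0 < sparsity < 1) of them are removed."""
    weights = [(name, p) for name, p in model.named_parameters() if p.dim() > 1]
    scores = torch.cat([p.detach().abs().flatten() for _, p in weights])
    k = max(1, int(sparsity * scores.numel()))
    threshold = torch.kthvalue(scores, k).values   # k-th smallest magnitude overall
    masks = {}
    for name, p in weights:
        mask = (p.detach().abs() > threshold).to(p.dtype)
        p.data.mul_(mask)    # zero the pruned entries in place
        masks[name] = mask   # keep the mask so the sparsity pattern can be re-applied
    return masks

# Usage sketch (names are illustrative): prune once, then re-apply the masks after each
# optimizer step so pruned weights stay at zero during continued training.
# masks = magnitude_prune(model, sparsity=0.9)
```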
- Solo-learn: A Library of Self-supervised Methods for Visual Representation Learning [83.02597612195966]
solo-learn is a library of self-supervised methods for visual representation learning.
Implemented in Python using PyTorch and PyTorch Lightning, the library fits both research and industry needs.
arXiv Detail & Related papers (2021-08-03T22:19:55Z)
- Ten Quick Tips for Deep Learning in Biology [116.78436313026478]
Machine learning is concerned with the development and applications of algorithms that can recognize patterns in data and use them for predictive modeling.
Deep learning has become its own subfield of machine learning.
In the context of biological research, deep learning has been increasingly used to derive novel insights from high-dimensional biological data.
arXiv Detail & Related papers (2021-05-29T21:02:44Z)
- DIG: A Turnkey Library for Diving into Graph Deep Learning Research [39.58666190541479]
DIG: Dive into Graphs is a research-oriented library that integrates unified implementations of common graph deep learning algorithms for several advanced tasks.
For each direction, we provide unified implementations of data interfaces, common algorithms, and evaluation metrics.
arXiv Detail & Related papers (2021-03-23T15:05:10Z)
- Hyperbolic Deep Neural Networks: A Survey [31.04110049167551]
We refer to the model as a hyperbolic deep neural network in this paper.
To stimulate future research, this paper presents a coherent and comprehensive review of the literature around the neural components in the construction of hyperbolic deep neural networks.
arXiv Detail & Related papers (2021-01-12T15:55:16Z)
- Robust Large-Margin Learning in Hyperbolic Space [64.42251583239347]
We present the first theoretical guarantees for learning a classifier in hyperbolic rather than Euclidean space.
We provide an algorithm to efficiently learn a large-margin hyperplane, relying on the careful injection of adversarial examples.
We prove that for hierarchical data that embeds well into hyperbolic space, the low embedding dimension ensures superior guarantees.
arXiv Detail & Related papers (2020-04-11T19:11:30Z)
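The margin in the entry above is measured with a hyperbolic distance rather than a Euclidean one. As a reference, here is a plain-PyTorch sketch of the geodesic distance on the Poincaré ball, one common model of hyperbolic space; this is only the metric such large-margin methods build on, not the paper's algorithm, and the paper may work in a different model of hyperbolic space.

```python
import torch

def mobius_add(x, y, c=1.0, eps=1e-7):
    """Mobius addition on the Poincare ball of curvature -c."""
    xy = (x * y).sum(dim=-1, keepdim=True)
    x2 = (x * x).sum(dim=-1, keepdim=True)
    y2 = (y * y).sum(dim=-1, keepdim=True)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    den = 1 + 2 * c * xy + c ** 2 * x2 * y2
    return num / den.clamp_min(eps)

def poincare_distance(x, y, c=1.0, eps=1e-7):
    """Geodesic distance: d(x, y) = (2 / sqrt(c)) * artanh(sqrt(c) * ||(-x) (+)_c y||)."""
    sqrt_c = c ** 0.5
    diff_norm = mobius_add(-x, y, c).norm(dim=-1).clamp(max=1 - eps)
    return (2 / sqrt_c) * torch.atanh(sqrt_c * diff_norm)

# Points near the boundary of the ball are exponentially far apart, which is what makes
# low-dimensional hyperbolic embeddings attractive for hierarchical data.
a = torch.tensor([0.90, 0.00])
b = torch.tensor([0.00, 0.90])
print(poincare_distance(a, b))   # roughly 5.2, versus a Euclidean distance of about 1.27
```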
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.