QDax: A Library for Quality-Diversity and Population-based Algorithms
with Hardware Acceleration
- URL: http://arxiv.org/abs/2308.03665v1
- Date: Mon, 7 Aug 2023 15:29:44 GMT
- Title: QDax: A Library for Quality-Diversity and Population-based Algorithms
with Hardware Acceleration
- Authors: Felix Chalumeau, Bryan Lim, Raphael Boige, Maxime Allard, Luca
Grillotti, Manon Flageat, Valentin Macé, Arthur Flajolet, Thomas Pierrot,
Antoine Cully
- Abstract summary: QDax is an open-source library with a streamlined and modular API for Quality-Diversity (QD) optimization algorithms in Jax.
The library serves as a versatile tool for optimization purposes, ranging from black-box optimization to continuous control.
- Score: 3.8494302715990845
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: QDax is an open-source library with a streamlined and modular API for
Quality-Diversity (QD) optimization algorithms in Jax. The library serves as a
versatile tool for optimization purposes, ranging from black-box optimization
to continuous control. QDax offers implementations of popular QD,
Neuroevolution, and Reinforcement Learning (RL) algorithms, supported by
various examples. All the implementations can be just-in-time compiled with
Jax, facilitating efficient execution across multiple accelerators, including
GPUs and TPUs. These implementations effectively demonstrate the framework's
flexibility and user-friendliness, easing experimentation for research
purposes. Furthermore, the library is thoroughly documented and tested with
95% coverage.
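To make the JIT claim concrete, the sketch below shows, in plain JAX rather than QDax's own API, how a population-evaluation step can be compiled with jax.jit and vectorized with jax.vmap so the same code runs unchanged on CPU, GPU, or TPU (the fitness function and population shapes are illustrative assumptions):

import jax
import jax.numpy as jnp

# Illustrative fitness: the negated sphere benchmark (higher is better).
# This is a stand-in, not a QDax task.
def fitness(genotype):
    return -jnp.sum(genotype ** 2)

@jax.jit  # compiled once, then dispatched to whatever accelerator JAX targets
def evaluate_population(population):
    # vmap evaluates every genotype in the batch in parallel
    return jax.vmap(fitness)(population)

key = jax.random.PRNGKey(0)
population = jax.random.normal(key, (1024, 8))  # 1024 genotypes of dimension 8
scores = evaluate_population(population)
print(scores.shape)  # (1024,)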
Related papers
- CoverLib: Classifiers-equipped Experience Library by Iterative Problem Distribution Coverage Maximization for Domain-tuned Motion Planning [14.580628884001593]
CoverLib iteratively adds an experience-classifier pair to the library.
It selects the next experience based on its ability to effectively cover the uncovered region.
It achieves both fast planning and high success rates over the problem domain.
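The selection rule above is essentially greedy set cover. A minimal sketch of that idea, assuming a precomputed boolean coverage matrix (an illustration of the principle, not CoverLib's implementation):

import jax.numpy as jnp

def greedy_library(coverage, num_experiences):
    # coverage[i, j] is True if candidate experience i solves sampled problem j.
    # Greedily pick the candidate covering the most still-uncovered problems.
    uncovered = jnp.ones(coverage.shape[1], dtype=bool)
    library = []
    for _ in range(num_experiences):
        gains = jnp.sum(coverage & uncovered, axis=1)  # newly covered problems per candidate
        best = int(jnp.argmax(gains))
        if gains[best] == 0:  # nothing left to gain
            break
        library.append(best)
        uncovered = uncovered & ~coverage[best]
    return library

cov = jnp.array([[1, 1, 0, 0],
                 [0, 1, 1, 1],
                 [0, 0, 0, 1]], dtype=bool)
print(greedy_library(cov, num_experiences=2))  # [1, 0]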
arXiv Detail & Related papers (2024-05-05T15:27:05Z)
- JaxUED: A simple and useable UED library in Jax [1.5821811088000381]
We present JaxUED, an open-source library providing minimal dependency implementations of modern Unsupervised Environment Design (UED) algorithms in Jax.
Inspired by CleanRL, we provide fast, clear, understandable, and easily modifiable implementations, with the aim of accelerating research into UED.
arXiv Detail & Related papers (2024-03-19T18:40:50Z)
- XuanCe: A Comprehensive and Unified Deep Reinforcement Learning Library [18.603206638756056]
XuanCe is a comprehensive and unified deep reinforcement learning (DRL) library.
XuanCe offers a wide range of functionalities, including over 40 classical DRL and multi-agent DRL algorithms.
XuanCe is open-source and can be accessed at https://agi-brain.com/agi-brain/xuance.git.
arXiv Detail & Related papers (2023-12-25T14:45:39Z)
- JaxMARL: Multi-Agent RL Environments and Algorithms in JAX [105.343918678781]
We present JaxMARL, the first open-source, Python-based library that combines GPU-enabled efficiency with support for a large number of commonly used MARL environments.
Our experiments show that, in terms of wall clock time, our JAX-based training pipeline is around 14 times faster than existing approaches.
We also introduce and benchmark SMAX, a JAX-based approximate reimplementation of the popular StarCraft Multi-Agent Challenge.
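Much of that speedup comes from vectorizing environment steps with jax.vmap and compiling the result, so thousands of environments advance in a single device call. A toy sketch of the pattern (not JaxMARL's actual API; the dynamics and reward are placeholders):

import jax
import jax.numpy as jnp

# Toy single-environment step; real MARL environments are far richer,
# this only shows the batching pattern.
def env_step(state, action):
    next_state = state + action          # placeholder dynamics
    reward = -jnp.abs(next_state).sum()  # placeholder reward
    return next_state, reward

# vmap + jit: step thousands of environments in one fused accelerator call.
batched_step = jax.jit(jax.vmap(env_step))

key = jax.random.PRNGKey(0)
states = jnp.zeros((4096, 3))
actions = jax.random.normal(key, (4096, 3))
states, rewards = batched_step(states, actions)
print(rewards.shape)  # (4096,)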
arXiv Detail & Related papers (2023-11-16T18:58:43Z)
- LibAUC: A Deep Learning Library for X-Risk Optimization [43.32145407575245]
This paper introduces the award-winning deep learning (DL) library called LibAUC.
LibAUC implements state-of-the-art algorithms towards optimizing a family of risk functions named X-risks.
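What distinguishes an X-risk from an ordinary average loss is that it couples predictions across examples; AUC is the canonical case. A hedged sketch of a pairwise squared-hinge AUC surrogate, written in JAX purely for illustration (LibAUC itself is a PyTorch library with dedicated stochastic optimizers for such objectives):

import jax.numpy as jnp

def auc_squared_hinge(scores, labels, margin=1.0):
    # Penalize every positive/negative score pair whose gap falls short of
    # the margin. The pairwise coupling is what makes AUC an "X-risk".
    # Boolean masking gives data-dependent shapes, so this eager version
    # is not jit-compatible; it is only meant to show the objective.
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    gaps = pos[:, None] - neg[None, :]  # all positive-negative pairs
    return jnp.mean(jnp.maximum(0.0, margin - gaps) ** 2)

scores = jnp.array([0.9, 0.2, 0.4, 0.8])
labels = jnp.array([1, 0, 0, 1])
print(float(auc_squared_hinge(scores, labels)))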
arXiv Detail & Related papers (2023-06-05T17:43:46Z)
- Improving and Benchmarking Offline Reinforcement Learning Algorithms [87.67996706673674]
This work aims to bridge the gaps caused by differences in low-level implementation choices and datasets.
We empirically investigate 20 implementation choices using three representative algorithms.
We find that two variants, CRR+ and CQL+, achieve new state-of-the-art results on D4RL.
arXiv Detail & Related papers (2023-06-01T17:58:46Z)
- JaxPruner: A concise library for sparsity research [46.153423603424]
JaxPruner is an open-source library for sparse neural network research.
It implements popular pruning and sparse training algorithms with minimal memory and latency overhead.
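A representative building block of such libraries is global magnitude pruning: keep the largest-magnitude weights and mask out the rest. A minimal sketch in plain JAX (the function is illustrative, not JaxPruner's API):

import jax
import jax.numpy as jnp

def magnitude_mask(params, sparsity):
    # Keep the largest-magnitude (1 - sparsity) fraction of weights and
    # zero out the rest via a binary mask (a standard pruning baseline).
    flat = jnp.abs(params).ravel()
    k = int((1.0 - sparsity) * flat.size)  # number of weights to keep
    threshold = jnp.sort(flat)[-k]         # k-th largest magnitude
    return (jnp.abs(params) >= threshold).astype(params.dtype)

key = jax.random.PRNGKey(0)
w = jax.random.normal(key, (128, 128))
mask = magnitude_mask(w, sparsity=0.9)  # keep 10% of the weights
w_sparse = w * mask
print(float(mask.mean()))  # roughly 0.1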
arXiv Detail & Related papers (2023-04-27T10:45:30Z)
- SequeL: A Continual Learning Library in PyTorch and JAX [50.33956216274694]
SequeL is a library for Continual Learning that supports both PyTorch and JAX frameworks.
It provides a unified interface for a wide range of Continual Learning algorithms, including regularization-based approaches, replay-based approaches, and hybrid approaches.
We release SequeL as an open-source library, enabling researchers and developers to easily experiment and extend the library for their own purposes.
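As a concrete instance of the replay-based family, a reservoir-sampling buffer keeps a bounded, uniformly sampled memory of the example stream. A framework-agnostic sketch (illustrative only, not SequeL's interface):

import random

class ReservoirReplay:
    # Reservoir sampling (Algorithm R): every example seen so far has an
    # equal chance of residing in the bounded memory.
    def __init__(self, capacity):
        self.capacity = capacity
        self.memory = []
        self.seen = 0

    def add(self, example):
        self.seen += 1
        if len(self.memory) < self.capacity:
            self.memory.append(example)
        else:
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.memory[idx] = example

    def sample(self, batch_size):
        return random.sample(self.memory, min(batch_size, len(self.memory)))

buffer = ReservoirReplay(capacity=100)
for step in range(10000):
    buffer.add(step)
batch = buffer.sample(8)  # mix into the current task's training batch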
arXiv Detail & Related papers (2023-04-21T10:00:22Z)
- Kernel methods through the roof: handling billions of points efficiently [94.31450736250918]
Kernel methods provide an elegant and principled approach to nonparametric learning, but have so far been hard to apply to large-scale problems.
Recent advances have shown the benefits of a number of algorithmic ideas, for example combining optimization, numerical linear algebra and random projections.
Here, we push these efforts further to develop and test a solver that takes full advantage of GPU hardware.
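A core ingredient of such solvers is the Nyström approximation, which restricts the kernel solution to m << n inducing centers so the dominant cost drops from O(n^3) to roughly O(n m^2). A simplified sketch of Nyström kernel ridge regression in JAX (the paper's solver additionally uses preconditioned conjugate gradients and careful GPU memory management):

import jax
import jax.numpy as jnp

def rbf(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between two sets of points.
    sq = jnp.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return jnp.exp(-gamma * sq)

def nystrom_krr(X, y, centers, lam=1e-3, gamma=1.0):
    # Solve (Knm^T Knm + lam * Kmm) alpha = Knm^T y, the Nystrom
    # normal equations: an m x m system instead of n x n.
    Knm = rbf(X, centers, gamma)        # (n, m) cross-kernel
    Kmm = rbf(centers, centers, gamma)  # (m, m) kernel among centers
    A = Knm.T @ Knm + lam * Kmm
    return jnp.linalg.solve(A + 1e-8 * jnp.eye(len(centers)), Knm.T @ y)

key = jax.random.PRNGKey(0)
X = jax.random.normal(key, (2000, 5))
y = jnp.sin(X[:, 0])
centers = X[:200]                        # m = 200 Nystrom centers
alpha = nystrom_krr(X, y, centers)
preds = rbf(X, centers) @ alpha
print(float(jnp.mean((preds - y) ** 2)))  # training MSE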
arXiv Detail & Related papers (2020-06-18T08:16:25Z)
- PolyDL: Polyhedral Optimizations for Creation of High Performance DL primitives [55.79741270235602]
We present compiler algorithms to automatically generate high performance implementations of Deep Learning primitives.
We develop novel data reuse analysis algorithms using the polyhedral model.
We also show that combining such a compiler with minimal library use yields state-of-the-art performance.
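The data-reuse idea is easiest to see in a blocked matrix multiply, where tiling keeps small sub-blocks resident in cache across the inner loops; polyhedral compilers derive such schedules automatically. A plain-Python sketch of the transformation (illustrative only; PolyDL emits optimized native code, not Python):

import numpy as np

def tiled_matmul(A, B, tile=64):
    # Blocked matrix multiply: each (tile x tile) sub-block of A and B is
    # reused across a whole inner iteration, the reuse pattern that
    # polyhedral data-reuse analysis identifies and exploits.
    n, k = A.shape
    _, m = B.shape
    C = np.zeros((n, m), dtype=A.dtype)
    for i in range(0, n, tile):
        for j in range(0, m, tile):
            for p in range(0, k, tile):
                C[i:i+tile, j:j+tile] += A[i:i+tile, p:p+tile] @ B[p:p+tile, j:j+tile]
    return C

A = np.random.rand(256, 256).astype(np.float32)
B = np.random.rand(256, 256).astype(np.float32)
assert np.allclose(tiled_matmul(A, B), A @ B, atol=1e-3)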
arXiv Detail & Related papers (2020-06-02T06:44:09Z)