DistML.js: Installation-free Distributed Deep Learning Framework for Web Browsers
- URL: http://arxiv.org/abs/2407.01023v1
- Date: Mon, 1 Jul 2024 07:13:14 GMT
- Title: DistML.js: Installation-free Distributed Deep Learning Framework for Web Browsers
- Authors: Masatoshi Hidaka, Tomohiro Hashimoto, Yuto Nishizawa, Tatsuya Harada
- Abstract summary: "DistML.js" is a library designed for training and inference of machine learning models within web browsers.
We provide a comprehensive explanation of DistML.js's design, API, and implementation, alongside practical applications.
- Score: 40.48978035180545
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present "DistML.js", a library designed for training and inference of machine learning models within web browsers. Not only does DistML.js facilitate model training on local devices, but it also supports distributed learning through communication with servers. Its design and define-by-run API for deep learning model construction resemble PyTorch, thereby reducing the learning curve for prototyping. Matrix computations involved in model training and inference are executed on the backend utilizing WebGL, enabling high-speed calculations. We provide a comprehensive explanation of DistML.js's design, API, and implementation, alongside practical applications including data parallelism in learning. The source code is publicly available at https://github.com/mil-tokyo/distmljs.
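To make the define-by-run idea concrete, below is a minimal sketch of the PyTorch-style pattern the abstract describes: the model is defined by ordinary imperative code that runs eagerly. It is a self-contained toy in TypeScript using plain arrays; the names (Linear, MLP, matmul, relu) are our own illustration and are not the actual DistML.js API, and the real library dispatches such matrix kernels to a WebGL backend rather than running them on the CPU.

```typescript
// Minimal, self-contained sketch of a define-by-run model: the "graph" is just
// ordinary imperative code executed eagerly, as in PyTorch-style APIs.
// All names here are illustrative; they are NOT the DistML.js API.

type Matrix = number[][];

function matmul(a: Matrix, b: Matrix): Matrix {
  const out: Matrix = a.map(() => new Array(b[0].length).fill(0));
  for (let i = 0; i < a.length; i++)
    for (let k = 0; k < b.length; k++)
      for (let j = 0; j < b[0].length; j++) out[i][j] += a[i][k] * b[k][j];
  return out;
}

function relu(x: Matrix): Matrix {
  return x.map(row => row.map(v => Math.max(0, v)));
}

// A dense layer holding its own parameters, analogous to a PyTorch nn.Linear.
class Linear {
  weight: Matrix;
  constructor(inDim: number, outDim: number) {
    this.weight = Array.from({ length: inDim }, () =>
      Array.from({ length: outDim }, () => Math.random() * 0.1 - 0.05));
  }
  forward(x: Matrix): Matrix {
    return matmul(x, this.weight);
  }
}

// The model is built "by running": forward is plain control flow, so branches
// and loops may depend on the data, which is what define-by-run means.
class MLP {
  fc1 = new Linear(4, 8);
  fc2 = new Linear(8, 2);
  forward(x: Matrix): Matrix {
    return this.fc2.forward(relu(this.fc1.forward(x)));
  }
}

const model = new MLP();
const batch: Matrix = [[0.1, 0.2, 0.3, 0.4]];
console.log(model.forward(batch)); // 1x2 output logits
```

DistML.js keeps this eager style while executing tensor operations on WebGL and additionally supporting gradient-based training and server communication; see the linked repository for the library's real API.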
Related papers
- ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models [74.64651681052628]
We introduce ModelScope-Agent, a customizable agent framework for real-world applications based on open-source LLMs as controllers.
It provides a user-friendly system library with a customizable engine design to support model training on multiple open-source LLMs.
A comprehensive framework is proposed, spanning tool-use data collection, tool retrieval, tool registration, memory control, customized model training, and evaluation.
arXiv Detail & Related papers (2023-09-02T16:50:30Z)
- In Situ Framework for Coupling Simulation and Machine Learning with Application to CFD [51.04126395480625]
Recent years have seen many successful applications of machine learning (ML) to facilitate fluid dynamic computations.
As simulations grow, generating new training datasets for traditional offline learning creates I/O and storage bottlenecks.
This work offers a solution by simplifying this coupling and enabling in situ training and inference on heterogeneous clusters.
arXiv Detail & Related papers (2023-06-22T14:07:54Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated Learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
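As a point of reference for the FL setting described above, the snippet below sketches the standard FedAvg-style weighted parameter averaging a central server might perform after each round. It is a generic illustration in TypeScript, not code from the paper, and the names (ClientUpdate, federatedAverage) are ours.

```typescript
// Generic FedAvg-style aggregation: the server averages client parameter
// vectors weighted by the number of local samples. Illustrative only; this is
// not code from the paper above.

interface ClientUpdate {
  params: Float64Array; // flattened model parameters after local training
  numSamples: number;   // size of the client's local dataset
}

function federatedAverage(updates: ClientUpdate[]): Float64Array {
  const total = updates.reduce((sum, u) => sum + u.numSamples, 0);
  const avg = new Float64Array(updates[0].params.length);
  for (const u of updates) {
    const weight = u.numSamples / total;
    for (let i = 0; i < avg.length; i++) avg[i] += weight * u.params[i];
  }
  return avg; // becomes the global model for the next round
}

// Example: two clients with different dataset sizes.
const globalParams = federatedAverage([
  { params: new Float64Array([1, 2]), numSamples: 100 },
  { params: new Float64Array([3, 4]), numSamples: 300 },
]);
console.log(globalParams); // => [2.5, 3.5]
```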
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Scaling Up Models and Data with $\texttt{t5x}$ and $\texttt{seqio}$ [118.04625413322827]
$\texttt{t5x}$ and $\texttt{seqio}$ are open source software libraries for building and training language models.
These libraries have been used to train models with hundreds of billions of parameters on datasets with multiple terabytes of training data.
arXiv Detail & Related papers (2022-03-31T17:12:13Z)
- FLHub: a Federated Learning model sharing service [0.7614628596146599]
We propose Federated Learning Hub (FLHub) as a sharing service for machine learning models.
FLHub allows users to upload, download, and contribute to models developed by other developers, much like GitHub.
We demonstrate that a forked model can finish training faster than the existing model and that learning progresses more quickly in each federated round.
arXiv Detail & Related papers (2022-02-14T06:02:55Z)
- Solo-learn: A Library of Self-supervised Methods for Visual Representation Learning [83.02597612195966]
solo-learn is a library of self-supervised methods for visual representation learning.
Implemented in Python using PyTorch and PyTorch Lightning, the library fits both research and industry needs.
arXiv Detail & Related papers (2021-08-03T22:19:55Z)
- WAX-ML: A Python library for machine learning and feedback loops on streaming data [0.0]
WAX-ML is a research-oriented Python library.
It provides tools to design powerful machine learning algorithms.
It strives to complement JAX with tools dedicated to time series.
arXiv Detail & Related papers (2021-06-11T17:42:02Z)
- ThingML+ Augmenting Model-Driven Software Engineering for the Internet of Things with Machine Learning [4.511923587827301]
We present the current position of the research project ML-Quadrat, which aims to extend the methodology, modeling language and tool support of ThingML.
We argue that in many cases IoT/CPS services involve system components and physical processes whose behaviors are not well enough understood to be modeled using state machines.
We plan to support two target platforms for code generation, namely Apache SAMOA and Apama, covering Stream Processing and Complex Event Processing.
arXiv Detail & Related papers (2020-09-22T15:45:45Z)
- From Things' Modeling Language (ThingML) to Things' Machine Learning (ThingML2) [4.014524824655106]
We enhance ThingML to support machine learning at the modeling level.
Our DSL allows one to define things that are in charge of carrying out data analytics.
Our code generators can automatically produce the complete implementation in Java and Python.
arXiv Detail & Related papers (2020-09-22T15:44:57Z)