TensorX: Extensible API for Neural Network Model Design and Deployment
- URL: http://arxiv.org/abs/2012.14539v2
- Date: Sat, 2 Jan 2021 23:35:07 GMT
- Title: TensorX: Extensible API for Neural Network Model Design and Deployment
- Authors: Davide Nunes and Luis Antunes
- Abstract summary: TensorX is a Python library for prototyping, design, and deployment of complex neural network models in TensorFlow.
A special emphasis is put on ease of use, performance, and API consistency.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: TensorX is a Python library for prototyping, design, and deployment of
complex neural network models in TensorFlow. A special emphasis is put on ease
of use, performance, and API consistency. It aims to make available high-level
components like neural network layers that are, in effect, stateful functions,
easy to compose and reuse. Its architecture allows for the expression of
patterns commonly found when building neural network models in either research
or industrial settings. Incorporating ideas from several other deep learning
libraries, it makes it easy to use components commonly found in
state-of-the-art models. The library design mixes functional dataflow
computation graphs with object-oriented neural network building blocks. TensorX
combines the dynamic nature of Python with the high-performance GPU-enabled
operations of TensorFlow.
This library has minimal core dependencies (TensorFlow and NumPy) and is
distributed under the Apache License 2.0, encouraging its use in both
academic and commercial settings. Full documentation, source code, and binaries
can be found at https://tensorx.org/.
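As an illustration of the "stateful functions" pattern the abstract describes, the sketch below composes callable layer objects into a functional dataflow in plain TensorFlow. The names (DenseLayer, compose) are illustrative assumptions, not TensorX's actual API.

```python
# A minimal sketch of layers as stateful functions, in plain TensorFlow.
# DenseLayer and compose are illustrative names, not TensorX's API.
import tensorflow as tf


class DenseLayer:
    """A stateful function: owns its weights and is callable like a function."""

    def __init__(self, n_in, n_out, activation=tf.nn.relu):
        self.w = tf.Variable(tf.random.normal([n_in, n_out], stddev=0.1))
        self.b = tf.Variable(tf.zeros([n_out]))
        self.activation = activation

    def __call__(self, x):
        return self.activation(tf.matmul(x, self.w) + self.b)


def compose(*layers):
    """Functional composition of stateful layers into a dataflow."""
    def model(x):
        for layer in layers:
            x = layer(x)
        return x
    return model


# Layers are reusable objects; the composed model is an ordinary function.
model = compose(DenseLayer(8, 16), DenseLayer(16, 1, activation=tf.identity))
y = model(tf.random.normal([4, 8]))  # shape (4, 1)
```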
Related papers
- Comgra: A Tool for Analyzing and Debugging Neural Networks [35.89730807984949]
We introduce comgra, an open-source Python library for use with PyTorch.
Comgra extracts data about the internal activations of a model and organizes it in a GUI.
It can show both summary statistics and individual data points, compare early and late stages of training, focus on individual samples of interest, and visualize the flow of the gradient through the network.
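Activation extraction of the kind comgra performs is commonly built on PyTorch forward hooks. The sketch below shows that generic mechanism; it is not comgra's actual code or API.

```python
# A generic sketch of activation extraction with PyTorch forward hooks,
# the kind of mechanism a tool like comgra builds on (not comgra's code).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
activations = {}


def record(name):
    def hook(module, inputs, output):
        # Detach so recorded tensors do not keep the autograd graph alive.
        activations[name] = output.detach()
    return hook


for name, module in model.named_modules():
    if name:  # skip the root container itself
        module.register_forward_hook(record(name))

model(torch.randn(3, 4))
for name, act in activations.items():
    print(name, tuple(act.shape), float(act.mean()))
```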
arXiv Detail & Related papers (2024-07-31T14:57:23Z) - KHNNs: hypercomplex neural networks computations via Keras using TensorFlow and PyTorch [0.0]
We propose a library of applications integrated with Keras that can run hypercomplex computations within TensorFlow and PyTorch.
It provides Dense and Convolutional 1D, 2D, and 3D layer architectures.
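A hypercomplex dense layer replaces the real matrix product with a Hamilton (quaternion) product over four weight matrices. The sketch below is a minimal Keras version of that idea; the class name and weight layout are assumptions, not the KHNNs API.

```python
# A minimal quaternion dense layer in Keras, sketching the hypercomplex
# computation pattern; names and layout are illustrative, not KHNNs' API.
import tensorflow as tf
from tensorflow import keras


class QuaternionDense(keras.layers.Layer):
    """Dense layer over quaternion features via the Hamilton product.

    Expects the last input axis to hold 4 * in_units real values,
    ordered as the (r, i, j, k) quaternion components.
    """

    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        in_units = input_shape[-1] // 4
        # One real weight matrix per quaternion component.
        self.w = [
            self.add_weight(shape=(in_units, self.units), initializer="glorot_uniform")
            for _ in range(4)
        ]

    def call(self, x):
        xr, xi, xj, xk = tf.split(x, 4, axis=-1)
        wr, wi, wj, wk = self.w
        # Hamilton product of the quaternion input with quaternion weights.
        r = xr @ wr - xi @ wi - xj @ wj - xk @ wk
        i = xr @ wi + xi @ wr + xj @ wk - xk @ wj
        j = xr @ wj - xi @ wk + xj @ wr + xk @ wi
        k = xr @ wk + xi @ wj - xj @ wi + xk @ wr
        return tf.concat([r, i, j, k], axis=-1)


layer = QuaternionDense(8)
out = layer(tf.random.normal([2, 4 * 3]))  # 3 quaternion inputs -> shape (2, 32)
```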
arXiv Detail & Related papers (2024-06-29T14:36:37Z) - KerasCV and KerasNLP: Vision and Language Power-Ups [9.395199188271254]
KerasCV and KerasNLP are extensions of the Keras API for Computer Vision and Natural Language Processing.
These domain packages are designed to enable fast experimentation, with a focus on ease-of-use and performance.
The libraries are fully open-source (Apache 2.0 license) and available on GitHub.
arXiv Detail & Related papers (2024-05-30T16:58:34Z) - TensorKrowch: Smooth integration of tensor networks in machine learning [46.0920431279359]
We introduce TensorKrowch, an open-source Python library built on top of PyTorch.
TensorKrowch allows users to construct any tensor network, train it, and integrate it as a layer in more intricate deep learning models.
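A tensor network can act as a trainable layer by contracting embedded inputs with a chain of cores. The sketch below shows a toy matrix product state (MPS) layer in plain PyTorch; it illustrates the idea only and is not TensorKrowch's actual API.

```python
# A toy matrix product state (MPS) layer in plain PyTorch, sketching a
# tensor network used as a trainable layer (not TensorKrowch's API).
import torch
import torch.nn as nn


class MPSLayer(nn.Module):
    """Contracts N embedded inputs with an MPS to produce `out_dim` scores."""

    def __init__(self, n_sites, out_dim, bond_dim=4, phys_dim=2):
        super().__init__()
        # One (bond, phys, bond) core per input site, plus an output core.
        self.cores = nn.ParameterList(
            [nn.Parameter(0.1 * torch.randn(bond_dim, phys_dim, bond_dim))
             for _ in range(n_sites)]
        )
        self.out_core = nn.Parameter(0.1 * torch.randn(bond_dim, out_dim))
        self.left = nn.Parameter(0.1 * torch.randn(bond_dim))

    def forward(self, x):  # x: (batch, n_sites)
        # Local feature map: each scalar becomes the 2-vector (1, x).
        emb = torch.stack([torch.ones_like(x), x], dim=-1)  # (batch, N, 2)
        v = self.left.expand(x.shape[0], -1)  # (batch, bond)
        for s, core in enumerate(self.cores):
            # Contract the running bond vector with this site's core and input.
            m = torch.einsum("bp,dpe->bde", emb[:, s], core)  # (batch, bond, bond)
            v = torch.einsum("bd,bde->be", v, m)
        return v @ self.out_core  # (batch, out_dim)


scores = MPSLayer(n_sites=6, out_dim=3)(torch.randn(5, 6))  # shape (5, 3)
```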
arXiv Detail & Related papers (2023-06-14T15:55:19Z) - torchgfn: A PyTorch GFlowNet library [56.071033896777784]
torchgfn is a PyTorch library that aims to address the need for a unified GFlowNet library.
It provides users with a simple API for environments and useful abstractions for samplers and losses.
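The summary names three abstractions: environments, samplers, and losses. The sketch below shows how they fit together on a toy chain environment with a trajectory-balance loss; all names here are generic illustrations, not torchgfn's actual classes.

```python
# A generic sketch of the environment / sampler / loss pattern for
# GFlowNets on a toy chain; names are illustrative, not torchgfn's API.
import torch


class ChainEnv:
    """Toy environment: start at state 0, actions advance or terminate."""

    def __init__(self, length=5):
        self.length = length

    def initial_state(self):
        return 0

    def step(self, state, action):  # action 1 = advance, 0 = stop
        next_state = state + 1 if action == 1 and state < self.length else state
        done = action == 0 or next_state == self.length
        return next_state, done

    def reward(self, state):
        return float(state + 1)  # larger terminal states are better


def sample_trajectory(env, forward_logits):
    """Sampler: roll out one trajectory under the forward policy."""
    state, done, log_pf = env.initial_state(), False, 0.0
    while not done:
        probs = torch.softmax(forward_logits[state], dim=0)
        action = torch.multinomial(probs, 1).item()
        log_pf = log_pf + torch.log(probs[action])
        state, done = env.step(state, action)
    return state, log_pf


def trajectory_balance_loss(log_z, log_pf, reward):
    """Trajectory balance; the backward policy on a chain is deterministic."""
    return (log_z + log_pf - torch.log(torch.tensor(reward))) ** 2


env = ChainEnv()
forward_logits = torch.zeros(env.length + 1, 2, requires_grad=True)
log_z = torch.zeros((), requires_grad=True)
state, log_pf = sample_trajectory(env, forward_logits)
loss = trajectory_balance_loss(log_z, log_pf, env.reward(state))
loss.backward()  # gradients flow to both the policy logits and log Z
```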
arXiv Detail & Related papers (2023-05-24T00:20:59Z) - DeepLab2: A TensorFlow Library for Deep Labeling [118.95446843615049]
DeepLab2 is a library for deep labeling for general dense pixel prediction problems in computer vision.
DeepLab2 includes all our recently developed DeepLab model variants with pretrained checkpoints as well as model training and evaluation code.
To showcase the effectiveness of DeepLab2, our Panoptic-DeepLab employing Axial-SWideRNet as network backbone achieves 68.0% PQ or 83.5% mIoU on the Cityscapes validation set.
arXiv Detail & Related papers (2021-06-17T18:04:53Z) - TensorFlow RiemOpt: a library for optimization on Riemannian manifolds [0.3655021726150367]
The adoption of neural networks and deep learning in non-Euclidean domains has been hindered until recently by the lack of scalable and efficient learning frameworks.
We attempt to bridge this gap by proposing TensorFlow RiemOpt, a Python library for optimization of machine learning models on Riemannian manifolds.
The library is designed for seamless integration with the TensorFlow ecosystem, targeting not only research but also streamlining production machine learning pipelines.
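A Riemannian optimization step projects the Euclidean gradient onto the tangent space of the manifold and retracts the update back onto it. The sketch below shows one such step on the unit sphere in plain TensorFlow; the function name and setup are illustrative, not the library's API.

```python
# A minimal Riemannian SGD step on the unit sphere in plain TensorFlow,
# the kind of update such a library packages behind optimizer classes.
import tensorflow as tf


def sphere_step(x, egrad, lr=0.1):
    """One Riemannian gradient step on the unit sphere S^{n-1}."""
    # Project the Euclidean gradient onto the tangent space at x.
    rgrad = egrad - tf.reduce_sum(x * egrad) * x
    # Retract the update back onto the manifold (here: renormalize).
    y = x - lr * rgrad
    return y / tf.norm(y)


# Minimize <x, a> on the sphere; the optimum is x = -a / ||a||.
a = tf.constant([3.0, 4.0])
x = tf.constant([1.0, 0.0])
for _ in range(100):
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = tf.tensordot(x, a, axes=1)
    x = sphere_step(x, tape.gradient(loss, x))
print(x.numpy())  # approx [-0.6, -0.8]
```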
arXiv Detail & Related papers (2021-05-27T10:42:09Z) - Efficient Graph Deep Learning in TensorFlow with tf_geometric [53.237754811019464]
We introduce tf_geometric, an efficient and friendly library for graph deep learning.
tf_geometric provides kernel libraries for building Graph Neural Networks (GNNs) as well as implementations of popular GNNs.
The kernel libraries consist of infrastructures for building efficient GNNs, including graph data structures, graph map-reduce framework, graph mini-batch strategy, etc.
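The graph map-reduce idea mentioned above amounts to mapping a message over every edge and reducing messages per target node. A minimal sketch of that computation in plain TensorFlow (not tf_geometric's actual API) follows:

```python
# A minimal message-passing step in plain TensorFlow: map one message
# per edge, then reduce (mean) per target node. Not tf_geometric's API.
import tensorflow as tf


def mean_aggregate(x, edge_index):
    """Average each node's incoming neighbor features.

    x: (num_nodes, feat) node features.
    edge_index: (2, num_edges) source/target node ids.
    """
    src, dst = edge_index[0], edge_index[1]
    messages = tf.gather(x, src)  # map: one message per edge
    summed = tf.math.unsorted_segment_sum(messages, dst, tf.shape(x)[0])
    counts = tf.math.unsorted_segment_sum(
        tf.ones([tf.shape(src)[0], 1]), dst, tf.shape(x)[0])
    return summed / tf.maximum(counts, 1.0)  # reduce: mean per node


# 3-node graph with edges 0->1, 2->1, 1->0.
x = tf.constant([[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
edge_index = tf.constant([[0, 2, 1], [1, 1, 0]])
h = mean_aggregate(x, edge_index)  # node 1 gets the mean of nodes 0 and 2
```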
arXiv Detail & Related papers (2021-01-27T17:16:36Z) - Captum: A unified and generic model interpretability library for PyTorch [49.72749684393332]
We introduce a novel, unified, open-source model interpretability library for PyTorch.
The library contains generic implementations of a number of gradient and perturbation-based attribution algorithms.
It can be used for both classification and non-classification models.
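Captum's documented IntegratedGradients class illustrates the attribution workflow; the toy model and shapes below are assumptions for illustration only.

```python
# A short usage sketch of gradient-based attribution with Captum's
# IntegratedGradients; the toy model and shapes are assumptions.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
model.eval()

inputs = torch.randn(2, 4, requires_grad=True)
ig = IntegratedGradients(model)
# Attribute the score of class 1 back to the input features.
attributions, delta = ig.attribute(inputs, target=1, return_convergence_delta=True)
print(attributions.shape)  # (2, 4): one attribution per input feature
```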
arXiv Detail & Related papers (2020-09-16T18:57:57Z) - fastai: A Layered API for Deep Learning [1.7223564681760164]
fastai is a deep learning library which provides practitioners with high-level components.
It provides researchers with low-level components that can be mixed and matched to build new approaches.
arXiv Detail & Related papers (2020-02-11T21:16:48Z) - On the distance between two neural networks and the stability of learning [59.62047284234815]
This paper relates parameter distance to gradient breakdown for a broad class of nonlinear compositional functions.
The analysis leads to a new distance function called deep relative trust and a descent lemma for neural networks.
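As best recalled from the paper, deep relative trust takes a multiplicative form over the L layers of a composition; the LaTeX sketch below is a hedged reconstruction of that form, with notation assumed here rather than quoted verbatim from the paper.

```latex
% Hedged sketch: "deep relative trust" between weights W = (W_1, ..., W_L)
% and a perturbed setting W + \Delta W (notation assumed, not verbatim).
d\big(W,\, W + \Delta W\big) \;=\; \prod_{l=1}^{L}
  \left(1 + \frac{\lVert \Delta W_l \rVert_F}{\lVert W_l \rVert_F}\right) \;-\; 1
```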
arXiv Detail & Related papers (2020-02-09T19:18:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.