KHNNs: hypercomplex neural networks computations via Keras using TensorFlow and PyTorch
- URL: http://arxiv.org/abs/2407.00452v1
- Date: Sat, 29 Jun 2024 14:36:37 GMT
- Title: KHNNs: hypercomplex neural networks computations via Keras using TensorFlow and PyTorch
- Authors: Agnieszka Niemczynowicz, Radosław Antoni Kycia
- Abstract summary: We propose a library integrated with Keras that can perform computations within TensorFlow and PyTorch.
It provides Dense and Convolutional 1D, 2D, and 3D layer architectures.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks used in computations with more advanced algebras than real numbers perform better in some applications. However, there is no general framework for constructing hypercomplex neural networks. We propose a library integrated with Keras that can do computations within TensorFlow and PyTorch. It provides Dense and Convolutional 1D, 2D, and 3D layer architectures.
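For orientation, the sketch below shows the kind of hypercomplex layer such a library provides: a quaternion Dense layer built from the standard Hamilton product, written against the multi-backend keras.ops API so the same code runs on TensorFlow or PyTorch. The class name, constructor, and weight layout are illustrative assumptions, not the actual KHNNs API.

    import keras
    from keras import ops

    class QuaternionDense(keras.layers.Layer):
        """Illustrative quaternion dense layer (Hamilton product); not the KHNNs API."""

        def __init__(self, units, **kwargs):
            super().__init__(**kwargs)
            self.units = units  # number of output quaternions

        def build(self, input_shape):
            self.in_q = input_shape[-1] // 4  # number of input quaternions
            # One real-valued kernel per quaternion component r, i, j, k.
            self.wr, self.wi, self.wj, self.wk = (
                self.add_weight(shape=(self.in_q, self.units),
                                initializer="glorot_uniform", name=f"w_{c}")
                for c in "rijk")

        def call(self, x):
            n = self.in_q
            r, i, j, k = x[..., :n], x[..., n:2*n], x[..., 2*n:3*n], x[..., 3*n:]
            # Hamilton product of the quaternion-valued input with the quaternion kernel.
            out_r = ops.matmul(r, self.wr) - ops.matmul(i, self.wi) - ops.matmul(j, self.wj) - ops.matmul(k, self.wk)
            out_i = ops.matmul(r, self.wi) + ops.matmul(i, self.wr) + ops.matmul(j, self.wk) - ops.matmul(k, self.wj)
            out_j = ops.matmul(r, self.wj) - ops.matmul(i, self.wk) + ops.matmul(j, self.wr) + ops.matmul(k, self.wi)
            out_k = ops.matmul(r, self.wk) + ops.matmul(i, self.wj) - ops.matmul(j, self.wi) + ops.matmul(k, self.wr)
            return ops.concatenate([out_r, out_i, out_j, out_k], axis=-1)

    # Example: 4 input quaternions mapped to 8 output quaternions, i.e. shape (2, 16) -> (2, 32).
    # y = QuaternionDense(8)(keras.random.normal((2, 16)))

Splitting the 4n-dimensional input into its four components and sharing four real kernels is the usual way quaternion-valued layers are realized on real-valued backends; convolutional 1D, 2D, and 3D variants follow the same pattern with convolutions in place of matrix products.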
Related papers
- RoseNNa: A performant, portable library for neural network inference with application to computational fluid dynamics [0.0]
We present the roseNNa library, which bridges the gap between neural network inference and CFD.
RoseNNa is a non-invasive, lightweight (1000 lines) tool for neural network inference.
arXiv Detail & Related papers (2023-07-30T21:11:55Z) - SVNet: Where SO(3) Equivariance Meets Binarization on Point Cloud Representation [65.4396959244269]
The paper tackles the challenge by designing a general framework to construct 3D learning architectures.
The proposed approach can be applied to general backbones like PointNet and DGCNN.
Experiments on ModelNet40, ShapeNet, and the real-world dataset ScanObjectNN demonstrate that the method achieves a good trade-off between efficiency, rotation robustness, and accuracy.
arXiv Detail & Related papers (2022-09-13T12:12:19Z) - Fast Finite Width Neural Tangent Kernel [47.57136433797996]
The neural network Jacobian has emerged as a central object of study in deep learning.
The finite width NTK is notoriously expensive to compute.
We propose two novel algorithms that change the exponent of the compute and memory requirements of the finite width NTK.
arXiv Detail & Related papers (2022-06-17T12:18:22Z) - Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem, which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
arXiv Detail & Related papers (2022-06-15T17:58:34Z) - The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU-network with standard Gaussian weights and uniformly distributed biases can make two classes of data linearly separable with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
arXiv Detail & Related papers (2021-07-31T10:25:26Z) - Vector Neurons: A General Framework for SO(3)-Equivariant Networks [32.81671803104126]
In this paper, we introduce a general framework built on top of what we call Vector Neuron representations.
Our vector neurons enable a simple mapping of SO(3) actions to latent spaces.
We also show for the first time a rotation equivariant reconstruction network.
arXiv Detail & Related papers (2021-04-25T18:48:15Z) - TensorX: Extensible API for Neural Network Model Design and Deployment [0.0]
TensorX is a Python library for prototyping, designing, and deploying complex neural network models in TensorFlow.
A special emphasis is put on ease of use, performance, and API consistency.
arXiv Detail & Related papers (2020-12-29T00:15:38Z) - Deep Polynomial Neural Networks [77.70761658507507]
$\Pi$-Nets are a new class of function approximators based on polynomial expansions.
$\Pi$-Nets produce state-of-the-art results in three challenging tasks: image generation, face verification, and 3D mesh representation learning.
arXiv Detail & Related papers (2020-06-20T16:23:32Z) - Deep Learning in Memristive Nanowire Networks [0.0]
A new hardware architecture, dubbed the MN3 (Memristive Nanowire Neural Network), was recently described as an efficient architecture for simulating very wide, sparse neural network layers.
We show that the MN3 is capable of performing composition, gradient propagation, and weight updates, which together allow it to function as a deep neural network.
arXiv Detail & Related papers (2020-03-03T20:11:33Z) - On the distance between two neural networks and the stability of learning [59.62047284234815]
This paper relates parameter distance to gradient breakdown for a broad class of nonlinear compositional functions.
The analysis leads to a new distance function called deep relative trust and a descent lemma for neural networks (a hedged sketch of the distance appears after this entry).
arXiv Detail & Related papers (2020-02-09T19:18:39Z)
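As a rough orientation for the last entry above, the deep relative trust distance aggregates per-layer relative perturbations multiplicatively; up to the exact choice of norms (an assumption here, see the paper for the precise definition), it takes the form

    d(W, W + \Delta W) \approx \prod_{l=1}^{L} \left( 1 + \frac{\|\Delta W_l\|}{\|W_l\|} \right) - 1,

so small relative changes in every layer can compound across depth into a large change in the network's output and gradients.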