QuadraLib: A Performant Quadratic Neural Network Library for
Architecture Optimization and Design Exploration
- URL: http://arxiv.org/abs/2204.01701v1
- Date: Fri, 1 Apr 2022 18:06:54 GMT
- Title: QuadraLib: A Performant Quadratic Neural Network Library for
Architecture Optimization and Design Exploration
- Authors: Zirui Xu, Fuxun Yu, Jinjun Xiong, Xiang Chen
- Abstract summary: Quadratic Deep Neuron Networks (QDNNs) show better non-linearity and learning capability than first-order DNNs.
Our design has good performance regarding prediction accuracy and computation consumption on multiple learning tasks.
- Score: 31.488940932186246
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The significant success of Deep Neural Networks (DNNs) has been
greatly aided by the many sophisticated DNN libraries. In contrast, although
prior work has shown that Quadratic Deep Neuron Networks (QDNNs) offer better
non-linearity and learning capability than first-order DNNs, their neuron
design suffers from drawbacks ranging from theoretical performance to practical
deployment. In this paper, we first propose a new QDNN neuron architecture
design, and further develop QuadraLib, a QDNN library that provides
architecture optimization and design exploration for QDNNs. Extensive
experiments show that our design achieves good performance in both prediction
accuracy and computation consumption on multiple learning tasks.
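The abstract contrasts quadratic neurons with first-order ones without detailing QuadraLib's actual neuron architecture. As a minimal sketch of the general quadratic-neuron idea only, a second-order neuron adds a term x^T W x to the usual linear pre-activation; the weights, activation choice, and `quadratic_neuron` helper below are illustrative assumptions, not QuadraLib's API:

```python
import numpy as np

def quadratic_neuron(x, W, w, b):
    # Second-order pre-activation: the quadratic term x^T W x captures
    # pairwise input interactions that a first-order neuron cannot,
    # on top of the usual linear term w^T x + b.
    return np.tanh(x @ W @ x + w @ x + b)

# Hypothetical 3-input example.
x = np.array([1.0, 0.5, -0.25])
W = 0.1 * np.eye(3)              # quadratic interaction weights
w = np.array([0.2, -0.1, 0.3])   # first-order weights
y = quadratic_neuron(x, W, w, 0.05)

# With W = 0 the neuron reduces to an ordinary first-order neuron,
# which is why quadratic neurons strictly generalize first-order ones.
y_linear = quadratic_neuron(x, np.zeros((3, 3)), w, 0.05)
```

Note that a full interaction matrix W costs O(d^2) parameters per neuron for d inputs; this kind of overhead is presumably what a QDNN library must optimize, though the abstract does not say how QuadraLib's design addresses it.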
Related papers
- NAS-BNN: Neural Architecture Search for Binary Neural Networks [55.058512316210056]
We propose a novel neural architecture search scheme for binary neural networks, named NAS-BNN.
Our discovered binary model family outperforms previous BNNs for a wide range of operations (OPs) from 20M to 200M.
In addition, we validate the transferability of these searched BNNs on the object detection task, and our binary detectors with the searched BNNs achieve a new state-of-the-art result, e.g., 31.6% mAP with 370M OPs, on the MS dataset.
arXiv Detail & Related papers (2024-08-28T02:17:58Z)
- Direct Training High-Performance Deep Spiking Neural Networks: A Review of Theories and Methods [33.377770671553336]
Spiking neural networks (SNNs) offer a promising energy-efficient alternative to artificial neural networks (ANNs)
In this paper, we provide a new perspective to summarize the theories and methods for training deep SNNs with high performance.
arXiv Detail & Related papers (2024-05-06T09:58:54Z)
- Rethinking Residual Connection in Training Large-Scale Spiking Neural Networks [10.286425749417216]
The Spiking Neural Network (SNN) is among the best-known brain-inspired models.
Its non-differentiable spiking mechanism makes large-scale SNNs hard to train.
arXiv Detail & Related papers (2023-11-09T06:48:29Z)
- From Alexnet to Transformers: Measuring the Non-linearity of Deep Neural Networks with Affine Optimal Transport [32.39176908225668]
We introduce the concept of the non-linearity signature of a DNN, the first theoretically sound solution for measuring the non-linearity of deep neural networks.
We provide extensive experimental results that highlight the practical usefulness of the proposed non-linearity signature.
arXiv Detail & Related papers (2023-10-17T17:50:22Z)
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded in homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- AutoSNN: Towards Energy-Efficient Spiking Neural Networks [26.288681480713695]
Spiking neural networks (SNNs) mimic information transmission in the brain.
Most previous studies have focused solely on training methods, and the effect of architecture has rarely been studied.
We propose a spike-aware neural architecture search framework called AutoSNN.
arXiv Detail & Related papers (2022-01-30T06:12:59Z)
- Deep Reinforcement Learning with Spiking Q-learning [51.386945803485084]
Spiking neural networks (SNNs) are expected to realize artificial intelligence (AI) with less energy consumption.
Combining SNNs with deep reinforcement learning (RL) provides a promising energy-efficient approach to realistic control tasks.
arXiv Detail & Related papers (2022-01-21T16:42:11Z)
- Keys to Accurate Feature Extraction Using Residual Spiking Neural Networks [1.101002667958165]
Spiking neural networks (SNNs) have become an interesting alternative to conventional artificial neural networks (ANNs)
We present a study on the key components of modern spiking architectures.
We design a spiking version of the successful residual network (ResNet) architecture and test different components and training strategies on it.
arXiv Detail & Related papers (2021-11-10T21:29:19Z)
- Toward Trainability of Quantum Neural Networks [87.04438831673063]
Quantum Neural Networks (QNNs) have been proposed as generalizations of classical neural networks to achieve the quantum speed-up.
Serious bottlenecks exist for training QNNs because gradients vanish at a rate exponential in the number of input qubits.
We study QNNs with tree-tensor and step-controlled structures for the application of binary classification. Simulations show faster convergence rates and better accuracy compared to QNNs with random structures.
arXiv Detail & Related papers (2020-11-12T08:32:04Z)
- Attentive Graph Neural Networks for Few-Shot Learning [74.01069516079379]
Graph Neural Networks (GNNs) have demonstrated superior performance in many challenging applications, including few-shot learning tasks.
Despite their powerful capacity to learn and generalize from few samples, GNNs usually suffer from severe over-fitting and over-smoothing as the model becomes deep.
We propose a novel Attentive GNN to tackle these challenges, by incorporating a triple-attention mechanism.
arXiv Detail & Related papers (2020-07-14T07:43:09Z)
- Widening and Squeezing: Towards Accurate and Efficient QNNs [125.172220129257]
Quantization neural networks (QNNs) are very attractive to industry because of their extremely low computation and storage overhead, but their performance still lags behind that of networks with full-precision parameters.
Most existing methods aim to enhance the performance of QNNs, especially binary neural networks, by exploiting more effective training techniques.
We address this problem by projecting features in original full-precision networks to high-dimensional quantization features.
arXiv Detail & Related papers (2020-02-03T04:11:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.