Mixed-TD: Efficient Neural Network Accelerator with Layer-Specific
Tensor Decomposition
- URL: http://arxiv.org/abs/2306.05021v2
- Date: Thu, 22 Jun 2023 15:44:01 GMT
- Title: Mixed-TD: Efficient Neural Network Accelerator with Layer-Specific
Tensor Decomposition
- Authors: Zhewen Yu, Christos-Savvas Bouganis
- Abstract summary: We propose a framework for mapping CNNs onto FPGAs based on a novel tensor decomposition method called Mixed-TD.
The proposed method applies layer-specific Singular Value Decomposition (SVD) and Canonical Polyadic Decomposition (CPD) in a mixed manner, achieving gains of 1.73x to 10.29x in throughput per DSP compared to state-of-the-art CNN accelerators.
- Score: 7.221206118679026
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Neural Network designs are quite diverse, from VGG-style to ResNet-style, and
from Convolutional Neural Networks to Transformers. Towards the design of
efficient accelerators, many works have adopted a dataflow-based, inter-layer
pipelined architecture with customised hardware for each layer, achieving
ultra-high throughput and low latency. The deployment of neural networks to
such dataflow accelerators is usually hindered by the available on-chip memory,
since it is desirable to preload the network weights on-chip to maximise system
performance. To address this, networks are usually compressed before deployment
through methods such as pruning, quantization and tensor decomposition. In this
paper, a framework for mapping CNNs onto FPGAs based on a novel tensor
decomposition method called Mixed-TD is proposed. The proposed method applies
layer-specific Singular Value Decomposition (SVD) and Canonical Polyadic
Decomposition (CPD) in a mixed manner, achieving gains of 1.73x to 10.29x in
throughput per DSP compared to state-of-the-art CNN accelerators.
Our work is open-sourced: https://github.com/Yu-Zhewen/Mixed-TD
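As a rough illustration of the idea (not the authors' released code, which is linked above), the sketch below applies truncated SVD and rank-R CPD to the same convolutional kernel and keeps whichever reconstructs it better at a given rank. It assumes NumPy and TensorLy; the rank and the selection criterion are illustrative placeholders, whereas Mixed-TD searches per-layer decomposition candidates against hardware-level throughput.

```python
# Minimal sketch (assumption-laden, not the authors' implementation):
# per-layer choice between SVD and CPD for a conv kernel of shape (C_out, C_in, K, K).
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

def svd_compress(w, rank):
    """Truncated SVD on the (C_out, C_in*K*K) matricisation of the kernel."""
    c_out = w.shape[0]
    mat = w.reshape(c_out, -1)
    u, s, vt = np.linalg.svd(mat, full_matrices=False)
    u, s, vt = u[:, :rank], s[:rank], vt[:rank]
    approx = (u * s) @ vt
    return approx.reshape(w.shape), u.size + vt.size  # parameter count of the factors

def cpd_compress(w, rank):
    """Rank-R Canonical Polyadic Decomposition of the full 4-D kernel."""
    cp = parafac(tl.tensor(w), rank=rank)
    approx = tl.cp_to_tensor(cp)
    n_params = sum(f.size for f in cp.factors)
    return np.asarray(approx), n_params

def pick_decomposition(w, rank):
    """Layer-specific choice: keep whichever decomposition reconstructs this
    particular kernel better at the same rank (one plausible criterion only;
    the paper evaluates candidates against hardware metrics instead)."""
    cands = {"SVD": svd_compress(w, rank), "CPD": cpd_compress(w, rank)}
    errs = {k: np.linalg.norm(w - a) / np.linalg.norm(w) for k, (a, _) in cands.items()}
    best = min(errs, key=errs.get)
    return best, errs[best]

w = np.random.randn(64, 32, 3, 3)   # example conv kernel
print(pick_decomposition(w, rank=8))
```

In practice the better-suited decomposition differs from layer to layer, which is what motivates the mixed, layer-specific selection described in the abstract.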
Related papers
- TCCT-Net: Two-Stream Network Architecture for Fast and Efficient Engagement Estimation via Behavioral Feature Signals [58.865901821451295]
We present a novel two-stream feature fusion "Tensor-Convolution and Convolution-Transformer Network" (TCCT-Net) architecture.
To better learn the meaningful patterns in the temporal-spatial domain, we design a "CT" stream that integrates a hybrid convolutional-transformer.
In parallel, to efficiently extract rich patterns from the temporal-frequency domain, we introduce a "TC" stream that uses Continuous Wavelet Transform (CWT) to represent information in a 2D tensor form.
arXiv Detail & Related papers (2024-04-15T06:01:48Z) - WavPool: A New Block for Deep Neural Networks [2.2311710049695446]
We introduce a new, wavelet-transform-based network architecture that we call the multi-resolution perceptron.
By adding a pooling layer, we create a new network block, the WavPool.
WavPool outperforms a similar multilayer perceptron while using fewer parameters, and outperforms a comparable convolutional neural network by 10% in relative accuracy on CIFAR-10.
arXiv Detail & Related papers (2023-06-14T20:35:01Z) - Efficient Implementation of a Multi-Layer Gradient-Free Online-Trainable
Spiking Neural Network on FPGA [0.31498833540989407]
ODESA is the first network to have end-to-end multi-layer online local supervised training without using gradients.
This research shows that the network architecture and the online training of weights and thresholds can be implemented efficiently on a large scale in hardware.
arXiv Detail & Related papers (2023-05-31T00:34:15Z) - Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method that optimizes the sparse structure of a randomly initialized network at each iteration and tweaks unimportant weights by a small amount proportional to the magnitude scale on-the-fly.
arXiv Detail & Related papers (2023-03-16T21:06:13Z) - Pushing the Efficiency Limit Using Structured Sparse Convolutions [82.31130122200578]
We propose Structured Sparse Convolution (SSC), which leverages the inherent structure in images to reduce the parameters in the convolutional filter.
We show that SSC is a generalization of commonly used layers (depthwise, groupwise and pointwise convolution) in efficient architectures.
Architectures based on SSC achieve state-of-the-art performance compared to baselines on CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet classification benchmarks.
arXiv Detail & Related papers (2022-10-23T18:37:22Z) - Nonlinear Tensor Ring Network [39.89070144585793]
State-of-the-art deep neural networks (DNNs) have been widely applied for various real-world applications, and achieved significant performance for cognitive problems.
By converting redundant models into compact ones, compression techniques appear to be a practical solution for reducing storage and memory consumption.
In this paper, we develop a nonlinear tensor ring network (NTRN) in which both fully-connected and convolutional layers are compressed.
arXiv Detail & Related papers (2021-11-12T02:02:55Z) - Convolutional Neural Network Compression through Generalized Kronecker
Product Decomposition [2.4240083226965115]
We compress layers by generalizing the Kronecker Product Decomposition to apply to multidimensional tensors, leading to the Generalized Kronecker Product Decomposition (GKPD); a minimal sketch of the classical matrix case appears after this list.
Our approach yields a plug-and-play module that can be used as a drop-in replacement for any convolutional layer.
arXiv Detail & Related papers (2021-09-29T20:45:08Z) - Quantized Neural Networks via {-1, +1} Encoding Decomposition and
Acceleration [83.84684675841167]
We propose a novel encoding scheme using {-1, +1} to decompose quantized neural networks (QNNs) into multi-branch binary networks.
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
arXiv Detail & Related papers (2021-06-18T03:11:15Z) - Structured Convolutions for Efficient Neural Network Design [65.36569572213027]
We tackle model efficiency by exploiting redundancy in the implicit structure of the building blocks of convolutional neural networks.
We show how this decomposition can be applied to 2D and 3D kernels as well as the fully-connected layers.
arXiv Detail & Related papers (2020-08-06T04:38:38Z) - DHP: Differentiable Meta Pruning via HyperNetworks [158.69345612783198]
This paper introduces a differentiable pruning method via hypernetworks for automatic network pruning.
Latent vectors control the output channels of the convolutional layers in the backbone network and act as a handle for the pruning of the layers.
Experiments are conducted on various networks for image classification, single image super-resolution, and denoising.
arXiv Detail & Related papers (2020-03-30T17:59:18Z)
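For the Generalized Kronecker Product Decomposition entry above, the following is a minimal sketch of the classical matrix case only: the Van Loan-Pitsianis nearest-Kronecker-product problem solved via an SVD of a rearranged weight matrix. The paper's multidimensional generalization to convolutional kernels is not reproduced here, and the shapes below are illustrative assumptions.

```python
# Sketch of matrix-case Kronecker product decomposition (Van Loan-Pitsianis),
# approximating W by a sum of Kronecker products kron(A_r, B_r).
import numpy as np

def nearest_kron_sum(W, m1, n1, m2, n2, rank=1):
    """Approximate W of shape (m1*m2, n1*n2) by sum_r kron(A_r, B_r)."""
    assert W.shape == (m1 * m2, n1 * n2)
    # Rearrange W so that each kron(A, B) becomes a rank-1 term vec(A) vec(B)^T.
    R = W.reshape(m1, m2, n1, n2).transpose(0, 2, 1, 3).reshape(m1 * n1, m2 * n2)
    U, S, Vt = np.linalg.svd(R, full_matrices=False)
    terms = []
    for r in range(rank):
        A = (np.sqrt(S[r]) * U[:, r]).reshape(m1, n1)
        B = (np.sqrt(S[r]) * Vt[r]).reshape(m2, n2)
        terms.append((A, B))
    W_hat = sum(np.kron(A, B) for A, B in terms)
    return terms, W_hat

W = np.random.randn(8 * 4, 6 * 3)   # stand-in for a flattened layer weight
factors, W_hat = nearest_kron_sum(W, 8, 6, 4, 3, rank=2)
print(np.linalg.norm(W - W_hat) / np.linalg.norm(W))   # relative reconstruction error
```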
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.