A Low-Cost Neural ODE with Depthwise Separable Convolution for Edge
Domain Adaptation on FPGAs
- URL: http://arxiv.org/abs/2107.12824v1
- Date: Tue, 27 Jul 2021 13:44:13 GMT
- Title: A Low-Cost Neural ODE with Depthwise Separable Convolution for Edge
Domain Adaptation on FPGAs
- Authors: Hiroki Kawakami, Hirohisa Watanabe, Keisuke Sugiura, Hiroki Matsutani
- Abstract summary: ResNet is a conventional deep neural network model that stacks many layers and parameters for higher accuracy.
In this paper, a combination of Neural ODE and DSC, called dsODENet, is designed and implemented for FPGAs.
The results demonstrate that dsODENet is comparable to or slightly better than our baseline Neural ODE implementation in terms of domain adaptation accuracy.
- Score: 2.620638110026557
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although high-performance deep neural networks are in high demand in edge
environments, computation resources are strictly limited in edge devices, and
light-weight neural network techniques, such as Depthwise Separable Convolution
(DSC), have been developed. ResNet is a conventional deep neural network model
that stacks many layers and parameters for higher accuracy. To reduce the
parameter size of ResNet, Neural ODE exploits the similarity between stacked
residual blocks and an ODE (Ordinary Differential Equation) solver, repeatedly
reusing most of its weight parameters instead of holding many distinct ones.
Neural ODE is thus significantly smaller than ResNet and can be implemented on
resource-limited edge devices. In this paper, a combination of Neural ODE and
DSC, called dsODENet, is designed and implemented for FPGAs (Field-Programmable
Gate Arrays). dsODENet is then applied to edge domain adaptation as a practical
use case and evaluated with image classification datasets. It is implemented on
Xilinx ZCU104 board and evaluated in terms of domain adaptation accuracy,
training speed, FPGA resource utilization, and speedup rate compared to a
software execution. The results demonstrate that dsODENet is comparable to or
slightly better than our baseline Neural ODE implementation in terms of domain
adaptation accuracy, while the total parameter size without pre- and
post-processing layers is reduced by 54.2% to 79.8%. The FPGA implementation
runs prediction tasks 27.9 times faster than a software execution.
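As a rough illustration of why the combination is compact, here is a minimal sketch with hypothetical layer sizes (not the paper's exact architecture): an Euler-method ODE block reuses one set of weights across all solver steps, and Depthwise Separable Convolution shrinks that shared block further.

```python
# Parameter-count and solver sketch of the dsODENet idea
# (hypothetical channel/kernel/step counts; not the paper's architecture).

def conv_params(c_in, c_out, k):
    """Standard k x k convolution: one k*k*c_in kernel per output channel."""
    return c_out * c_in * k * k

def dsc_params(c_in, c_out, k):
    """Depthwise Separable Convolution: a k x k depthwise filter per input
    channel plus a 1x1 pointwise convolution."""
    return c_in * k * k + c_out * c_in

def euler_ode_block(x, f, steps, h):
    """Euler-method ODE solver: applies the SAME f (shared weights) at every
    step, mimicking a stack of ResNet blocks with tied parameters."""
    for _ in range(steps):
        x = x + h * f(x)
    return x

C, K, N = 64, 3, 10  # channels, kernel size, ResNet blocks / ODE steps

resnet_like = N * conv_params(C, C, K)  # N distinct conv layers: 368640
neural_ode = conv_params(C, C, K)       # one conv shared by all steps: 36864
ds_ode = dsc_params(C, C, K)            # shared DSC conv: 4672
print(resnet_like, neural_ode, ds_ode)
```

Under these toy numbers, weight reuse alone gives a 10x reduction and DSC shrinks the shared block by a further ~8x; the 54.2% to 79.8% figure in the abstract is the paper's measured total reduction, which also includes layers that are not shared.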
Related papers
- NeuraLUT: Hiding Neural Network Density in Boolean Synthesizable Functions [2.7086888205833968]
Field-Programmable Gate Array (FPGA) accelerators have proven successful in handling latency- and resource-critical deep neural network (DNN) inference tasks.
We propose relaxing the boundaries of neurons and mapping entire sub-networks to a single LUT.
We validate our proposed method on a known latency-critical task, jet substructure tagging, and on the classical computer vision task, digit classification using MNIST.
arXiv Detail & Related papers (2024-02-29T16:10:21Z)
- Learning k-Level Structured Sparse Neural Networks Using Group Envelope Regularization [4.0554893636822]
We introduce a novel approach to deploy large-scale Deep Neural Networks on constrained resources.
The method speeds up inference time and aims to reduce memory demand and power consumption.
arXiv Detail & Related papers (2022-12-25T15:40:05Z)
- CADyQ: Content-Aware Dynamic Quantization for Image Super-Resolution [55.50793823060282]
We propose a novel Content-Aware Dynamic Quantization (CADyQ) method for image super-resolution (SR) networks.
CADyQ allocates optimal bits to local regions and layers adaptively based on the local contents of an input image.
The pipeline has been tested on various SR networks and evaluated on several standard benchmarks.
arXiv Detail & Related papers (2022-07-21T07:50:50Z)
- An Adaptive Device-Edge Co-Inference Framework Based on Soft Actor-Critic [72.35307086274912]
High-dimensional parameter models and large-scale mathematical calculations restrict execution efficiency, especially for Internet of Things (IoT) devices.
We propose a new Deep Reinforcement Learning (DRL) method, Soft Actor-Critic for discrete (SAC-d), which generates the exit point and compressing bits by soft policy iterations.
Based on a latency- and accuracy-aware reward design, such a computation can adapt well to complex environments such as dynamic wireless channels and arbitrary processing, and is capable of supporting 5G URLLC.
arXiv Detail & Related papers (2022-01-09T09:31:50Z)
- Quantized Neural Networks via {-1, +1} Encoding Decomposition and Acceleration [83.84684675841167]
We propose a novel encoding scheme using {-1, +1} to decompose quantized neural networks (QNNs) into multi-branch binary networks.
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
arXiv Detail & Related papers (2021-06-18T03:11:15Z)
- NullaNet Tiny: Ultra-low-latency DNN Inference Through Fixed-function Combinational Logic [4.119948826527649]
Field-programmable gate array (FPGA)-based accelerators are gaining traction as a serious contender to replace graphics processing unit/central processing unit-based platforms.
This paper presents NullaNet Tiny, a framework for constructing resource and energy-efficient, ultra-low-latency FPGA-based neural network accelerators.
arXiv Detail & Related papers (2021-04-07T00:16:39Z)
- Learning Frequency-aware Dynamic Network for Efficient Super-Resolution [56.98668484450857]
This paper explores a novel frequency-aware dynamic network for dividing the input into multiple parts according to its coefficients in the discrete cosine transform (DCT) domain.
In practice, the high-frequency part is processed with expensive operations and the lower-frequency part is assigned cheap operations to relieve the computation burden.
Experiments conducted on benchmark SISR models and datasets show that the frequency-aware dynamic network can be employed for various SISR neural architectures.
arXiv Detail & Related papers (2021-03-15T12:54:26Z)
- Accelerating ODE-Based Neural Networks on Low-Cost FPGAs [3.4795226670772745]
ODENet is a deep neural network architecture in which a stacking structure of ResNet is implemented with an ordinary differential equation solver.
It can reduce the number of parameters and strike a balance between accuracy and performance by selecting a proper solver.
It is also possible to improve the accuracy while keeping the same number of parameters on resource-limited edge devices.
arXiv Detail & Related papers (2020-12-31T06:39:22Z)
- ALF: Autoencoder-based Low-rank Filter-sharing for Efficient Convolutional Neural Networks [63.91384986073851]
We propose the autoencoder-based low-rank filter-sharing technique (ALF).
ALF shows a reduction of 70% in network parameters, 61% in operations and 41% in execution time, with minimal loss in accuracy.
arXiv Detail & Related papers (2020-07-27T09:01:22Z)
- A Learning Framework for n-bit Quantized Neural Networks toward FPGAs [20.83904734716565]
This paper proposes a novel learning framework for n-bit QNNs, whose weights are constrained to powers of two.
We also propose a novel QNN structure named n-BQ-NN, which uses shift operation to replace the multiply operation.
Experiments show that our n-BQ-NN with our SVPE can execute 2.9 times faster than with the vector processing element (VPE) in inference.
arXiv Detail & Related papers (2020-04-06T04:21:24Z)
- PatDNN: Achieving Real-Time DNN Execution on Mobile Devices with Pattern-based Weight Pruning [57.20262984116752]
We introduce a new dimension, fine-grained pruning patterns inside the coarse-grained structures, revealing a previously unknown point in design space.
With the higher accuracy enabled by fine-grained pruning patterns, the unique insight is to use the compiler to re-gain and guarantee high hardware efficiency.
arXiv Detail & Related papers (2020-01-01T04:52:07Z)
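Several of the papers above (n-BQ-NN in particular) replace multiplications with shift operations once weights are constrained to powers of two. A minimal sketch of that idea, with hypothetical function names and values, not code from any listed paper:

```python
# Hypothetical sketch: multiplying an integer activation by a weight of the
# form sign * 2**exp reduces to a bit shift instead of a multiply.

def shift_mul(x, exp, sign=1):
    """Compute x * (sign * 2**exp) using a left shift."""
    return sign * (x << exp)

print(shift_mul(5, 3), shift_mul(7, 2, sign=-1))  # 40 -28
```

On FPGAs, a shift is essentially free wiring, which is why power-of-two weight constraints pair well with the hardware targets discussed throughout this list.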
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.