Lightweight compression of neural network feature tensors for
collaborative intelligence
- URL: http://arxiv.org/abs/2105.06002v1
- Date: Wed, 12 May 2021 23:41:35 GMT
- Title: Lightweight compression of neural network feature tensors for
collaborative intelligence
- Authors: Robert A. Cohen, Hyomin Choi, Ivan V. Bajić
- Abstract summary: In collaborative intelligence applications, part of a deep neural network (DNN) is deployed on a relatively low-complexity device such as a mobile phone or edge device.
This paper presents a novel lightweight compression technique designed specifically to code the activations of a split DNN layer.
- Score: 32.03465747357384
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In collaborative intelligence applications, part of a deep neural network
(DNN) is deployed on a relatively low-complexity device such as a mobile phone
or edge device, and the remainder of the DNN is processed where more computing
resources are available, such as in the cloud. This paper presents a novel
lightweight compression technique designed specifically to code the activations
of a split DNN layer, while having a low complexity suitable for edge devices
and not requiring any retraining. We also present a modified
entropy-constrained quantizer design algorithm optimized for clipped
activations. When applied to popular object-detection and classification DNNs,
we were able to compress the 32-bit floating point activations down to 0.6 to
0.8 bits, while keeping the loss in accuracy to less than 1%. When compared to
HEVC, we found that the lightweight codec consistently provided better
inference accuracy, by up to 1.3%. The performance and simplicity of this
lightweight compression technique makes it an attractive option for coding a
layer's activations in split neural networks for edge/cloud applications.
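The paper's actual codec uses an entropy-constrained quantizer designed jointly with the activation clipping; as a rough illustration of the clip-then-quantize idea only, here is a minimal sketch with a plain uniform quantizer (function and variable names are hypothetical, not from the paper):

```python
import numpy as np

def quantize_activations(x, clip_max, n_bits):
    """Clip activations to [0, clip_max] (post-ReLU assumption) and
    uniformly quantize to 2**n_bits levels; returns integer codes and
    the dequantized reconstruction."""
    levels = 2 ** n_bits - 1
    clipped = np.clip(x, 0.0, clip_max)
    step = clip_max / levels
    codes = np.round(clipped / step).astype(np.int32)
    recon = codes * step
    return codes, recon

# toy feature-tensor values from a split layer
acts = np.array([0.0, 0.4, 1.7, 6.3, 12.0], dtype=np.float32)
codes, recon = quantize_activations(acts, clip_max=6.0, n_bits=2)
```

The integer codes would then be entropy-coded before transmission to the cloud side; in the paper the codebook itself is optimized under an entropy constraint rather than being uniform.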
Related papers
- "Lossless" Compression of Deep Neural Networks: A High-dimensional
Neural Tangent Kernel Approach [49.744093838327615]
We provide a novel compression approach to wide and fully-connected deep neural nets.
Experiments on both synthetic and real-world data are conducted to support the advantages of the proposed compression scheme.
arXiv Detail & Related papers (2024-03-01T03:46:28Z) - Efficient Dataset Distillation Using Random Feature Approximation [109.07737733329019]
We propose a novel algorithm that uses a random feature approximation (RFA) of the Neural Network Gaussian Process (NNGP) kernel.
Our algorithm provides at least a 100-fold speedup over KIP and can run on a single GPU.
Our new method, termed an RFA Distillation (RFAD), performs competitively with KIP and other dataset condensation algorithms in accuracy over a range of large-scale datasets.
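RFAD's approximation of the NNGP kernel is specific to that paper, but the general random-feature idea it builds on can be illustrated with random Fourier features for an RBF kernel (Rahimi and Recht), which is not the kernel used in the paper:

```python
import numpy as np

def random_fourier_features(X, n_features, gamma, seed=0):
    """Map X (n x d) to features Z so that Z @ Z.T approximates the
    RBF kernel exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # frequencies sampled from the kernel's spectral density
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = np.array([[0.0, 0.0], [0.1, -0.1]])
Z = random_fourier_features(X, n_features=2000, gamma=0.5)
K_approx = Z @ Z.T   # approximates exp(-0.5 * ||x - y||^2)
```

Replacing an exact kernel with such a finite feature map is what turns kernel-based distillation from an O(n^2) computation into one that fits on a single GPU.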
arXiv Detail & Related papers (2022-10-21T15:56:13Z) - Instant Neural Graphics Primitives with a Multiresolution Hash Encoding [67.33850633281803]
We present a versatile new input encoding that permits the use of a smaller network without sacrificing quality.
A small neural network is augmented by a multiresolution hash table of trainable feature vectors whose values are optimized through a gradient descent.
We achieve a combined speed of several orders of magnitude, enabling training of high-quality neural graphics primitives in a matter of seconds.
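As a toy sketch of the multiresolution hash-encoding idea (nearest-vertex lookup only; the actual method interpolates between surrounding grid vertices and trains the table entries by gradient descent, and the hash primes here are illustrative):

```python
import numpy as np

def hash_encode(coord, n_levels=4, table_size=16, feat_dim=2,
                base_res=2, seed=0):
    """Look up a feature vector for a 2-D coordinate at several grid
    resolutions and concatenate the per-level features."""
    rng = np.random.default_rng(seed)
    # one small hash table of feature vectors per resolution level
    tables = [rng.normal(size=(table_size, feat_dim))
              for _ in range(n_levels)]
    feats = []
    for level, table in enumerate(tables):
        res = base_res * (2 ** level)
        ix, iy = int(coord[0] * res), int(coord[1] * res)
        # spatial hash of the grid vertex index
        idx = (ix * 73856093 ^ iy * 19349663) % table_size
        feats.append(table[idx])
    return np.concatenate(feats)

vec = hash_encode((0.3, 0.7))   # n_levels * feat_dim entries
```

The concatenated vector is the input to the small MLP; because most capacity lives in the tables rather than the network, lookups dominate and training becomes very fast.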
arXiv Detail & Related papers (2022-01-16T07:22:47Z) - An Adaptive Device-Edge Co-Inference Framework Based on Soft
Actor-Critic [72.35307086274912]
High-dimensional parameter models and large-scale mathematical calculations restrict execution efficiency, especially for Internet of Things (IoT) devices.
We propose a new Deep Reinforcement Learning (DRL) method, Soft Actor-Critic for discrete variables (SAC-d), which generates the exit point and compressing bits by soft policy iterations.
Based on the latency- and accuracy-aware reward design, such a computation can adapt well to complex environments such as dynamic wireless channels and arbitrary processing, and is capable of supporting the 5G URL
arXiv Detail & Related papers (2022-01-09T09:31:50Z) - Nonlinear Tensor Ring Network [39.89070144585793]
State-of-the-art deep neural networks (DNNs) have been widely applied to various real-world applications and have achieved significant performance on cognitive problems.
By converting redundant models into compact ones, compression techniques appear to be a practical solution for reducing storage and memory consumption.
In this paper, we develop a nonlinear tensor ring network (NTRN) in which both fully-connected and convolutional layers are compressed.
arXiv Detail & Related papers (2021-11-12T02:02:55Z) - Sub-bit Neural Networks: Learning to Compress and Accelerate Binary
Neural Networks [72.81092567651395]
Sub-bit Neural Networks (SNNs) are a new type of binary quantization design tailored to compress and accelerate BNNs.
SNNs are trained with a kernel-aware optimization framework, which exploits binary quantization in the fine-grained convolutional kernel space.
Experiments on visual recognition benchmarks and the hardware deployment on FPGA validate the great potentials of SNNs.
arXiv Detail & Related papers (2021-10-18T11:30:29Z) - Data-Driven Low-Rank Neural Network Compression [8.025818540338518]
We propose a Data-Driven Low-rank (DDLR) method to reduce the number of parameters of pretrained Deep Neural Networks (DNNs)
We show that it is possible to significantly reduce the number of parameters with only a small reduction in classification accuracy.
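A minimal low-rank factorization sketch (plain truncated SVD; the paper's DDLR method chooses ranks in a data-driven way, so this shows only the generic building block it relies on):

```python
import numpy as np

def low_rank_compress(W, rank):
    """Replace a weight matrix W (m x n) with factors A (m x r) and
    B (r x n) from a truncated SVD, so a dense layer x @ W becomes
    (x @ A) @ B with r*(m+n) parameters instead of m*n."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # absorb singular values into A
    B = Vt[:rank, :]
    return A, B

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 6))
A, B = low_rank_compress(W, rank=6)   # full rank: exact reconstruction
```

With `rank` below `min(m, n)` the reconstruction becomes approximate, which is where the small accuracy reduction reported in the abstract comes from.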
arXiv Detail & Related papers (2021-07-13T00:10:21Z) - Quantized Neural Networks via {-1, +1} Encoding Decomposition and
Acceleration [83.84684675841167]
We propose a novel encoding scheme using -1, +1 to decompose quantized neural networks (QNNs) into multi-branch binary networks.
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
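A sketch of the {-1, +1} decomposition idea, assuming unsigned n-bit integer weights (the exact scaling and offset conventions in the paper may differ):

```python
import numpy as np

def binary_decompose(W_int, n_bits):
    """Decompose an unsigned n_bits integer weight tensor into n_bits
    tensors with entries in {-1, +1}, so that
    W_int = sum_i 2**(i-1) * S_i + (2**n_bits - 1) / 2."""
    branches = []
    for i in range(n_bits):
        bit = (W_int >> i) & 1        # i-th binary digit in {0, 1}
        branches.append(2 * bit - 1)  # map {0, 1} -> {-1, +1}
    return branches

def reconstruct(branches, n_bits):
    total = sum((2 ** (i - 1)) * b for i, b in enumerate(branches))
    return total + (2 ** n_bits - 1) / 2

W = np.array([[0, 3], [5, 7]])   # 3-bit quantized weights
S = binary_decompose(W, n_bits=3)
```

Each branch is a pure {-1, +1} tensor, so its matrix products can be executed with the cheap bitwise kernels available for binary networks, which is the source of the acceleration.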
arXiv Detail & Related papers (2021-06-18T03:11:15Z) - Lightweight Compression of Intermediate Neural Network Features for
Collaborative Intelligence [32.03465747357384]
In collaborative intelligence applications, part of a deep neural network (DNN) is deployed on a lightweight device such as a mobile phone or edge device.
This paper presents a novel lightweight compression technique designed specifically to quantize and compress the features output by the intermediate layer of a split DNN.
arXiv Detail & Related papers (2021-05-15T00:10:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.