Neural Network-based Quantization for Network Automation
- URL: http://arxiv.org/abs/2103.04764v1
- Date: Thu, 4 Mar 2021 11:41:19 GMT
- Title: Neural Network-based Quantization for Network Automation
- Authors: Marton Kajo, Stephen S. Mwanje, Benedek Schultz, Georg Carle
- Abstract summary: We introduce the Bounding Sphere Quantization (BSQ) algorithm, a modification of the k-Means algorithm that was shown to create better quantizations for certain network management use-cases.
BSQ required a significantly longer time to train than k-Means, a challenge which can be overcome with a neural network-based implementation.
We present such an implementation of BSQ that utilizes state-of-the-art deep learning tools to achieve a competitive training speed.
- Score: 0.7034976835586089
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Learning methods have been adopted in mobile networks, especially for
network management automation where they provide means for advanced machine
cognition. Deep learning methods utilize cutting-edge hardware and software
tools, allowing complex cognitive algorithms to be developed. In a recent
paper, we introduced the Bounding Sphere Quantization (BSQ) algorithm, a
modification of the k-Means algorithm that was shown to create better
quantizations for certain network management use-cases, such as anomaly
detection. However, BSQ required a significantly longer time to train than
k-Means, a challenge which can be overcome with a neural network-based
implementation. In this paper, we present such an implementation of BSQ that
utilizes state-of-the-art deep learning tools to achieve a competitive training
speed.
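The abstract does not spell out BSQ's objective or the network-based implementation. As a rough illustration only, below is a minimal PyTorch sketch under two assumptions: that BSQ's per-cluster objective is the bounding-sphere radius (the maximum point-to-center distance, replacing k-Means' mean squared distance), and that a "neural network-based implementation" amounts to treating the codebook as trainable parameters updated by a gradient optimizer over batched tensor operations. The function name `bsq_train` and all hyperparameters are hypothetical.

```python
# Hypothetical sketch, not the paper's implementation: BSQ-style quantization
# with the codebook treated as trainable parameters, assuming the objective is
# the sum of per-cluster bounding-sphere radii (max point-to-center distance).
import torch

def bsq_train(points, k, steps=500, lr=0.05, seed=0):
    torch.manual_seed(seed)
    n, _ = points.shape
    # initialize centers on randomly chosen data points, as k-Means often does
    centers = points[torch.randperm(n)[:k]].clone().requires_grad_(True)
    opt = torch.optim.Adam([centers], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        dists = torch.cdist(points, centers)   # (n, k) pairwise distances
        assign = dists.argmin(dim=1)           # hard nearest-center assignment
        loss = torch.zeros((), dtype=points.dtype)
        for j in range(k):
            member = dists[assign == j, j]     # distances within cluster j
            if member.numel() > 0:
                loss = loss + member.max()     # bounding-sphere radius of cluster j
        loss.backward()
        opt.step()
    with torch.no_grad():                      # final assignments for the returned codebook
        assign = torch.cdist(points, centers).argmin(dim=1)
    return centers.detach(), assign

# usage: quantize 1000 two-dimensional samples into 4 code vectors
x = torch.randn(1000, 2)
codebook, labels = bsq_train(x, k=4)
```

Replacing `member.max()` with `member.pow(2).mean()` would recover a gradient-based k-Means, which is one way to see the two objectives side by side.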
Related papers
- Constraint Guided Model Quantization of Neural Networks [0.0]
Constraint Guided Model Quantization (CGMQ) is a quantization-aware training algorithm that reduces the bit-widths of the neural network's parameters subject to an upper bound on computational resources.
It is shown on MNIST that the performance of CGMQ is competitive with state-of-the-art quantization aware training algorithms.
arXiv Detail & Related papers (2024-09-30T09:41:16Z)
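CGMQ's specific resource constraint and bit-width reduction rule are not detailed in the summary above. For context only, here is a minimal sketch of the generic quantization-aware training mechanism it builds on: "fake" weight quantization in the forward pass with a straight-through estimator (STE) in the backward pass. The class names and the fixed 4-bit setting are hypothetical.

```python
# Hypothetical sketch of generic quantization-aware training, not CGMQ itself:
# weights are rounded to a low bit-width in the forward pass, while gradients
# pass through the rounding unchanged (straight-through estimator).
import torch
import torch.nn as nn

class FakeQuant(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w, bits):
        qmax = 2 ** (bits - 1) - 1
        scale = w.abs().max().clamp(min=1e-8) / qmax   # symmetric per-tensor scale
        return torch.round(w / scale).clamp(-qmax - 1, qmax) * scale

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out, None   # STE: gradient w.r.t. w passes straight through

class QuantLinear(nn.Linear):
    def __init__(self, in_features, out_features, bits=4):
        super().__init__(in_features, out_features)
        self.bits = bits

    def forward(self, x):
        w_q = FakeQuant.apply(self.weight, self.bits)
        return nn.functional.linear(x, w_q, self.bias)

# usage: a tiny MNIST-sized classifier whose weights train under 4-bit rounding
model = nn.Sequential(QuantLinear(784, 128), nn.ReLU(), QuantLinear(128, 10))
```

A constraint like CGMQ's could then, for example, lower `bits` per layer until a resource budget is met, but that schedule is not described above.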
- CTRQNets & LQNets: Continuous Time Recurrent and Liquid Quantum Neural Networks [76.53016529061821]
We develop the Liquid Quantum Neural Network (LQNet) and the Continuous Time Recurrent Quantum Neural Network (CTRQNet).
LQNet and CTRQNet achieve accuracy increases of up to 40% on CIFAR 10 binary classification.
arXiv Detail & Related papers (2024-08-28T00:56:03Z)
- Deep Learning Algorithms Used in Intrusion Detection Systems -- A Review [0.0]
This review paper studies recent advancements in the application of deep learning techniques, including Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), Deep Belief Networks (DBN), Deep Neural Networks (DNN), Long Short-Term Memory (LSTM), autoencoders (AE), Multi-Layer Perceptrons (MLP), Self-Normalizing Networks (SNN), and hybrid models, within network intrusion detection systems.
arXiv Detail & Related papers (2024-02-26T20:57:35Z)
- Quantization-aware Neural Architectural Search for Intrusion Detection [5.010685611319813]
We present a design methodology that automatically trains and evolves quantized neural network (NN) models that are a thousand times smaller than state-of-the-art NNs.
When deployed to an FPGA, the network utilizes between 2.3x and 8.5x fewer LUTs than prior work, with comparable performance.
arXiv Detail & Related papers (2023-11-07T18:35:29Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
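QA-IBP's quantized bound arithmetic is not described in the summary above. As background only, here is a minimal sketch of the standard floating-point interval bound propagation step it extends: sound output intervals for an affine layer followed by a ReLU. Shapes and names are illustrative.

```python
# Hypothetical sketch of plain interval bound propagation (IBP), the base
# technique QA-IBP builds on; the quantization-aware variant is not shown.
import torch

def ibp_linear(lo, hi, W, b):
    # split W by sign so each output bound uses the correct input bound:
    # positive weights propagate lo->lo / hi->hi, negative weights swap them
    W_pos, W_neg = W.clamp(min=0), W.clamp(max=0)
    out_lo = lo @ W_pos.T + hi @ W_neg.T + b
    out_hi = hi @ W_pos.T + lo @ W_neg.T + b
    return out_lo, out_hi

def ibp_relu(lo, hi):
    # ReLU is monotone, so it maps interval endpoints to interval endpoints
    return lo.clamp(min=0), hi.clamp(min=0)

# usage: bound one layer's outputs for inputs in an L-infinity ball of radius eps
x, eps = torch.randn(1, 8), 0.1
W, b = torch.randn(4, 8), torch.randn(4)
lo, hi = ibp_relu(*ibp_linear(x - eps, x + eps, W, b))
```

Training with IBP typically feeds the worst-case logits derived from (lo, hi) into the loss, so the certified bounds tighten as training proceeds.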
- Uncertainty Quantification and Resource-Demanding Computer Vision Applications of Deep Learning [5.130440339897478]
Bringing deep neural networks (DNNs) into safety-critical applications requires a thorough treatment of the model's uncertainties.
In this article, we survey methods that we developed to teach DNNs to be uncertain when they encounter new object classes.
We also present training methods to learn from only a few labels with the help of uncertainty quantification.
arXiv Detail & Related papers (2022-05-30T08:31:03Z)
- Semi-supervised Network Embedding with Differentiable Deep Quantisation [81.49184987430333]
We develop d-SNEQ, a differentiable quantisation method for network embedding.
d-SNEQ incorporates a rank loss to equip the learned quantisation codes with rich high-order information.
It is able to substantially compress the size of trained embeddings, thus reducing storage footprint and accelerating retrieval speed.
arXiv Detail & Related papers (2021-08-20T11:53:05Z)
- Credit Assignment in Neural Networks through Deep Feedback Control [59.14935871979047]
Deep Feedback Control (DFC) is a new learning method that uses a feedback controller to drive a deep neural network to match a desired output target; the control signal can then be used for credit assignment.
The resulting learning rule is fully local in space and time and approximates Gauss-Newton optimization for a wide range of connectivity patterns.
To further underline its biological plausibility, we relate DFC to a multi-compartment model of cortical pyramidal neurons with a local voltage-dependent synaptic plasticity rule, consistent with recent theories of dendritic processing.
arXiv Detail & Related papers (2021-06-15T05:30:17Z)
- Stochastic Markov Gradient Descent and Training Low-Bit Neural Networks [77.34726150561087]
We introduce Stochastic Markov Gradient Descent (SMGD), a discrete optimization method applicable to training quantized neural networks.
We provide theoretical guarantees of algorithm performance as well as encouraging numerical results.
arXiv Detail & Related papers (2020-08-25T15:48:15Z)
- Spiking Neural Networks Hardware Implementations and Challenges: a Survey [53.429871539789445]
Spiking Neural Networks are cognitive algorithms mimicking neuron and synapse operational principles.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
arXiv Detail & Related papers (2020-05-04T13:24:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.