A Lightweight, Efficient and Explainable-by-Design Convolutional Neural
Network for Internet Traffic Classification
- URL: http://arxiv.org/abs/2202.05535v4
- Date: Mon, 5 Jun 2023 20:52:32 GMT
- Authors: Kevin Fauvel, Fuxing Chen, Dario Rossi
- Abstract summary: This paper introduces a new Lightweight, Efficient and eXplainable-by-design convolutional neural network (LEXNet) for Internet traffic classification.
LEXNet relies on a new residual block (for lightweight design and efficiency) and a prototype layer (for explainability).
Based on a commercial-grade dataset, our evaluation shows that LEXNet succeeds in maintaining the same accuracy as the best-performing state-of-the-art neural network.
- Score: 9.365794791156972
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traffic classification, i.e. the identification of the type of applications
flowing in a network, is a strategic task for numerous activities (e.g.,
intrusion detection, routing). This task faces some critical challenges that
current deep learning approaches do not address. The design of current
approaches does not take into consideration the fact that networking hardware
(e.g., routers) often runs with limited computational resources. Further, they
do not meet the need for faithful explainability highlighted by regulatory
bodies. Finally, these traffic classifiers are evaluated on small datasets
which fail to reflect the diversity of applications in real-world settings.
Therefore, this paper introduces a new Lightweight, Efficient and
eXplainable-by-design convolutional neural network (LEXNet) for Internet
traffic classification, which relies on a new residual block (for lightweight
and efficiency purposes) and a prototype layer (for explainability). Based on a
commercial-grade dataset, our evaluation shows that LEXNet succeeds in
maintaining the same accuracy as the best-performing state-of-the-art neural network, while
providing the additional features previously mentioned. Moreover, we illustrate
the explainability feature of our approach, which stems from the communication
of detected application prototypes to the end-user, and we highlight the
faithfulness of LEXNet explanations through a comparison with post hoc methods.
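The abstract's explainability claim rests on the prototype layer: a flow is classified by its similarity to learned application prototypes, and the matched prototype itself is the explanation shown to the end-user. A minimal numpy sketch of this ProtoPNet-style mechanism follows; the function names, the log-similarity form, and the toy prototypes are illustrative assumptions, not LEXNet's actual implementation.

```python
import numpy as np

def prototype_similarities(z, prototypes):
    """Similarity of a latent feature vector z to each learned prototype,
    using the log-activation common in prototype networks (high when close)."""
    d2 = np.sum((prototypes - z) ** 2, axis=1)   # squared L2 distances
    return np.log((d2 + 1.0) / (d2 + 1e-4))

def classify(z, prototypes, proto_class):
    """Predict the class of the most similar prototype and return its index,
    so the explanation reads: 'this flow resembles prototype k of class c'."""
    sims = prototype_similarities(z, prototypes)
    k = int(np.argmax(sims))
    return proto_class[k], k

# Toy example: 3 prototypes for 2 application classes in a 4-d latent space.
prototypes = np.array([[1.0, 0.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0, 0.0],
                       [0.0, 0.0, 1.0, 1.0]])
proto_class = ["video", "video", "voip"]
cls, k = classify(np.array([0.1, 0.9, 0.0, 0.1]), prototypes, proto_class)
# cls == "video", k == 1: the prediction and its explanation coincide.
```

Because the prototype index is part of the forward pass, the explanation is faithful by construction, in contrast to post hoc attribution methods computed after the fact.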
Related papers
- Lens: A Foundation Model for Network Traffic [19.3652490585798]
Lens is a foundation model for network traffic that leverages the T5 architecture to learn the pre-trained representations from large-scale unlabeled data.
We design a novel loss that combines three distinct tasks: Masked Span Prediction (MSP), Packet Order Prediction (POP), and Homologous Traffic Prediction (HTP)
arXiv Detail & Related papers (2024-02-06T02:45:13Z)
- Non-Separable Multi-Dimensional Network Flows for Visual Computing [62.50191141358778]
We propose a novel formalism for non-separable multi-dimensional network flows.
Since the flow is defined on a per-dimension basis, the maximizing flow automatically chooses the best matching feature dimensions.
As a proof of concept, we apply our formalism to the multi-object tracking problem and demonstrate that our approach outperforms scalar formulations on the MOT16 benchmark in terms of robustness to noise.
arXiv Detail & Related papers (2023-05-15T13:21:44Z)
- High Efficiency Pedestrian Crossing Prediction [0.0]
State-of-the-art methods in predicting pedestrian crossing intention often rely on multiple streams of information as inputs.
We introduce a network with only frames of pedestrians as the input.
Experiments validate that our model consistently delivers outstanding performance.
arXiv Detail & Related papers (2022-04-04T21:37:57Z)
- ZippyPoint: Fast Interest Point Detection, Description, and Matching through Mixed Precision Discretization [71.91942002659795]
We investigate and adapt network quantization techniques to accelerate inference and enable its use on compute limited platforms.
ZippyPoint, our efficient quantized network with binary descriptors, improves the network runtime speed, the descriptor matching speed, and the 3D model size.
These improvements come at a minor performance degradation as evaluated on the tasks of homography estimation, visual localization, and map-free visual relocalization.
arXiv Detail & Related papers (2022-03-07T18:59:03Z)
- Data-Driven Traffic Assignment: A Novel Approach for Learning Traffic Flow Patterns Using a Graph Convolutional Neural Network [1.3706331473063877]
We present a novel data-driven approach to learning the traffic flow patterns of a transportation network.
We develop a neural network-based framework known as Graph Convolutional Neural Network (GCNN) to solve it.
When the training of the model is complete, it can instantly determine the traffic flows of a large-scale network.
arXiv Detail & Related papers (2022-02-21T19:45:15Z)
- SignalNet: A Low Resolution Sinusoid Decomposition and Estimation Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
arXiv Detail & Related papers (2021-06-10T04:21:20Z)
- Unwrapping The Black Box of Deep ReLU Networks: Interpretability, Diagnostics, and Simplification [9.166160560427919]
Deep neural networks (DNNs) have achieved great success in learning complex patterns with strong predictive power.
They are often thought of as "black box" models without a sufficient level of transparency and interpretability.
This paper aims to unwrap the black box of deep ReLU networks through local linear representation.
arXiv Detail & Related papers (2020-11-08T18:09:36Z)
- Prior knowledge distillation based on financial time series [0.8756822885568589]
We propose to use neural networks to represent indicators and train a large network constructed of smaller networks as feature layers.
In numerical experiments, we find that our algorithm is faster and more accurate than traditional methods on real financial datasets.
arXiv Detail & Related papers (2020-06-16T15:26:06Z)
- Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584]
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-06T15:51:00Z)
- Binary Neural Networks: A Survey [126.67799882857656]
The binary neural network serves as a promising technique for deploying deep models on resource-limited devices.
The binarization inevitably causes severe information loss, and even worse, its discontinuity brings difficulty to the optimization of the deep network.
We present a survey of these algorithms, mainly categorized into the native solutions directly conducting binarization, and the optimized ones using techniques like minimizing the quantization error, improving the network loss function, and reducing the gradient error.
arXiv Detail & Related papers (2020-03-31T16:47:20Z)
- Resolution Adaptive Networks for Efficient Inference [53.04907454606711]
We propose a novel Resolution Adaptive Network (RANet), which is inspired by the intuition that low-resolution representations are sufficient for classifying "easy" inputs.
In RANet, the input images are first routed to a lightweight sub-network that efficiently extracts low-resolution representations.
High-resolution paths in the network maintain the capability to recognize the "hard" samples.
arXiv Detail & Related papers (2020-03-16T16:54:36Z)
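The RANet summary above describes confidence-gated routing: a lightweight low-resolution sub-network answers first, and only low-confidence inputs continue to the high-resolution path. A minimal sketch of that control flow, with toy stand-in models (the function names, threshold value, and toy logits are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def adaptive_infer(x, cheap_model, full_model, threshold=0.9):
    """Run the lightweight sub-network first; fall back to the
    high-resolution path only when its confidence is below the threshold."""
    probs = softmax(cheap_model(x))
    if probs.max() >= threshold:
        return int(np.argmax(probs)), "low-res exit"
    probs = softmax(full_model(x))
    return int(np.argmax(probs)), "high-res path"

# Toy stand-ins: the cheap model is confident only on the "easy" input.
cheap = lambda x: np.array([4.0, 0.0]) if x == "easy" else np.array([0.1, 0.0])
full  = lambda x: np.array([0.0, 3.0])

adaptive_infer("easy", cheap, full)   # (0, "low-res exit")
adaptive_infer("hard", cheap, full)   # (1, "high-res path")
```

The average cost therefore scales with input difficulty: easy inputs never pay for the expensive path, which is the same resource-aware motivation LEXNet cites for constrained networking hardware.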
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.