Optimal In-Network Distribution of Learning Functions for a Secure-by-Design Programmable Data Plane of Next-Generation Networks
- URL: http://arxiv.org/abs/2411.18384v2
- Date: Tue, 29 Apr 2025 16:33:52 GMT
- Title: Optimal In-Network Distribution of Learning Functions for a Secure-by-Design Programmable Data Plane of Next-Generation Networks
- Authors: Mattia Giovanni Spina, Edoardo Scalzo, Floriano De Rango, Francesca Guerriero, Antonio Iera
- Abstract summary: This paper focuses on the deployment of in-network learning models with the aim of implementing fully distributed intrusion detection systems (IDS) or intrusion prevention systems (IPS). A model is proposed for the optimal distribution of the IDS/IPS workload among data plane devices, ensuring complete network security without excessively burdening the normal operations of the devices.
- Score: 2.563180814294141
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The rise of programmable data plane (PDP) and in-network computing (INC) paradigms paves the way for the development of network devices (switches, network interface cards, etc.) capable of performing advanced processing tasks. This allows running various types of algorithms, including machine learning, within the network itself to support user and network services. In particular, this paper delves into the deployment of in-network learning models with the aim of implementing fully distributed intrusion detection systems (IDS) or intrusion prevention systems (IPS). Specifically, a model is proposed for the optimal distribution of the IDS/IPS workload among data plane devices with the aim of ensuring complete network security without excessively burdening the normal operations of the devices. Furthermore, a meta-heuristic approach is proposed to reduce the long computation time required by the exact solution provided by the mathematical model and its performance is evaluated. The analysis conducted and the results obtained demonstrate the enormous potential of the proposed new approach for the creation of intelligent data planes that act effectively and autonomously as the first line of defense against cyber attacks, with minimal additional workload on the network devices involved.
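To make the idea concrete, the following is a minimal, hypothetical sketch (in Python, not taken from the paper) of the kind of assignment problem the proposed model addresses: every flow must be inspected by exactly one programmable device on its path, and a simple greedy heuristic, standing in here for the paper's meta-heuristic, spreads the added inspection load so that no device exceeds its processing budget. All device names, capacities, and per-flow costs are invented for illustration.

```python
# Illustrative sketch only: greedy distribution of IDS/IPS inspection tasks
# over programmable data-plane devices. Flows, paths, capacities, and costs
# below are hypothetical and do not come from the paper.

def distribute_inspection(flows, capacity):
    """Assign each flow to exactly one on-path switch, balancing added load.

    flows    : dict flow_id -> (inspection_cost, [candidate switches on path])
    capacity : dict switch_id -> processing budget available for inspection
    Returns  : dict flow_id -> chosen switch (full coverage), or raises if a
               flow cannot be placed within the given budgets.
    """
    load = {s: 0.0 for s in capacity}
    assignment = {}
    # Handle the most constrained flows (fewest candidate switches) first.
    for fid, (cost, path) in sorted(flows.items(), key=lambda kv: len(kv[1][1])):
        # Among on-path switches that can still absorb the cost,
        # pick the one with the lowest resulting relative utilization.
        feasible = [s for s in path if load[s] + cost <= capacity[s]]
        if not feasible:
            raise ValueError(f"flow {fid} cannot be inspected within budgets")
        best = min(feasible, key=lambda s: (load[s] + cost) / capacity[s])
        assignment[fid] = best
        load[best] += cost
    return assignment


if __name__ == "__main__":
    # Hypothetical toy topology: three switches, four flows.
    capacity = {"s1": 10.0, "s2": 8.0, "s3": 6.0}
    flows = {
        "f1": (4.0, ["s1", "s2"]),
        "f2": (3.0, ["s2", "s3"]),
        "f3": (5.0, ["s1", "s3"]),
        "f4": (2.0, ["s1", "s2", "s3"]),
    }
    print(distribute_inspection(flows, capacity))
```

An exact counterpart would cast the same decision as an integer program with binary placement variables, per-device capacity constraints, and a load-balancing objective, which is roughly the role of the mathematical model whose long computation time motivates the meta-heuristic.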
Related papers
- Distributing Intelligence in 6G Programmable Data Planes for Effective In-Network Deployment of an Active Intrusion Detection System [2.563180814294141]
This research aims to propose a disruptive paradigm in which devices in a typical data plane of a future programmable network have classification and anomaly detection capabilities and cooperate in a fully distributed fashion to act as an ML-enabled Active Intrusion Detection System "embedded" into the network.
The reported proof-of-concept experiments demonstrate that the proposed paradigm works effectively and with a good level of precision while consuming less overall CPU and RAM on the devices involved.
arXiv Detail & Related papers (2024-10-31T15:14:15Z)
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses this by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z)
- Enhancing Security Testing Software for Systems that Cannot be Subjected to the Risks of Penetration Testing Through the Incorporation of Multi-threading and Other Capabilities [0.0]
SONARR is a system vulnerability analysis tool for complex, mission-critical systems.
This paper describes and analyzes the performance of a multi-threaded SONARR algorithm and other enhancements.
arXiv Detail & Related papers (2024-09-17T05:09:10Z)
- RACH Traffic Prediction in Massive Machine Type Communications [5.416701003120508]
This paper presents a machine learning-based framework tailored for forecasting bursty traffic in ALOHA networks.
We develop a new low-complexity online prediction algorithm that updates the states of the LSTM network by leveraging frequently collected data from the mMTC network.
We evaluate the performance of the proposed framework in a network with a single base station and thousands of devices organized into groups with distinct traffic-generating characteristics.
arXiv Detail & Related papers (2024-05-08T17:28:07Z)
- Auto-Train-Once: Controller Network Guided Automatic Network Pruning from Scratch [72.26822499434446]
Auto-Train-Once (ATO) is an innovative network pruning algorithm designed to automatically reduce the computational and storage costs of DNNs.
We provide a comprehensive convergence analysis as well as extensive experiments, and the results show that our approach achieves state-of-the-art performance across various model architectures.
arXiv Detail & Related papers (2024-03-21T02:33:37Z)
- Towards Scalable Wireless Federated Learning: Challenges and Solutions [40.68297639420033]
Federated learning (FL) emerges as an effective distributed machine learning framework.
We discuss the challenges and solutions of achieving scalable wireless FL from the perspectives of both network design and resource orchestration.
arXiv Detail & Related papers (2023-10-08T08:55:03Z)
- Detection of DDoS Attacks in Software Defined Networking Using Machine Learning Models [0.6193838300896449]
This paper investigates the effectiveness of machine learning algorithms to detect distributed denial-of-service (DDoS) attacks in software-defined networking (SDN) environments.
The results indicate that ML-based detection is a more accurate and effective method for identifying DDoS attacks in SDN.
arXiv Detail & Related papers (2023-03-11T22:56:36Z)
- Robust, Deep, and Reinforcement Learning for Management of Communication and Power Networks [6.09170287691728]
The present thesis first develops principled methods to make generic machine learning models robust against distributional uncertainties and adversarial data.
We then build on this robust framework to design robust semi-supervised learning over graph methods.
The second part of this thesis aspires to fully unleash the potential of next-generation wired and wireless networks.
arXiv Detail & Related papers (2022-02-08T05:49:06Z)
- Computational Intelligence and Deep Learning for Next-Generation Edge-Enabled Industrial IoT [51.68933585002123]
We investigate how to deploy computational intelligence and deep learning (DL) in edge-enabled industrial IoT networks.
In this paper, we propose a novel multi-exit-based federated edge learning (ME-FEEL) framework.
In particular, the proposed ME-FEEL achieves an accuracy gain of up to 32.7% in industrial IoT networks with severely limited resources.
arXiv Detail & Related papers (2021-10-28T08:14:57Z)
- Towards AIOps in Edge Computing Environments [60.27785717687999]
This paper describes the system design of an AIOps platform which is applicable in heterogeneous, distributed environments.
It is feasible to collect metrics with a high frequency and simultaneously run specific anomaly detection algorithms directly on edge devices.
arXiv Detail & Related papers (2021-02-12T09:33:00Z)
- Data-Driven Random Access Optimization in Multi-Cell IoT Networks with NOMA [78.60275748518589]
Non-orthogonal multiple access (NOMA) is a key technology to enable massive machine type communications (mMTC) in 5G networks and beyond.
In this paper, NOMA is applied to improve the random access efficiency in high-density spatially-distributed multi-cell wireless IoT networks.
A novel formulation of random channel access management is proposed, in which the transmission probability of each IoT device is tuned to maximize the geometric mean of users' expected capacity.
arXiv Detail & Related papers (2021-01-02T15:21:08Z)
- Reconfigurable Intelligent Surface Assisted Mobile Edge Computing with Heterogeneous Learning Tasks [53.1636151439562]
Mobile edge computing (MEC) provides a natural platform for AI applications.
We present an infrastructure to perform machine learning tasks at an MEC with the assistance of a reconfigurable intelligent surface (RIS).
Specifically, we minimize the learning error of all participating users by jointly optimizing transmit power of mobile users, beamforming vectors of the base station, and the phase-shift matrix of the RIS.
arXiv Detail & Related papers (2020-12-25T07:08:50Z)
- Applying Graph-based Deep Learning To Realistic Network Scenarios [5.453745629140304]
This paper presents a new graph-based deep learning model able to accurately estimate the per-path mean delay in networks.
The proposed model can generalize successfully over topologies, routing configurations, queue scheduling policies and traffic matrices unseen during the training phase.
arXiv Detail & Related papers (2020-10-13T20:58:59Z)
- DSDNet: Deep Structured self-Driving Network [92.9456652486422]
We propose the Deep Structured self-Driving Network (DSDNet), which performs object detection, motion prediction, and motion planning with a single neural network.
We develop a deep structured energy based model which considers the interactions between actors and produces socially consistent multimodal future predictions.
arXiv Detail & Related papers (2020-08-13T17:54:06Z)
- Einsum Networks: Fast and Scalable Learning of Tractable Probabilistic Circuits [99.59941892183454]
We propose Einsum Networks (EiNets), a novel implementation design for PCs.
At their core, EiNets combine a large number of arithmetic operations in a single monolithic einsum-operation.
We show that the implementation of Expectation-Maximization (EM) can be simplified for PCs, by leveraging automatic differentiation.
arXiv Detail & Related papers (2020-04-13T23:09:15Z)
- A Privacy-Preserving Distributed Architecture for Deep-Learning-as-a-Service [68.84245063902908]
This paper introduces a novel distributed architecture for deep-learning-as-a-service.
It preserves users' sensitive data while providing cloud-based machine and deep learning services.
arXiv Detail & Related papers (2020-03-30T15:12:03Z)
- Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address these open problems, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.