Learning-driven Zero Trust in Distributed Computing Continuum Systems
- URL: http://arxiv.org/abs/2311.17447v1
- Date: Wed, 29 Nov 2023 08:41:06 GMT
- Title: Learning-driven Zero Trust in Distributed Computing Continuum Systems
- Authors: Ilir Murturi, Praveen Kumar Donta, Victor Casamayor Pujol, Andrea
Morichetta, and Schahram Dustdar
- Abstract summary: Converging Zero Trust (ZT) with learning techniques can solve various operational and security challenges in Distributed Computing Continuum Systems.
We present a novel learning-driven ZT conceptual architecture designed for DCCS.
We show how the learning process detects and blocks untrusted requests, enhances resource access control, and reduces network overheads.
- Score: 5.5676731834895765
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Converging Zero Trust (ZT) with learning techniques can solve various
operational and security challenges in Distributed Computing Continuum Systems
(DCCS). Implementing a centralized ZT architecture is considered unsuitable for
the computing continuum (e.g., computing entities with limited connectivity and
visibility). At the same time, implementing decentralized ZT in the
computing continuum requires understanding infrastructure limitations and novel
approaches to enhance resource access management decisions. To overcome such
challenges, we present a novel learning-driven ZT conceptual architecture
designed for DCCS. We aim to enhance ZT architecture service quality by
incorporating lightweight learning strategies such as Representation Learning
(ReL) and distributing ZT components across the computing continuum. The ReL
helps to improve the decision-making process by predicting threats or untrusted
requests. Through an illustrative example, we show how the learning process
detects and blocks untrusted requests, enhances resource access control, and reduces
network and computation overheads. Lastly, we discuss the conceptual
architecture and its processes, and outline a research agenda.
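The paper gives no implementation details, but the core idea of using a learned representation to screen requests can be illustrated with a minimal, hypothetical sketch: a fixed linear projection stands in for the ReL encoder, and a request is flagged as untrusted when its representation lies far from the centroid of previously trusted requests. The feature encoding, the distance-threshold rule, and all names here are assumptions, not the authors' method.

```python
import math

def embed(request_features, weights):
    # Stand-in for a learned representation encoder: a linear
    # projection of raw request features into a latent space.
    return [sum(w * x for w, x in zip(row, request_features))
            for row in weights]

def is_untrusted(request_features, weights, trusted_centroid, threshold):
    # Flag a request whose representation lies far from the centroid
    # of representations of previously trusted requests.
    z = embed(request_features, weights)
    return math.dist(z, trusted_centroid) > threshold
```

In a decentralized deployment, each continuum node could hold its own centroid and threshold, so access decisions stay local and no raw request data crosses the network.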
Related papers
- Reinforcement Learning for Adaptive Resource Scheduling in Complex System Environments [8.315191578007857]
This study presents a novel Q-learning-based algorithm for computer system performance optimization and adaptive workload scheduling.
Q-learning, a reinforcement learning algorithm, continuously learns from system state changes, enabling dynamic scheduling and resource optimization.
This research provides a foundation for the integration of AI-driven adaptive scheduling in future large-scale systems, offering a scalable, intelligent solution to enhance system performance, reduce operating costs, and support sustainable energy consumption.
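The tabular Q-learning update underlying such a scheduler can be sketched as follows; the state and action labels are hypothetical stand-ins for whatever system metrics and scheduling decisions the paper actually uses.

```python
from collections import defaultdict

def q_learning_update(Q, state, action, reward, next_state, actions,
                      alpha=0.1, gamma=0.9):
    # Tabular Q-learning update:
    #   Q(s, a) += alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next
                                   - Q[(state, action)])

Q = defaultdict(float)  # hypothetical scheduler state/action value table
q_learning_update(Q, state="high_load", action="migrate",
                  reward=1.0, next_state="balanced",
                  actions=["migrate", "stay"])
```

Because the update depends only on observed transitions, the scheduler can keep adapting as workload patterns drift, with no model of the system required.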
arXiv Detail & Related papers (2024-11-08T05:58:09Z) - A Survey on Integrated Sensing, Communication, and Computation [57.6762830152638]
The forthcoming generation of wireless technology, 6G, aims to usher in an era of ubiquitous intelligent services.
The performance of the sensing, communication, and computation modules is interdependent, creating competition for time, energy, and bandwidth resources.
Existing techniques like integrated communication and computation (ICC), integrated sensing and computation (ISC), and integrated sensing and communication (ISAC) have made partial strides in addressing this challenge.
arXiv Detail & Related papers (2024-08-15T11:01:35Z) - Structural Knowledge-Driven Meta-Learning for Task Offloading in
Vehicular Networks with Integrated Communications, Sensing and Computing [21.50450449083369]
Task offloading is a potential solution to satisfy the strict requirements of latency-sensitive vehicular applications, given the limited onboard computing resources.
We propose a creative structural knowledge-driven meta-learning (SKDML) method, involving both the model-based AM algorithm and neural networks.
arXiv Detail & Related papers (2024-02-25T03:31:59Z) - Machine Learning Insides OptVerse AI Solver: Design Principles and
Applications [74.67495900436728]
We present a comprehensive study on the integration of machine learning (ML) techniques into Huawei Cloud's OptVerse AI solver.
We showcase our methods for generating complex SAT and MILP instances using generative models that mirror the multifaceted structures of real-world problems.
We detail the incorporation of state-of-the-art parameter tuning algorithms which markedly elevate solver performance.
arXiv Detail & Related papers (2024-01-11T15:02:15Z) - Causal Semantic Communication for Digital Twins: A Generalizable
Imitation Learning Approach [74.25870052841226]
A digital twin (DT) leverages a virtual representation of the physical world, along with communication (e.g., 6G), computing, and artificial intelligence (AI) technologies to enable many connected intelligence services.
Wireless systems can exploit the paradigm of semantic communication (SC) for facilitating informed decision-making under strict communication constraints.
A novel framework called causal semantic communication (CSC) is proposed for DT-based wireless systems.
arXiv Detail & Related papers (2023-04-25T00:15:00Z) - MARLIN: Soft Actor-Critic based Reinforcement Learning for Congestion
Control in Real Networks [63.24965775030673]
We propose a novel Reinforcement Learning (RL) approach to design generic Congestion Control (CC) algorithms.
Our solution, MARLIN, uses the Soft Actor-Critic algorithm to maximize both entropy and return.
We trained MARLIN on a real network with varying background traffic patterns to overcome the sim-to-real mismatch.
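The objective a Soft Actor-Critic agent like MARLIN maximizes can be illustrated with a minimal sketch of the entropy-regularized return; the discrete action distribution and the temperature value used here are simplifying assumptions, not details from the paper.

```python
import math

def entropy(probs):
    # Shannon entropy of a discrete action distribution.
    return -sum(p * math.log(p) for p in probs if p > 0)

def soft_return(rewards, policy_probs, alpha=0.2, gamma=0.99):
    # Entropy-regularized return that Soft Actor-Critic maximizes:
    #   sum_t gamma^t * (r_t + alpha * H(pi(. | s_t)))
    return sum((gamma ** t) * (r + alpha * entropy(p))
               for t, (r, p) in enumerate(zip(rewards, policy_probs)))
```

The entropy bonus rewards keeping the policy stochastic, which encourages exploration of congestion-control actions rather than premature convergence to one sending rate.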
arXiv Detail & Related papers (2023-02-02T18:27:20Z) - Digital Twin Virtualization with Machine Learning for IoT and Beyond 5G
Networks: Research Directions for Security and Optimal Control [3.1798318618973362]
Digital twin (DT) technologies have emerged as a solution for real-time data-driven modeling of cyber-physical systems.
We establish a conceptual layered architecture for a DT framework with decentralized implementation on cloud computing.
We discuss the significance of DT in lowering the risk of developing and deploying innovative technologies on existing systems.
arXiv Detail & Related papers (2022-04-05T03:04:02Z) - Toward Multiple Federated Learning Services Resource Sharing in Mobile
Edge Networks [88.15736037284408]
We study a new model of multiple federated learning services at the multi-access edge computing server.
We propose a joint resource optimization and hyper-learning rate control problem, namely MS-FEDL.
Our simulation results demonstrate the convergence performance of our proposed algorithms.
arXiv Detail & Related papers (2020-11-25T01:29:41Z) - Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G
Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address these open problems, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.