Zero Trust Implementation in the Emerging Technologies Era: Survey
- URL: http://arxiv.org/abs/2401.09575v1
- Date: Wed, 17 Jan 2024 19:56:01 GMT
- Title: Zero Trust Implementation in the Emerging Technologies Era: Survey
- Authors: Abraham Itzhak Weinberg, Kelly Cohen
- Abstract summary: This paper presents a comprehensive analysis of the shift from the traditional perimeter model of security to the Zero Trust (ZT) framework.
It outlines the differences between ZT policies and legacy security policies, along with the significant events that have impacted the evolution of ZT.
The paper explores the potential impacts of emerging technologies, such as Artificial Intelligence (AI) and quantum computing, on the policy and implementation of ZT.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a comprehensive analysis of the shift from the traditional perimeter model of security to the Zero Trust (ZT) framework, emphasizing the key points in the transition and the practical application of ZT. It outlines the differences between ZT policies and legacy security policies, along with the significant events that have impacted the evolution of ZT. Additionally, the paper explores the potential impacts of emerging technologies, such as Artificial Intelligence (AI) and quantum computing, on the policy and implementation of ZT. The study thoroughly examines how AI can enhance ZT by utilizing Machine Learning (ML) algorithms to analyze patterns, detect anomalies, and predict threats, thereby improving real-time decision-making processes. Furthermore, the paper demonstrates how a chaos theory-based approach, in conjunction with other technologies like eXtended Detection and Response (XDR), can effectively mitigate cyberattacks. As quantum computing presents new challenges to ZT and cybersecurity as a whole, the paper delves into the intricacies of ZT migration, automation, and orchestration, addressing the complexities associated with these aspects. Finally, the paper provides a best practice approach for the seamless implementation of ZT in organizations, laying out the proposed guidelines to facilitate organizations in their transition towards a more secure ZT model. The study aims to support organizations in successfully implementing ZT and enhancing their cybersecurity measures.
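As a concrete illustration of the AI-assisted decision-making the abstract describes (a minimal sketch, not an implementation from the paper), the snippet below trains an anomaly detector on historical access telemetry and uses its score as one signal in a Zero Trust policy decision. The feature names, thresholds, and the `decide_access` helper are illustrative assumptions only.

```python
# Minimal sketch: an ML anomaly score feeding a Zero Trust policy decision.
# Requires scikit-learn; all feature names and thresholds are illustrative
# placeholders, not values taken from the surveyed paper.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Toy historical telemetry: [login_hour, bytes_transferred_MB, failed_logins]
baseline = np.column_stack([
    rng.normal(10, 2, 500),      # typical working hours
    rng.normal(50, 15, 500),     # typical transfer volume
    rng.poisson(0.2, 500),       # rare failed logins
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

def decide_access(features, mfa_verified: bool) -> str:
    """Never trust by default: combine the anomaly score with an identity signal."""
    score = detector.decision_function([features])[0]  # lower = more anomalous
    if score < 0:
        return "deny"              # strong anomaly: block outright
    if score < 0.05 and not mfa_verified:
        return "step_up_auth"      # borderline: require extra verification
    return "allow"

print(decide_access([11, 48, 0], mfa_verified=True))    # typical request
print(decide_access([3, 900, 6], mfa_verified=False))   # off-hours bulk transfer
```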
Related papers
- The Evolution of Zero Trust Architecture (ZTA) from Concept to Implementation [0.0]
Zero Trust Architecture (ZTA) represents a paradigm shift in cybersecurity.
This article studies the core concepts of ZTA, its origins, several use cases, and future trends.
ZTA is expected to strengthen cloud, education, and work environments (including remote work) while controlling risks such as lateral movement and insider threats.
arXiv Detail & Related papers (2025-04-16T11:26:54Z)
- Enterprise-Grade Security for the Model Context Protocol (MCP): Frameworks and Mitigation Strategies [0.0]
The Model Context Protocol (MCP) provides a standardized framework for artificial intelligence (AI) systems to interact with external data sources and tools in real-time.
This paper builds upon foundational research into MCP architecture and preliminary security assessments to deliver enterprise-grade mitigation frameworks.
arXiv Detail & Related papers (2025-04-11T15:25:58Z)
- Computational Safety for Generative AI: A Signal Processing Perspective [65.268245109828]
Computational safety is a mathematical framework that enables the quantitative assessment, formulation, and study of safety challenges in GenAI.
We show how sensitivity analysis and loss landscape analysis can be used to detect malicious prompts with jailbreak attempts.
We discuss key open research challenges, opportunities, and the essential role of signal processing in computational AI safety.
arXiv Detail & Related papers (2025-02-18T02:26:50Z)
- MITRE ATT&CK Applications in Cybersecurity and The Way Forward [18.339713576170396]
The MITRE ATT&CK framework is a widely adopted tool for enhancing cybersecurity, supporting threat intelligence, incident response, attack modeling, and vulnerability prioritization.
This paper synthesizes research on its application across these domains by analyzing 417 peer-reviewed publications.
We identify commonly used adversarial tactics, techniques, and procedures (TTPs) and examine the integration of natural language processing (NLP) and machine learning (ML) with ATT&CK to improve threat detection and response.
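As a concrete (and heavily simplified) illustration of coupling NLP with ATT&CK for detection, the sketch below maps free-text incident descriptions to the closest ATT&CK technique by TF-IDF cosine similarity. The tiny technique catalog (with paraphrased descriptions) and the incident text are placeholder examples, not data or methods from the surveyed publications.

```python
# Toy sketch: map incident text to the nearest MITRE ATT&CK technique by
# TF-IDF cosine similarity. The catalog is a tiny illustrative subset with
# paraphrased descriptions, not the official ATT&CK corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

techniques = {
    "T1110": "Brute force: repeated password guessing or credential stuffing against accounts.",
    "T1566": "Phishing: malicious emails or links used to obtain access or credentials.",
    "T1021": "Remote services: lateral movement using RDP, SSH or SMB with valid accounts.",
}

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(techniques.values())

def map_to_attack(incident_text: str) -> str:
    """Return the ATT&CK technique ID whose description is most similar."""
    sims = cosine_similarity(vectorizer.transform([incident_text]), matrix)[0]
    return list(techniques)[int(sims.argmax())]

print(map_to_attack("Repeated failed password guessing attempts against the VPN"))  # -> T1110 (Brute Force)
```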
arXiv Detail & Related papers (2025-02-15T15:01:04Z)
- Zero Trust Architecture: A Systematic Literature Review [0.0]
ZTA operates on the principle of "never trust, always verify".
This study applies the PRISMA framework to analyze 10 years of research on ZTA.
arXiv Detail & Related papers (2025-02-07T13:11:15Z)
- Enhancing Enterprise Security with Zero Trust Architecture [0.0]
Zero Trust Architecture (ZTA) represents a transformative approach to modern cybersecurity.
ZTA shifts the security paradigm by assuming that no user, device, or system can be trusted by default.
This paper explores the key components of ZTA, such as identity and access management (IAM), micro-segmentation, continuous monitoring, and behavioral analytics.
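To make the interplay of these components concrete, here is a minimal, purely illustrative policy check that combines an identity/role signal, device posture, and a micro-segmentation allow-list. The request fields, roles, and segment names are invented for the example and are not drawn from the paper.

```python
# Illustrative Zero Trust policy check: identity + device posture + micro-segmentation.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_role: str
    device_compliant: bool
    source_segment: str
    target_segment: str

# Micro-segmentation allow-list: only these (source, target) pairs may communicate.
SEGMENT_POLICY = {("web", "app"), ("app", "db")}

def authorize(req: AccessRequest) -> bool:
    """Every request is re-evaluated; nothing is trusted by default."""
    if not req.device_compliant:                      # device posture check
        return False
    if req.user_role not in {"admin", "service"}:     # identity/role check (IAM)
        return False
    # Micro-segmentation: only explicitly allowed segment pairs pass.
    return (req.source_segment, req.target_segment) in SEGMENT_POLICY

print(authorize(AccessRequest("service", True, "web", "app")))  # permitted path
print(authorize(AccessRequest("service", True, "web", "db")))   # lateral movement blocked
```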
arXiv Detail & Related papers (2024-10-23T21:53:16Z)
- An Adaptive End-to-End IoT Security Framework Using Explainable AI and LLMs [1.9662978733004601]
This paper presents an innovative framework for real-time IoT attack detection and response that leverages Machine Learning (ML), Explainable AI (XAI), and Large Language Models (LLMs).
Our end-to-end framework not only facilitates a seamless transition from model development to deployment but also represents a real-world application capability that is often lacking in existing research.
arXiv Detail & Related papers (2024-09-20T03:09:23Z)
- Integrative Approaches in Cybersecurity and AI [0.0]
We identify key trends, challenges, and future directions that hold the potential to revolutionize the way organizations protect, analyze, and leverage their data.
Our findings highlight the necessity of cross-disciplinary strategies that incorporate AI-driven automation, real-time threat detection, and advanced data analytics to build more resilient and adaptive security architectures.
arXiv Detail & Related papers (2024-08-12T01:37:06Z)
- EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models used as the "brain" of EAI agents for high-level task planning have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- Quantifying AI Vulnerabilities: A Synthesis of Complexity, Dynamical Systems, and Game Theory [0.0]
We propose a novel approach that introduces three metrics: System Complexity Index (SCI), Lyapunov Exponent for AI Stability (LEAIS), and Nash Equilibrium Robustness (NER).
SCI quantifies the inherent complexity of an AI system, LEAIS captures its stability and sensitivity to perturbations, and NER evaluates its strategic robustness against adversarial manipulation.
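Since the summary only names these metrics, the sketch below shows one way a perturbation-sensitivity exponent in the spirit of LEAIS could be estimated for a generic model `f`. The finite-difference procedure and all parameter choices are assumptions for illustration, not the authors' definition.

```python
# Illustrative (not the paper's) Lyapunov-style sensitivity estimate:
# how strongly small input perturbations are amplified by a model f.
import numpy as np

def local_sensitivity_exponent(f, x, eps=1e-4, n_trials=32, rng=None):
    """Average log amplification of random perturbations of size eps around x.
    Values > 0 mean perturbations are amplified on average (less stable)."""
    rng = np.random.default_rng() if rng is None else rng
    y = f(x)
    logs = []
    for _ in range(n_trials):
        d = rng.normal(size=x.shape)
        d *= eps / np.linalg.norm(d)                     # perturbation of norm eps
        amplification = np.linalg.norm(f(x + d) - y) / eps
        logs.append(np.log(max(amplification, 1e-12)))
    return float(np.mean(logs))

# Toy usage: a linear map whose singular values bound the exponent.
W = np.array([[1.5, 0.0], [0.0, 0.5]])
print(local_sensitivity_exponent(lambda v: W @ v, np.ones(2)))
```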
arXiv Detail & Related papers (2024-04-07T07:05:59Z)
- Noise Contrastive Estimation-based Matching Framework for Low-Resource Security Attack Pattern Recognition [49.536368818512116]
Tactics, Techniques and Procedures (TTPs) represent sophisticated attack patterns in the cybersecurity domain.
We formulate the problem in a different learning paradigm, where the assignment of a text to a TTP label is decided by the direct semantic similarity between the two.
We propose a neural matching architecture with an effective sampling-based learn-to-compare mechanism.
arXiv Detail & Related papers (2024-01-18T19:02:00Z)
- Exchange-of-Thought: Enhancing Large Language Model Capabilities through Cross-Model Communication [76.04373033082948]
Large Language Models (LLMs) have recently made significant strides in complex reasoning tasks through the Chain-of-Thought technique.
We propose Exchange-of-Thought (EoT), a novel framework that enables cross-model communication during problem-solving.
arXiv Detail & Related papers (2023-12-04T11:53:56Z)
- Learning-driven Zero Trust in Distributed Computing Continuum Systems [5.5676731834895765]
Converging Zero Trust (ZT) with learning techniques can solve various operational and security challenges in Distributed Computing Continuum Systems (DCCS).
We present a novel learning-driven ZT conceptual architecture designed for DCCS.
We show how the learning process detects and blocks the requests, enhances resource access control, and reduces network overheads.
arXiv Detail & Related papers (2023-11-29T08:41:06Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
- Multi-Objective Policy Gradients with Topological Constraints [108.10241442630289]
We present a new algorithm for a policy gradient in TMDPs by a simple extension of the proximal policy optimization (PPO) algorithm.
We demonstrate this on a real-world multiple-objective navigation problem with an arbitrary ordering of objectives both in simulation and on a real robot.
arXiv Detail & Related papers (2022-09-15T07:22:58Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide to end users a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- Practical Machine Learning Safety: A Survey and Primer [81.73857913779534]
Open-world deployment of Machine Learning algorithms in safety-critical applications such as autonomous vehicles needs to address a variety of ML vulnerabilities.
New models and training techniques are needed to reduce generalization error, achieve domain adaptation, and detect outlier examples and adversarial attacks.
Our organization maps state-of-the-art ML techniques to safety strategies in order to enhance the dependability of the ML algorithm from different aspects.
arXiv Detail & Related papers (2021-06-09T05:56:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.