Zero Trust for Cyber Resilience
- URL: http://arxiv.org/abs/2312.02882v1
- Date: Tue, 5 Dec 2023 16:53:20 GMT
- Title: Zero Trust for Cyber Resilience
- Authors: Yunfei Ge, Quanyan Zhu
- Abstract summary: This chapter draws attention to cyber resilience within the zero-trust model.
We trace the evolution from traditional perimeter-based security to zero trust and discuss their differences.
- Score: 13.343937277604892
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Increased connectivity and potential insider threats make traditional network defenses vulnerable. Instead of assuming that everything behind the security perimeter is safe, the zero-trust security model verifies every incoming request before granting access. This chapter draws attention to cyber resilience within the zero-trust model. We trace the evolution from traditional perimeter-based security to zero trust and discuss their differences. Two key elements of the zero-trust engine are the trust evaluation (TE) and the policy engine (PE). We introduce the design of the two components and discuss how their interplay contributes to cyber resilience. Dynamic game theory and learning are applied as quantitative approaches to achieve automated zero-trust cyber resilience. Several case studies and implementations are introduced to illustrate the benefits of such a security model.
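The TE/PE interplay described in the abstract can be made concrete with a short sketch. This is a minimal illustration, assuming a hand-tuned scoring rule and a single cutoff; all attribute names, weights, and thresholds are hypothetical, not the chapter's actual design:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    device_compliant: bool  # e.g., managed, fully patched device
    geo_anomaly: float      # 0.0 (expected location) .. 1.0 (highly anomalous)
    failed_logins: int      # recent failed authentication attempts

def trust_evaluation(req: AccessRequest) -> float:
    """Trust evaluation (TE): map observed attributes to a score in [0, 1]."""
    score = 1.0
    if not req.device_compliant:
        score -= 0.4
    score -= 0.3 * req.geo_anomaly
    score -= 0.1 * min(req.failed_logins, 3)
    return max(score, 0.0)

def policy_engine(trust: float, threshold: float = 0.6) -> str:
    """Policy engine (PE): decide per request; nothing is trusted by default."""
    if trust >= threshold:
        return "grant"
    if trust >= threshold - 0.2:
        return "step-up-auth"  # e.g., demand MFA before granting access
    return "deny"

req = AccessRequest("alice", device_compliant=True, geo_anomaly=0.2, failed_logins=1)
print(policy_engine(trust_evaluation(req)))  # -> "grant"
```

Every incoming request passes through this TE-then-PE pipeline, which is what distinguishes the model from perimeter checks performed once at the network boundary.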
Related papers
- Authentication and identity management based on zero trust security model in micro-cloud environment [0.0]
The Zero Trust framework can better track and block external attackers while limiting security breaches resulting from insider attacks in the cloud paradigm.
This paper focuses on authentication mechanisms, trust-score calculation, and policy generation to establish the required access control to resources.
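As a rough illustration of a trust score feeding policy generation, here is a sketch assuming a weighted-sum score and hand-picked permission bands; the factors, weights, and policies are invented for illustration and are not the paper's scheme:

```python
# Hypothetical signal weights for a micro-cloud tenant.
FACTORS = {"auth_strength": 0.5, "device_health": 0.3, "behavior_history": 0.2}

def trust_score(signals: dict) -> float:
    """Weighted sum of normalized signals, each in [0, 1]."""
    return sum(w * signals.get(name, 0.0) for name, w in FACTORS.items())

def generate_policy(score: float) -> dict:
    """Map a trust score to resource-level permissions."""
    if score >= 0.8:
        return {"storage": "read-write", "compute": "allowed"}
    if score >= 0.5:
        return {"storage": "read-only", "compute": "denied"}
    return {"storage": "denied", "compute": "denied"}

signals = {"auth_strength": 0.9, "device_health": 0.7, "behavior_history": 0.8}
print(generate_policy(trust_score(signals)))  # score 0.82 -> read-write storage
```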
arXiv Detail & Related papers (2024-10-29T09:06:13Z)
- GAN-GRID: A Novel Generative Attack on Smart Grid Stability Prediction [53.2306792009435]
We propose GAN-GRID, a novel adversarial attack targeting the stability prediction system of a smart grid, tailored to real-world constraints.
Our findings reveal that an adversary armed solely with the stability model's output, devoid of data or model knowledge, can craft data classified as stable with an Attack Success Rate (ASR) of 0.99.
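To make the black-box threat model concrete: the adversary below queries only the model's output label, nothing else. Note that the paper trains a generative (GAN) attack; the toy stand-in here uses naive random search instead, purely to illustrate the output-only access assumption:

```python
import random

def craft_stable_sample(model_predict, dim: int = 8, iters: int = 1000):
    """Perturb a candidate until the target model labels it 'stable'.
    model_predict is the only capability the adversary has."""
    x = [random.uniform(-1.0, 1.0) for _ in range(dim)]
    for _ in range(iters):
        if model_predict(x) == "stable":
            return x
        j = random.randrange(dim)
        x[j] += random.gauss(0.0, 0.1)  # small random perturbation
    return None  # search budget exhausted

# Toy classifier standing in for the grid stability predictor.
toy_model = lambda x: "stable" if sum(x) > 0 else "unstable"
print(craft_stable_sample(toy_model))
```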
arXiv Detail & Related papers (2024-05-20T14:43:46Z)
- Securing Federated Learning with Control-Flow Attestation: A Novel Framework for Enhanced Integrity and Resilience against Adversarial Attacks [2.28438857884398]
Federated Learning (FL) as a distributed machine learning paradigm has introduced new cybersecurity challenges.
This study proposes an innovative security framework inspired by Control-Flow Attestation (CFA) mechanisms, traditionally used in cybersecurity.
We authenticate and verify the integrity of model updates across the network, effectively mitigating risks associated with model poisoning and adversarial interference.
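A minimal sketch of the verify-before-aggregate gate, with a stdlib HMAC tag standing in for the richer control-flow attestation evidence the framework relies on; keys, payloads, and function names are illustrative:

```python
import hashlib
import hmac

def attest_update(update_bytes: bytes, client_key: bytes) -> str:
    """Client side: produce an attestation tag over the serialized update."""
    return hmac.new(client_key, update_bytes, hashlib.sha256).hexdigest()

def verify_update(update_bytes: bytes, tag: str, client_key: bytes) -> bool:
    """Server side: accept the update for aggregation only if the tag verifies."""
    expected = hmac.new(client_key, update_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

key = b"per-client-secret"               # provisioned out of band
update = b"serialized model weights"
tag = attest_update(update, key)
assert verify_update(update, tag, key)            # genuine update accepted
assert not verify_update(b"poisoned", tag, key)   # tampered update rejected
```

Updates failing verification are dropped before they can influence the aggregated model, which is the poisoning-mitigation step the summary describes.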
arXiv Detail & Related papers (2024-03-15T04:03:34Z)
- A Zero Trust Framework for Realization and Defense Against Generative AI Attacks in Power Grid [62.91192307098067]
This paper proposes a novel zero trust framework for a power grid supply chain (PGSC).
It facilitates early detection of potential GenAI-driven attack vectors, assessment of tail risk-based stability measures, and mitigation of such threats.
Experimental results show that the proposed zero trust framework achieves an accuracy of 95.7% on attack vector generation, a risk measure of 9.61% for a 95% stable PGSC, and a 99% confidence in defense against GenAI-driven attacks.
arXiv Detail & Related papers (2024-03-11T02:47:21Z)
- Zero Trust: Applications, Challenges, and Opportunities [0.0]
This survey comprehensively explores the theoretical foundations, practical implementations, applications, challenges, and future trends of Zero Trust.
We highlight the relevance of Zero Trust in securing cloud environments, facilitating remote work, and protecting the Internet of Things (IoT) ecosystem.
Integrating Zero Trust with emerging technologies like AI and machine learning augments its efficacy, promising a dynamic and responsive security landscape.
arXiv Detail & Related papers (2023-09-07T09:23:13Z)
- When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems [53.2306792009435]
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures.
We are the first to propose attacks against these systems by developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security.
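As a toy illustration of the Random Forest variant, the sketch below fits a classifier on synthetic stand-ins for per-driver telemetry windows; it assumes scikit-learn, and the features and data are fabricated for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Synthetic stand-ins for per-driver CAN-bus features (speed, throttle, ...).
X_a = rng.normal(0.0, 1.0, size=(200, 5))  # driver A's driving pattern
X_b = rng.normal(0.8, 1.2, size=(200, 5))  # driver B's driving pattern
X = np.vstack([X_a, X_b])
y = np.array([0] * 200 + [1] * 200)        # 0 = driver A, 1 = driver B

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
window = rng.normal(0.0, 1.0, size=(1, 5))  # new driving window to authenticate
print("authenticated as driver", clf.predict(window)[0])
```

The evasion attacks in the paper (SMARTCAN, GANCAN) target exactly this kind of classifier by feeding it crafted telemetry.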
arXiv Detail & Related papers (2023-06-09T14:33:26Z)
- Scenario-Agnostic Zero-Trust Defense with Explainable Threshold Policy: A Meta-Learning Approach [20.11993437283895]
We propose a scenario-agnostic zero-trust defense based on Partially Observable Markov Decision Processes (POMDP) and first-order Meta-Learning.
We use case studies and real-world attacks to corroborate the results.
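A minimal sketch of an explainable threshold policy over a POMDP belief state, assuming a two-state world (benign vs. adversarial) and hand-set observation likelihoods; in the paper, meta-learning adapts such parameters across defense scenarios:

```python
def update_belief(b_adv: float, obs_anomalous: bool,
                  p_obs_adv: float = 0.7, p_obs_benign: float = 0.1) -> float:
    """Bayes update of the belief that the agent is adversarial."""
    if obs_anomalous:
        num, alt = p_obs_adv * b_adv, p_obs_benign * (1 - b_adv)
    else:
        num, alt = (1 - p_obs_adv) * b_adv, (1 - p_obs_benign) * (1 - b_adv)
    return num / (num + alt)

def threshold_policy(b_adv: float, tau: float = 0.5) -> str:
    """Explainable by construction: a single cutoff on the belief state."""
    return "deny" if b_adv >= tau else "allow"

belief = 0.05  # prior: most agents are benign
for obs in [True, True, False, True]:  # stream of anomaly indicators
    belief = update_belief(belief, obs)
    print(f"belief={belief:.2f} -> {threshold_policy(belief)}")
```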
arXiv Detail & Related papers (2023-03-06T18:35:34Z)
- Resilient Risk based Adaptive Authentication and Authorization (RAD-AA) Framework [3.9858496473361402]
We discuss the design considerations for a secure and resilient authentication and authorization framework capable of self-adapting based on the risk scores and trust profiles.
We call this framework Resilient Risk based Adaptive Authentication and Authorization (RAD-AA).
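As a sketch of the kind of risk-adaptive control selection such a framework implies, the snippet below tightens authentication and authorization as the risk score rises; the thresholds and controls are invented for illustration:

```python
def required_controls(risk: float) -> dict:
    """Adapt session controls to the current risk score in [0, 1]."""
    if risk < 0.3:
        return {"mfa": False, "token_ttl_min": 60, "scope": "full"}
    if risk < 0.7:
        return {"mfa": True, "token_ttl_min": 15, "scope": "full"}
    return {"mfa": True, "token_ttl_min": 5, "scope": "read-only"}

# Risk rises after an anomalous event; the next request faces tighter controls.
for risk in (0.1, 0.5, 0.9):
    print(risk, required_controls(risk))
```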
arXiv Detail & Related papers (2022-08-04T11:44:29Z)
- Adversarial Attacks on ML Defense Models Competition [82.37504118766452]
The TSAIL group at Tsinghua University and the Alibaba Security group organized this competition.
The purpose of this competition is to motivate novel attack algorithms to evaluate adversarial robustness.
arXiv Detail & Related papers (2021-10-15T12:12:41Z)
- Certifiers Make Neural Networks Vulnerable to Availability Attacks [70.69104148250614]
We show for the first time that fallback strategies can be deliberately triggered by an adversary.
In addition to naturally occurring abstains for some inputs and perturbations, the adversary can use training-time attacks to deliberately trigger the fallback.
We design two novel availability attacks, which show the practical relevance of these threats.
arXiv Detail & Related papers (2021-08-25T15:49:10Z)
- Privacy and Robustness in Federated Learning: Attacks and Defenses [74.62641494122988]
We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL and a taxonomy covering 1) threat models, 2) poisoning attacks and robustness defenses, and 3) inference attacks and privacy defenses, we provide an accessible review of this important topic.
arXiv Detail & Related papers (2020-12-07T12:11:45Z)