Scenario-Agnostic Zero-Trust Defense with Explainable Threshold Policy: A Meta-Learning Approach
- URL: http://arxiv.org/abs/2303.03349v1
- Date: Mon, 6 Mar 2023 18:35:34 GMT
- Title: Scenario-Agnostic Zero-Trust Defense with Explainable Threshold Policy: A Meta-Learning Approach
- Authors: Yunfei Ge, Tao Li, and Quanyan Zhu
- Abstract summary: We propose a scenario-agnostic zero-trust defense based on Partially Observable Markov Decision Processes (POMDPs) and first-order Meta-Learning.
We use case studies and real-world attacks to corroborate the results.
- Score: 20.11993437283895
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The increasing connectivity and intricate remote-access environments have made traditional perimeter-based network defense vulnerable. Zero trust has become a promising approach that bases defense policies on agent-centric trust evaluation. However, the limited observations of an agent's trace introduce information asymmetry into the decision-making. To facilitate human understanding of the policy and adoption of the technology, one needs a zero-trust defense that is explainable to humans and adaptable to different attack scenarios. To this end, we propose a scenario-agnostic zero-trust defense based on Partially Observable Markov Decision Processes (POMDPs) and first-order meta-learning that uses only a handful of sample scenarios. The framework yields an explainable and generalizable trust-threshold defense policy. To address the distribution shift between empirical security datasets and reality, we extend the model to a robust zero-trust defense that minimizes the worst-case loss. We use case studies and real-world attacks to corroborate the results.
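The first-order meta-learning component can be pictured with a Reptile-style update: adapt a trust threshold to each sampled attack scenario for a few gradient steps, then move the meta-threshold toward the adapted one. A minimal sketch, assuming a scalar threshold and a hypothetical quadratic per-scenario loss standing in for the POMDP value loss; none of the names below come from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-scenario loss: each sampled attack scenario penalizes
# deviation from its own best threshold (a stand-in for the POMDP value loss).
scenario_optima = [0.30, 0.45, 0.60, 0.50]

def loss_grad(theta, opt):
    """Gradient of the illustrative quadratic loss (theta - opt)^2."""
    return 2.0 * (theta - opt)

def inner_adapt(theta, opt, steps=10, lr=0.1):
    """Scenario-specific adaptation: a few SGD steps on one scenario."""
    for _ in range(steps):
        theta -= lr * loss_grad(theta, opt)
    return theta

# First-order (Reptile-style) meta-update: move the meta-threshold toward
# each scenario-adapted threshold, using only first-order information.
theta_meta, meta_lr = 0.0, 0.5
for _ in range(200):
    opt = scenario_optima[rng.integers(len(scenario_optima))]
    theta_meta += meta_lr * (inner_adapt(theta_meta, opt) - theta_meta)

print(f"meta-learned trust threshold: {theta_meta:.3f}")
```

The robust variant described in the abstract would replace the uniform sampling of scenarios with a worst-case weighting, so the meta-threshold minimizes the maximum scenario loss instead of the average.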
Related papers
- Criticality and Safety Margins for Reinforcement Learning [53.10194953873209]
We seek to define a criticality framework with both a quantifiable ground truth and a clear significance to users.
We introduce true criticality as the expected drop in reward when an agent deviates from its policy for n consecutive random actions.
We also introduce the concept of proxy criticality, a low-overhead metric that has a statistically monotonic relationship to true criticality.
arXiv Detail & Related papers (2024-09-26T21:00:45Z)
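The definition of true criticality above admits a direct Monte Carlo estimate: compare returns when the agent follows its policy throughout against returns when it takes n consecutive random actions before resuming the policy. A minimal sketch against the Gymnasium API, deviating at the episode start for simplicity (the paper evaluates criticality at arbitrary states); the environment and toy policy are placeholders:

```python
import gymnasium as gym
import numpy as np

def episode_return(env, policy, n_random=0, seed=0, max_steps=500):
    """Return of one episode; the first n_random steps use random actions."""
    obs, _ = env.reset(seed=seed)
    total = 0.0
    for t in range(max_steps):
        action = env.action_space.sample() if t < n_random else policy(obs)
        obs, reward, terminated, truncated, _ = env.step(action)
        total += reward
        if terminated or truncated:
            break
    return total

def true_criticality(env, policy, n, episodes=50):
    """Expected reward drop from n consecutive random actions."""
    drops = [
        episode_return(env, policy, 0, seed=i) - episode_return(env, policy, n, seed=i)
        for i in range(episodes)
    ]
    return float(np.mean(drops))

env = gym.make("CartPole-v1")
heuristic = lambda obs: int(obs[2] > 0)  # toy policy: push toward the pole's tilt
print(true_criticality(env, heuristic, n=5))
```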
- Sequential Manipulation Against Rank Aggregation: Theory and Algorithm [119.57122943187086]
We mount an online attack on the vulnerable data collection process.
From the game-theoretic perspective, the confrontation scenario is formulated as a distributionally robust game.
The proposed method manipulates the results of rank aggregation methods in a sequential manner.
arXiv Detail & Related papers (2024-07-02T03:31:21Z)
- Optimal Zero-Shot Detector for Multi-Armed Attacks [30.906457338347447]
This paper explores a scenario in which a malicious actor employs a multi-armed attack strategy to manipulate data samples.
Our central objective is to protect the data by detecting any alterations to the input.
We derive an information-theoretic defense approach that optimally aggregates the decisions made by an ensemble of detectors.
arXiv Detail & Related papers (2024-02-24T13:08:39Z)
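For independent binary detectors with known operating points, the classical optimal fusion rule is the Chair-Varshney log-likelihood-ratio test; a minimal sketch in that spirit, with made-up detector rates, not the paper's exact information-theoretic aggregation:

```python
import numpy as np

# Hypothetical operating points: (true positive rate, false positive rate).
DETECTORS = [(0.90, 0.10), (0.75, 0.20), (0.60, 0.05)]

def fuse(decisions, prior_attack=0.5):
    """Chair-Varshney fusion: sum per-detector log-likelihood ratios and
    flag an alteration when the posterior favors the attack hypothesis."""
    llr = np.log(prior_attack / (1.0 - prior_attack))
    for d, (tpr, fpr) in zip(decisions, DETECTORS):
        llr += np.log(tpr / fpr) if d else np.log((1.0 - tpr) / (1.0 - fpr))
    return llr > 0.0

print(fuse([1, 0, 1]))  # two of three detectors fire -> fused decision
```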
- Zero Trust for Cyber Resilience [13.343937277604892]
This chapter draws attention to cyber resilience within the zero-trust model.
We trace the evolution from traditional perimeter-based security to zero trust and discuss their differences.
arXiv Detail & Related papers (2023-12-05T16:53:20Z)
- Towards Attack-tolerant Federated Learning via Critical Parameter Analysis [85.41873993551332]
Federated learning systems are susceptible to poisoning attacks when malicious clients send false updates to the central server.
This paper proposes a new defense strategy, FedCPA (Federated learning with Critical Parameter Analysis).
Our attack-tolerant aggregation method is based on the observation that benign local models have similar sets of top-k and bottom-k critical parameters, whereas poisoned local models do not.
arXiv Detail & Related papers (2023-08-18T05:37:55Z)
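The observation behind FedCPA suggests a simple pairwise test: compare the index sets of each update's top-k and bottom-k critical parameters. A minimal sketch of that idea using Jaccard overlap (not the authors' full aggregation rule; criticality here is simply the raw update value):

```python
import numpy as np

def critical_sets(update, k):
    """Index sets of the k largest and k smallest entries of a flat update."""
    order = np.argsort(update)
    return set(order[-k:]), set(order[:k])

def critical_similarity(u, v, k=10):
    """Mean Jaccard overlap of top-k and bottom-k critical parameter indices."""
    top_u, bot_u = critical_sets(u, k)
    top_v, bot_v = critical_sets(v, k)
    jac = lambda a, b: len(a & b) / len(a | b)
    return 0.5 * (jac(top_u, top_v) + jac(bot_u, bot_v))

rng = np.random.default_rng(1)
benign = rng.normal(size=100)
similar = benign + 0.1 * rng.normal(size=100)  # another benign-looking update
poisoned = -benign                             # sign-flipped malicious update
print(critical_similarity(benign, similar))    # high overlap -> likely benign
print(critical_similarity(benign, poisoned))   # low overlap -> likely poisoned
```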
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
Our proposed defense, MESAS, is the first to remain robust against strong adaptive adversaries; it is effective in real-world data scenarios with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
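One way to picture a multi-metric defense of this kind: compute several statistics per client update and keep only updates that are inliers on every metric. The metrics and the robust z-score cutoff below are illustrative assumptions, not MESAS's actual test battery:

```python
import numpy as np

def metrics(update, reference):
    """A few illustrative statistics of one client update."""
    cos = update @ reference / (np.linalg.norm(update) * np.linalg.norm(reference))
    return np.array([np.linalg.norm(update), cos, np.var(update)])

def filter_updates(updates, z_cut=2.5):
    """Keep updates whose every metric lies within z_cut robust z-scores
    (median/MAD) of the cohort; outliers on any metric are discarded."""
    ref = np.median(updates, axis=0)
    m = np.array([metrics(u, ref) for u in updates])
    med = np.median(m, axis=0)
    mad = np.median(np.abs(m - med), axis=0) + 1e-12
    keep = (np.abs(m - med) / mad < z_cut).all(axis=1)
    return [u for u, ok in zip(updates, keep) if ok]

rng = np.random.default_rng(2)
benign = [rng.normal(0.0, 1.0, 50) for _ in range(9)]
poisoned = [10.0 * rng.normal(0.0, 1.0, 50)]  # scaled-up malicious update
print(len(filter_updates(benign + poisoned)))  # poisoned update is dropped
```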
- Safe Explicable Planning [3.3869539907606603]
We propose Safe Explicable Planning (SEP) to support the specification of a safety bound.
Our approach generalizes the consideration of multiple objectives stemming from multiple models.
We provide formal proofs that validate the desired theoretical properties of these methods.
arXiv Detail & Related papers (2023-04-04T21:49:02Z)
- Why Should Adversarial Perturbations be Imperceptible? Rethink the Research Paradigm in Adversarial NLP [83.66405397421907]
We rethink the research paradigm of textual adversarial samples in security scenarios.
We first collect, process, and release a security dataset collection, Advbench.
Next, we propose a simple rule-based method that readily fulfills realistic adversarial goals, simulating real-world attack methods.
arXiv Detail & Related papers (2022-10-19T15:53:36Z)
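Rule-based attacks of the kind the summary describes can be as simple as deterministic character edits that defeat keyword matching while staying readable to humans. A toy sketch under that assumption; these rules and the blocklist are invented, not the ones released with Advbench:

```python
# Toy rule-based text perturbations: keyword-evading edits that humans
# still read easily. The rules and blocklist are illustrative only.
RULES = [
    lambda w: w.replace("o", "0"),  # digit look-alike substitution
    lambda w: w.replace("a", "@"),
    lambda w: " ".join(w),          # space insertion: "spam" -> "s p a m"
]

BLOCKLIST = {"spam", "scam", "malware"}

def attack(text: str) -> str:
    """Apply the first rule that takes a blocklisted word off the blocklist."""
    out = []
    for word in text.split():
        if word.lower() in BLOCKLIST:
            for rule in RULES:
                candidate = rule(word)
                if candidate.lower() not in BLOCKLIST:
                    word = candidate
                    break
        out.append(word)
    return " ".join(out)

print(attack("this message contains malware and spam"))
# -> "this message contains m@lw@re and sp@m"
```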
- A Unified Evaluation of Textual Backdoor Learning: Frameworks and Benchmarks [72.7373468905418]
We develop an open-source toolkit, OpenBackdoor, to foster the implementation and evaluation of textual backdoor learning.
We also propose CUBE, a simple yet strong clustering-based defense baseline.
arXiv Detail & Related papers (2022-06-17T02:29:23Z)
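A clustering-based defense baseline in the spirit of CUBE can be approximated by clustering per-class training embeddings and discarding a clearly small minority cluster, on the assumption that poisoned samples cluster tightly and apart. A minimal sketch with scikit-learn; CUBE itself clusters learned representations with density-based clustering, so the KMeans below is a stand-in:

```python
import numpy as np
from sklearn.cluster import KMeans

def cube_like_filter(embeddings, labels, n_clusters=2, minority_frac=0.35):
    """Within each class, cluster embeddings and drop the minority cluster
    when it is clearly small (suspected poisoned samples)."""
    keep = np.ones(len(embeddings), dtype=bool)
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        assign = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=0).fit_predict(embeddings[idx])
        counts = np.bincount(assign, minlength=n_clusters)
        minority = int(np.argmin(counts))
        if counts[minority] < minority_frac * len(idx):
            keep[idx[assign == minority]] = False
    return keep

rng = np.random.default_rng(3)
clean = rng.normal(0.0, 1.0, (90, 8))
poison = rng.normal(5.0, 0.2, (10, 8))  # backdoored samples cluster tightly
emb = np.vstack([clean, poison])
lab = np.zeros(100, dtype=int)          # all carry the attack's target label
print(cube_like_filter(emb, lab).sum()) # ~90 clean samples kept
```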
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.