Resilient Risk based Adaptive Authentication and Authorization (RAD-AA) Framework
- URL: http://arxiv.org/abs/2208.02592v3
- Date: Tue, 29 Nov 2022 15:52:22 GMT
- Title: Resilient Risk based Adaptive Authentication and Authorization (RAD-AA) Framework
- Authors: Jaimandeep Singh and Chintan Patel and Naveen Kumar Chaudhary
- Abstract summary: We discuss the design considerations for a secure and resilient authentication and authorization framework capable of self-adapting based on the risk scores and trust profiles.
We call this framework Resilient Risk based Adaptive Authentication and Authorization (RAD-AA).
- Score: 3.9858496473361402
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent cyber attacks, credential theft has emerged as one of the
primary vectors for gaining entry into a system. Once attackers have a foothold
in the system, they use various techniques, including token manipulation, to
elevate privileges and access protected resources. This makes authentication
and token-based authorization critical components of a secure and resilient
cyber system. In this paper, we discuss the design considerations for such a
secure and resilient authentication and authorization framework, capable of
self-adapting based on risk scores and trust profiles. We compare this design
with existing standards such as OAuth 2.0, OpenID Connect, and SAML 2.0. We
then study popular threat models such as STRIDE and PASTA and summarize the
resilience of the proposed architecture against common and relevant threat
vectors. We call this framework Resilient Risk based Adaptive Authentication
and Authorization (RAD-AA). The proposed framework substantially increases the
cost for an adversary to launch and sustain a cyber attack and provides
much-needed strength to critical infrastructure. We also discuss a machine
learning (ML) approach for the adaptive engine to accurately classify
transactions and arrive at risk scores.
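
To make the adaptive behavior concrete, below is a minimal Python sketch of how a risk engine might map transaction features to a risk score and an adaptive response. The names (RiskEngine, TransactionContext), features, thresholds, and weights are illustrative assumptions rather than the paper's actual design; the hand-coded linear scoring stands in for the ML classifier the abstract mentions.

```python
# Minimal sketch of a risk-based adaptive decision in the spirit of RAD-AA.
# All names, features, weights, and thresholds are illustrative assumptions,
# not the paper's API or learned model.

from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    ALLOW = "allow"      # low risk: issue a full-scope token
    STEP_UP = "step_up"  # medium risk: require re-authentication (e.g., MFA)
    DENY = "deny"        # high risk: refuse the request


@dataclass
class TransactionContext:
    """Features describing one authentication/authorization request."""
    known_device: bool       # device previously seen for this identity
    geo_velocity_kmh: float  # implied travel speed since the last login
    failed_attempts: int     # recent failed logins for this identity
    trust_profile: float     # prior trust score of the client, in [0, 1]


class RiskEngine:
    """Scores a transaction and adapts the authorization response.

    The linear scoring below is a stand-in for the ML classifier the
    paper proposes; in practice the weights would be learned.
    """

    def score(self, ctx: TransactionContext) -> float:
        risk = 0.0
        if not ctx.known_device:
            risk += 0.3
        if ctx.geo_velocity_kmh > 900:  # faster than an airliner: suspicious
            risk += 0.4
        risk += min(ctx.failed_attempts, 5) * 0.05
        risk -= 0.2 * ctx.trust_profile  # established trust lowers risk
        return max(0.0, min(1.0, risk))

    def decide(self, ctx: TransactionContext) -> tuple[Decision, int]:
        """Return a decision and an adaptive token lifetime in seconds."""
        risk = self.score(ctx)
        if risk < 0.3:
            return Decision.ALLOW, 3600  # normal one-hour token
        if risk < 0.7:
            return Decision.STEP_UP, 300  # short-lived token after step-up
        return Decision.DENY, 0           # no token at all


if __name__ == "__main__":
    engine = RiskEngine()
    ctx = TransactionContext(known_device=False, geo_velocity_kmh=1200,
                             failed_attempts=2, trust_profile=0.5)
    decision, ttl = engine.decide(ctx)
    print(f"risk={engine.score(ctx):.2f} decision={decision.value} ttl={ttl}s")
```

In this sketch, rising risk shortens the token lifetime and escalates from normal issuance to step-up authentication to outright denial, which is one way a framework of this kind can raise the cost of sustaining an attack with stolen credentials.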
Related papers
- "Glue pizza and eat rocks" -- Exploiting Vulnerabilities in Retrieval-Augmented Generative Models [74.05368440735468]
Retrieval-Augmented Generative (RAG) models enhance Large Language Models (LLMs) by grounding their outputs in external knowledge bases.
In this paper, we demonstrate a security threat where adversaries can exploit the openness of these knowledge bases.
arXiv Detail & Related papers (2024-06-26T05:36:23Z)
- Rethinking the Vulnerabilities of Face Recognition Systems: From a Practical Perspective [53.24281798458074]
Face Recognition Systems (FRS) have been increasingly integrated into critical applications, including surveillance and user authentication.
Recent studies have revealed vulnerabilities in FRS to adversarial attacks (e.g., adversarial patch attacks) and backdoor attacks (e.g., training data poisoning).
arXiv Detail & Related papers (2024-05-21T13:34:23Z)
- Securing Federated Learning with Control-Flow Attestation: A Novel Framework for Enhanced Integrity and Resilience against Adversarial Attacks [2.28438857884398]
Federated Learning (FL), as a distributed machine learning paradigm, has introduced new cybersecurity challenges.
This study proposes an innovative security framework inspired by Control-Flow Attestation (CFA) mechanisms, traditionally used in cybersecurity.
We authenticate and verify the integrity of model updates across the network, effectively mitigating risks associated with model poisoning and adversarial interference.
arXiv Detail & Related papers (2024-03-15T04:03:34Z)
- A Zero Trust Framework for Realization and Defense Against Generative AI Attacks in Power Grid [62.91192307098067]
This paper proposes a novel zero trust framework for a power grid supply chain (PGSC).
It facilitates early detection of potential GenAI-driven attack vectors, assessment of tail risk-based stability measures, and mitigation of such threats.
Experimental results show that the proposed zero trust framework achieves an accuracy of 95.7% on attack vector generation, a risk measure of 9.61% for a 95% stable PGSC, and a 99% confidence in defense against GenAI-driven attacks.
arXiv Detail & Related papers (2024-03-11T02:47:21Z)
- RASP for LSASS: Preventing Mimikatz-Related Attacks [2.5782420501870296]
The Windows authentication infrastructure relies on the Local Security Authority system, with its integral component being lsass.exe.
This framework is not impervious, presenting vulnerabilities that attract threat actors with malicious intent.
By exploiting documented vulnerabilities sourced from the CVE database or leveraging sophisticated tools such as mimikatz, adversaries can successfully compromise user password information.
arXiv Detail & Related papers (2023-12-30T20:37:37Z)
- Blockchain-based Zero Trust on the Edge [5.323279718522213]
This paper proposes a novel approach based on Zero Trust Architecture (ZTA) extended with blockchain to further enhance security.
The blockchain component serves as an immutable database for storing users' requests and is used to verify trustworthiness by analyzing and identifying potentially malicious user activities (a rough illustration of such an append-only log appears after this list).
We discuss the framework, processes of the approach, and the experiments carried out on a testbed to validate its feasibility and applicability in the smart city context.
arXiv Detail & Related papers (2023-11-28T12:43:21Z)
- Architecture of Smart Certificates for Web3 Applications Against Cyberthreats in Financial Industry [2.795656498870966]
This study addresses security challenges associated with the current internet, specifically focusing on emerging technologies such as blockchain and decentralized storage.
It also investigates the role of Web3 applications in shaping the future of the internet.
arXiv Detail & Related papers (2023-11-03T14:51:24Z)
- When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems [53.2306792009435]
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures.
We are the first to propose attacks against these systems by developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security.
arXiv Detail & Related papers (2023-06-09T14:33:26Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
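
As a rough illustration of the immutable request log described in the blockchain-based zero trust entry above, here is a minimal hash-chained, append-only log in Python. This is a sketch under stated assumptions, not the cited paper's implementation; the class name, record fields, and genesis value are invented for illustration.

```python
# Illustrative sketch only: a hash-chained, append-only request log in the
# spirit of the blockchain-based zero trust entry above. Names and fields
# are assumptions, not the cited paper's design.

import hashlib
import json
from dataclasses import dataclass, field


@dataclass
class RequestLog:
    """Append-only log where each entry commits to the previous one."""
    entries: list = field(default_factory=list)

    def append(self, user: str, action: str) -> str:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {"user": user, "action": action, "prev": prev_hash}
        # Hash is computed over the record body before the "hash" key is added.
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(record)
        return record["hash"]

    def verify(self) -> bool:
        """Recompute the chain; any tampering breaks a hash link."""
        prev_hash = "0" * 64
        for record in self.entries:
            if record["prev"] != prev_hash:
                return False
            body = {k: record[k] for k in ("user", "action", "prev")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if record["hash"] != expected:
                return False
            prev_hash = record["hash"]
        return True


if __name__ == "__main__":
    log = RequestLog()
    log.append("alice", "read:meter-data")
    log.append("bob", "write:config")
    print("chain valid:", log.verify())        # True
    log.entries[0]["action"] = "write:config"  # tamper with history
    print("chain valid:", log.verify())        # False
```

Because each entry commits to the hash of its predecessor, rewriting any stored request invalidates every later hash link, which is the property such a log relies on to detect tampering with user activity records.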