Model-Driven Security Analysis of Self-Sovereign Identity Systems
- URL: http://arxiv.org/abs/2406.00620v1
- Date: Sun, 2 Jun 2024 05:44:32 GMT
- Title: Model-Driven Security Analysis of Self-Sovereign Identity Systems
- Authors: Yepeng Ding, Hiroyuki Sato
- Abstract summary: We propose a model-driven security analysis framework for analyzing architectural patterns of SSI systems.
Our framework mechanizes a modeling language to formalize patterns and threats with security properties in temporal logic.
We present typical vulnerable patterns verified by SecureSSI.
- Score: 2.5475486924467075
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Best practices of self-sovereign identity (SSI) are being intensively explored in academia and industry. Reusable solutions obtained from best practices are generalized as architectural patterns for systematic analysis and design reference, which significantly boosts productivity and increases the dependability of future implementations. For security-sensitive projects, architects make architectural decisions with careful consideration of security issues and solutions, based on formal analysis and experimental results. In this paper, we propose a model-driven security analysis framework for analyzing architectural patterns of SSI systems with respect to a threat model built on our investigation of real-world security concerns. Our framework mechanizes a modeling language to formalize patterns and threats with security properties in temporal logic, and automatically generates programs for verification via model checking. In addition, we present typical vulnerable patterns verified by SecureSSI, a standalone integrated development environment that integrates commonly used pattern and attacker models to make our framework practical.
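The abstract describes formalizing SSI patterns as models and verifying temporal-logic security properties via model checking. SecureSSI's internals are not shown here; as a rough, hypothetical illustration of the underlying idea, a minimal explicit-state check that a "bad" state (e.g., a credential issued without holder consent) is unreachable might look like the following sketch. All state names and transitions are invented for illustration and are not taken from the paper.

```python
from collections import deque

def check_invariant(initial, transitions, invariant):
    """Explicit-state reachability check: return a counterexample path
    to a state violating `invariant`, or None if the invariant holds
    in every reachable state (a simple safety property)."""
    frontier = deque([(initial, (initial,))])
    visited = {initial}
    while frontier:
        state, path = frontier.popleft()
        if not invariant(state):
            return path  # counterexample trace, as a model checker would emit
        for nxt in transitions.get(state, ()):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + (nxt,)))
    return None

# Hypothetical (deliberately vulnerable) issuance pattern: one transition
# allows issuance before consent is granted.
transitions = {
    "start": ["consent_requested"],
    "consent_requested": ["consent_granted", "issued_no_consent"],
    "consent_granted": ["issued"],
}
bad_states = {"issued_no_consent"}
trace = check_invariant("start", transitions, lambda s: s not in bad_states)
# `trace` is a witness path showing how the vulnerable pattern reaches
# issuance without consent.
```

In temporal-logic terms this checks the invariant G(not issued_no_consent); a real tool in this space would express richer properties (liveness, fairness) and check them symbolically rather than by explicit enumeration.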
Related papers
- "Glue pizza and eat rocks" -- Exploiting Vulnerabilities in Retrieval-Augmented Generative Models [74.05368440735468]
Retrieval-Augmented Generative (RAG) models enhance Large Language Models (LLMs) with external knowledge bases.
In this paper, we demonstrate a security threat where adversaries can exploit the openness of these knowledge bases.
arXiv Detail & Related papers (2024-06-26T05:36:23Z) - AttackNet: Enhancing Biometric Security via Tailored Convolutional Neural Network Architectures for Liveness Detection [20.821562115822182]
AttackNet is a bespoke Convolutional Neural Network architecture designed to combat spoofing threats in biometric systems.
It offers a layered defense mechanism, seamlessly transitioning from low-level feature extraction to high-level pattern discernment.
Benchmarking our model across diverse datasets affirms its prowess, showcasing superior performance metrics in comparison to contemporary models.
arXiv Detail & Related papers (2024-02-06T07:22:50Z) - It Is Time To Steer: A Scalable Framework for Analysis-driven Attack Graph Generation [50.06412862964449]
An Attack Graph (AG) is the best-suited solution for modeling and analyzing multi-step attacks on computer networks.
This paper introduces an analysis-driven framework for AG generation.
It enables real-time attack path analysis before AG generation completes, with quantifiable statistical significance.
arXiv Detail & Related papers (2023-12-27T10:44:58Z) - Towards Responsible Generative AI: A Reference Architecture for Designing Foundation Model based Agents [28.406492378232695]
Foundation model based agents derive their autonomy from the capabilities of foundation models.
This paper presents a pattern-oriented reference architecture that serves as guidance when designing foundation model based agents.
arXiv Detail & Related papers (2023-11-22T04:21:47Z) - ASSERT: Automated Safety Scenario Red Teaming for Evaluating the Robustness of Large Language Models [65.79770974145983]
ASSERT, Automated Safety Scenario Red Teaming, consists of three methods -- semantically aligned augmentation, target bootstrapping, and adversarial knowledge injection.
We partition our prompts into four safety domains for a fine-grained analysis of how the domain affects model performance.
We find statistically significant performance differences of up to 11% in absolute classification accuracy among semantically related scenarios and error rates of up to 19% absolute error in zero-shot adversarial settings.
arXiv Detail & Related papers (2023-10-14T17:10:28Z) - Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z) - Towards Safer Generative Language Models: A Survey on Safety Risks, Evaluations, and Improvements [76.80453043969209]
This survey presents a framework for safety research pertaining to large models.
We begin by introducing safety issues of wide concern, then delve into safety evaluation methods for large models.
We explore the strategies for enhancing large model safety from training to deployment.
arXiv Detail & Related papers (2023-02-18T09:32:55Z) - Architecting Safer Autonomous Aviation Systems [1.2599533416395767]
This paper considers common architectural patterns used within traditional aviation systems and explores their safety and safety assurance implications.
Considering safety as an architectural property, we discuss both the allocation of safety requirements and the architectural trade-offs involved early in the design lifecycle.
arXiv Detail & Related papers (2023-01-09T21:02:18Z) - Towards automation of threat modeling based on a semantic model of attack patterns and weaknesses [0.0]
This work considers the challenges of building and using a formal knowledge base (model) of attack patterns and weaknesses.
The proposed model can be used to learn relations between techniques, attack patterns, weaknesses, and vulnerabilities in order to build various threat landscapes.
arXiv Detail & Related papers (2021-12-08T11:13:47Z) - STARdom: an architecture for trusted and secure human-centered manufacturing systems [4.093985503448998]
We propose an architecture that integrates forecasting and Explainable Artificial Intelligence, supports collecting users' feedback, and uses Active Learning and Simulated Reality to enhance forecasts.
We tailor it for the domain of demand forecasting and validate it on a real-world case study.
arXiv Detail & Related papers (2021-04-02T11:00:20Z) - Evaluating the Safety of Deep Reinforcement Learning Models using Semi-Formal Verification [81.32981236437395]
We present a semi-formal verification approach for decision-making tasks based on interval analysis.
Our method obtains comparable results over standard benchmarks with respect to formal verifiers.
Our approach allows efficient evaluation of safety properties for decision-making models in practical applications.
arXiv Detail & Related papers (2020-10-19T11:18:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.