Model-Driven Security Analysis of Self-Sovereign Identity Systems
- URL: http://arxiv.org/abs/2406.00620v1
- Date: Sun, 2 Jun 2024 05:44:32 GMT
- Title: Model-Driven Security Analysis of Self-Sovereign Identity Systems
- Authors: Yepeng Ding, Hiroyuki Sato
- Abstract summary: We propose a model-driven security analysis framework for analyzing architectural patterns of SSI systems.
Our framework mechanizes a modeling language to formalize patterns and threats with security properties in temporal logic.
We present typical vulnerable patterns verified by SecureSSI.
- Score: 2.5475486924467075
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Best practices of self-sovereign identity (SSI) are being intensively explored in academia and industry. Reusable solutions obtained from best practices are generalized as architectural patterns for systematic analysis and design reference, which significantly boosts productivity and increases the dependability of future implementations. For security-sensitive projects, architects make architectural decisions with careful consideration of security issues and solutions, based on formal analysis and experimental results. In this paper, we propose a model-driven security analysis framework for analyzing architectural patterns of SSI systems with respect to a threat model built on our investigation of real-world security concerns. Our framework mechanizes a modeling language to formalize patterns and threats with security properties in temporal logic, and automatically generates programs for verification via model checking. In addition, we present typical vulnerable patterns verified by SecureSSI, a standalone integrated development environment that integrates commonly used pattern and attacker models to put our framework into practice.
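The abstract describes formalizing patterns and threats as models with security properties in temporal logic, then verifying them by model checking. The sketch below illustrates that style of analysis on a toy SSI presentation pattern; the state names, the vulnerable transition, and the property are illustrative assumptions, not taken from the paper's SecureSSI models.

```python
from collections import deque

# Hypothetical transition system for an SSI credential-presentation pattern
# in which the verifier omits a revocation-status check (illustrative only).
# A state is (phase, revoked_flag).
def successors(state):
    phase, revoked = state
    if phase == "issued":
        yield ("presented", revoked)
        yield ("issued", True)       # issuer revokes the credential
    elif phase == "presented":
        yield ("verified", revoked)  # vulnerable: no revocation check here

def check_always(initial, invariant):
    """Explicit-state check of the temporal safety property G(invariant):
    breadth-first search over reachable states, returning a counterexample
    trace to a violating state, or None if the property holds."""
    queue, visited = deque([[initial]]), {initial}
    while queue:
        path = queue.popleft()
        if not invariant(path[-1]):
            return path  # counterexample trace
        for nxt in successors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

# Safety property: a revoked credential is never successfully verified.
trace = check_always(("issued", False),
                     lambda s: not (s[0] == "verified" and s[1]))
print(trace)
```

Because the pattern skips the status check, the search finds a counterexample trace (issue, revoke, present, verify), which is the kind of evidence a model checker would report for a vulnerable pattern.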
Related papers
- Model Developmental Safety: A Safety-Centric Method and Applications in Vision-Language Models [75.8161094916476]
We study how to develop a pretrained vision-language model (aka the CLIP model) for acquiring new capabilities or improving existing capabilities of image classification.
Our experiments on improving vision perception capabilities on autonomous driving and scene recognition datasets demonstrate the efficacy of the proposed approach.
arXiv Detail & Related papers (2024-10-04T22:34:58Z) - AsIf: Asset Interface Analysis of Industrial Automation Devices [1.3216177247621483]
Industrial control systems are increasingly adopting IT solutions, including communication standards and protocols.
As these systems become more decentralized and interconnected, a critical need for enhanced security measures arises.
Threat modeling is traditionally performed in structured brainstorming sessions involving domain and security experts.
We propose a method for the analysis of assets in industrial systems, with special focus on physical threats.
arXiv Detail & Related papers (2024-09-26T07:19:15Z) - Building a Cybersecurity Risk Metamodel for Improved Method and Tool Integration [0.38073142980732994]
We report on our experience in applying a model-driven approach on the initial risk analysis step in connection with a later security testing.
Our work relies on a common metamodel that is used to map, synchronise, and ensure information traceability across different tools.
arXiv Detail & Related papers (2024-09-12T10:18:26Z) - EAIRiskBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [47.69642609574771]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models as the "brain" of EAI agents for high-level task planning have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EAIRiskBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z) - "Glue pizza and eat rocks" -- Exploiting Vulnerabilities in Retrieval-Augmented Generative Models [74.05368440735468]
Retrieval-Augmented Generative (RAG) models enhance Large Language Models (LLMs) by grounding them in external knowledge bases.
In this paper, we demonstrate a security threat where adversaries can exploit the openness of these knowledge bases.
arXiv Detail & Related papers (2024-06-26T05:36:23Z) - Towards Responsible Generative AI: A Reference Architecture for Designing Foundation Model based Agents [28.406492378232695]
Foundation model based agents derive their autonomy from the capabilities of foundation models.
This paper presents a pattern-oriented reference architecture that serves as guidance when designing foundation model based agents.
arXiv Detail & Related papers (2023-11-22T04:21:47Z) - Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z) - Towards Safer Generative Language Models: A Survey on Safety Risks, Evaluations, and Improvements [76.80453043969209]
This survey presents a framework for safety research pertaining to large models.
We begin by introducing safety issues of wide concern, then delve into safety evaluation methods for large models.
We explore the strategies for enhancing large model safety from training to deployment.
arXiv Detail & Related papers (2023-02-18T09:32:55Z) - Architecting Safer Autonomous Aviation Systems [1.2599533416395767]
This paper considers common architectural patterns used within traditional aviation systems and explores their safety and safety assurance implications.
Considering safety as an architectural property, we discuss both the allocation of safety requirements and the architectural trade-offs involved early in the design lifecycle.
arXiv Detail & Related papers (2023-01-09T21:02:18Z) - Towards automation of threat modeling based on a semantic model of attack patterns and weaknesses [0.0]
This work considers the challenges of building and using a formal knowledge base (model) of attack patterns and weaknesses.
The proposed model can be used to learn relations between techniques, attack pattern, weaknesses, and vulnerabilities in order to build various threat landscapes.
arXiv Detail & Related papers (2021-12-08T11:13:47Z) - Evaluating the Safety of Deep Reinforcement Learning Models using Semi-Formal Verification [81.32981236437395]
We present a semi-formal verification approach for decision-making tasks based on interval analysis.
Our method obtains comparable results over standard benchmarks with respect to formal verifiers.
Our approach allows to efficiently evaluate safety properties for decision-making models in practical applications.
arXiv Detail & Related papers (2020-10-19T11:18:06Z)
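The last related entry above describes semi-formal verification of decision-making models via interval analysis. A minimal sketch of that idea is interval bound propagation through a tiny ReLU network; the weights, bounds, and safety threshold below are made-up illustrations, not values from that paper.

```python
# Interval bound propagation: push elementwise input intervals [lo, hi]
# through an affine layer y = Wx + b, splitting each weight by sign.
def interval_affine(lo, hi, weights, bias):
    out_lo, out_hi = [], []
    for row, b in zip(weights, bias):
        out_lo.append(b + sum(w * (lo[i] if w >= 0 else hi[i])
                              for i, w in enumerate(row)))
        out_hi.append(b + sum(w * (hi[i] if w >= 0 else lo[i])
                              for i, w in enumerate(row)))
    return out_lo, out_hi

def interval_relu(lo, hi):
    # ReLU is monotone, so it maps interval endpoints to endpoints.
    return [max(0.0, v) for v in lo], [max(0.0, v) for v in hi]

# One hidden layer with invented weights; inputs range over [0, 1]^2.
W1, b1 = [[1.0, -2.0], [0.5, 1.0]], [0.0, -1.0]
W2, b2 = [[1.0, 1.0]], [0.0]
lo, hi = interval_affine([0.0, 0.0], [1.0, 1.0], W1, b1)
lo, hi = interval_relu(lo, hi)
lo, hi = interval_affine(lo, hi, W2, b2)

# Safety property: the output never exceeds a threshold of 2.0.
print(hi[0] <= 2.0)  # prints True: the upper bound certifies the property
```

If the computed upper bound stays below the threshold, the property is certified for every input in the box; a bound above the threshold is inconclusive, which is why such approaches are called semi-formal.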
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.