Towards Model Co-evolution Across Self-Adaptation Steps for Combined Safety and Security Analysis
- URL: http://arxiv.org/abs/2309.09653v1
- Date: Mon, 18 Sep 2023 10:35:40 GMT
- Title: Towards Model Co-evolution Across Self-Adaptation Steps for Combined Safety and Security Analysis
- Authors: Thomas Witte, Raffaela Groner, Alexander Raschke, Matthias Tichy, Irdin Pekaric and Michael Felderer
- Abstract summary: We present several models that describe different aspects of a self-adaptive system.
We outline our idea of how these models can then be combined into an Attack-Fault Tree.
- Score: 44.339753503750735
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Self-adaptive systems offer several attack surfaces due to the communication
via different channels and the different sensors required to observe the
environment. Often, attacks cause safety to be compromised as well, making it
necessary to consider these two aspects together. Furthermore, the approaches
currently used for safety and security analysis do not sufficiently take into
account the intermediate steps of an adaptation. Current work in this area
ignores the fact that a self-adaptive system also exposes vulnerabilities (even
if only temporarily) during the adaptation itself. To address
this issue, we propose a modeling approach that takes into account the
different relevant aspects of a system, its adaptation process, as well as
safety hazards and security attacks. We present several models that describe
different aspects of a self-adaptive system and we outline our idea of how
these models can then be combined into an Attack-Fault Tree. This allows us to
model aspects of the system at different levels of abstraction and to co-evolve
the models using transformations that follow the adaptation of the system.
Finally, analyses can then be performed as usual on the resulting Attack-Fault
Tree.
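To make the combination and co-evolution steps more concrete, here is a minimal sketch in Python of how per-aspect models could be merged into an Attack-Fault Tree, transformed alongside one adaptation step, and then analysed. The paper does not prescribe an implementation; the class, function, and event names (AFTNode, combine, co_evolve, the sensor and spoofing events, and all probabilities) are assumptions made purely for illustration.

```python
# Illustrative sketch only: the paper does not define a concrete API, so all
# class, function, and event names below are hypothetical.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AFTNode:
    """Node of an Attack-Fault Tree: a basic safety fault, a basic attack
    step, or a gate (AND/OR) combining child nodes."""
    name: str
    kind: str                      # "fault", "attack", or "gate"
    gate: Optional[str] = None     # "AND" or "OR" for gate nodes
    probability: float = 0.0       # only meaningful for basic events
    children: List["AFTNode"] = field(default_factory=list)

def combine(top_event: str, fault_subtrees: List[AFTNode],
            attack_subtrees: List[AFTNode]) -> AFTNode:
    """Combine per-aspect models (fault trees and attack trees) under a
    single top-level hazard, as an OR over all ways it can occur."""
    return AFTNode(top_event, "gate", "OR",
                   children=fault_subtrees + attack_subtrees)

def co_evolve(root: AFTNode, opened: List[AFTNode],
              closed: List[str]) -> AFTNode:
    """Transformation mirroring one adaptation step: prune subtrees that the
    adaptation closes and attach, under the top-level OR gate, subtrees for
    vulnerabilities that are (perhaps only temporarily) opened."""
    def prune(node: AFTNode) -> AFTNode:
        kept = [prune(c) for c in node.children if c.name not in closed]
        return AFTNode(node.name, node.kind, node.gate, node.probability, kept)
    evolved = prune(root)
    evolved.children.extend(opened)
    return evolved

def top_event_probability(node: AFTNode) -> float:
    """Naive bottom-up analysis assuming independent basic events."""
    if not node.children:
        return node.probability
    probs = [top_event_probability(c) for c in node.children]
    if node.gate == "AND":
        result = 1.0
        for p in probs:
            result *= p
        return result
    result = 1.0                   # OR gate: 1 - product of complements
    for p in probs:
        result *= 1.0 - p
    return 1.0 - result

# Example: switching to a backup sensor temporarily exposes an
# unauthenticated re-pairing channel (all numbers are made up).
sensor_fault = AFTNode("primary sensor fails", "fault", probability=0.01)
spoofing = AFTNode("spoof re-pairing message", "attack", probability=0.05)
aft = combine("unsafe actuation", [sensor_fault], [])
aft_during_adaptation = co_evolve(aft, opened=[spoofing], closed=[])
print(top_event_probability(aft_during_adaptation))  # ~0.0595
```

The point of the sketch is the co_evolve transformation: because it is applied per adaptation step, vulnerabilities that exist only in an intermediate configuration still appear in the tree that the analysis sees, rather than being averaged away by analysing only the start and end states.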
Related papers
- Co-designing heterogeneous models: a distributed systems approach [0.40964539027092917]
This paper presents a modelling approach tailored for heterogeneous systems, based on three elements: an inferentialist interpretation of what a model is, a distributed systems metaphor, and a co-design cycle that describes the practical design and construction of the model.
We explore the suitability of this method in the context of three different security-oriented models.
arXiv Detail & Related papers (2024-07-10T13:35:38Z)
- Bridging the Gap: Automated Analysis of Sancus [2.045495982086173]
We propose a new method to reduce this gap in the Sancus embedded security architecture.
Our method either finds attacks in the given threat model or gives probabilistic guarantees on the security of the system.
arXiv Detail & Related papers (2024-04-15T07:26:36Z)
- DARTH: Holistic Test-time Adaptation for Multiple Object Tracking [87.72019733473562]
Multiple object tracking (MOT) is a fundamental component of perception systems for autonomous driving.
Despite the safety-critical nature of driving systems, no solution to adapting MOT to domain shift under test-time conditions had previously been proposed.
We introduce DARTH, a holistic test-time adaptation framework for MOT.
arXiv Detail & Related papers (2023-10-03T10:10:42Z)
- Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z)
- Sustainable Adaptive Security [11.574868434725117]
We propose the notion of Sustainable Adaptive Security (SAS) which reflects enduring protection by augmenting adaptive security systems with the capability of mitigating newly discovered threats.
We use a smart home example to showcase how we can engineer the activities of the MAPE (Monitor, Analysis, Planning, and Execution) loop of systems satisfying sustainable adaptive security; an illustrative sketch of such a loop is given after this list.
arXiv Detail & Related papers (2023-06-05T08:48:36Z)
- Safety-Critical Adaptation in Self-Adaptive Systems [1.599072005190786]
This paper proposes a definition of a safety-critical self-adaptive system.
It describes a taxonomy for classifying adaptations into different types based on their impact on the system's safety and the system's safety case.
Each type in the taxonomy is illustrated using the example of a safety-critical self-adaptive water heating system.
arXiv Detail & Related papers (2022-09-30T21:16:34Z)
- Switchable Representation Learning Framework with Self-compatibility [50.48336074436792]
We propose a Switchable representation learning Framework with Self-Compatibility (SFSC).
SFSC generates a series of compatible sub-models with different capacities through one training process.
SFSC achieves state-of-the-art performance on the evaluated datasets.
arXiv Detail & Related papers (2022-06-16T16:46:32Z)
- SafeAMC: Adversarial training for robust modulation recognition models [53.391095789289736]
In communication systems, many tasks, such as modulation recognition, rely on Deep Neural Network (DNN) models.
These models have been shown to be susceptible to adversarial perturbations, namely imperceptible additive noise crafted to induce misclassification.
We propose to use adversarial training, which consists of fine-tuning the model with adversarial perturbations, to increase the robustness of automatic modulation recognition models.
arXiv Detail & Related papers (2021-05-28T11:29:04Z)
- Few-shot model-based adaptation in noisy conditions [15.498933340900606]
We propose to perform few-shot adaptation of dynamics models in noisy conditions using an uncertainty-aware Kalman filter-based neural network architecture.
We show that the proposed method, which explicitly addresses domain noise, improves few-shot adaptation error over a blackbox adaptation LSTM baseline.
The proposed method also allows for system analysis by analyzing hidden states of the model during and after adaptation.
arXiv Detail & Related papers (2020-10-16T13:59:35Z)
- Towards robust sensing for Autonomous Vehicles: An adversarial perspective [82.83630604517249]
For autonomous vehicles, it is of primary importance that decisions based on sensory data are robust to perturbations.
Adversarial perturbations are purposefully crafted alterations of the environment or of the sensory measurements.
A careful evaluation of the vulnerabilities of their sensing system(s) is necessary in order to build and deploy safer systems.
arXiv Detail & Related papers (2020-07-14T05:25:15Z)
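The MAPE loop mentioned in the Sustainable Adaptive Security entry above is the standard control loop for self-adaptive systems, including those targeted by the present paper. The sketch below illustrates its four activities in a smart-home style setting; it is not taken from that paper, and the rule, sensor, threat, and actuator names are invented for illustration only.

```python
# Illustrative MAPE (Monitor, Analysis, Planning, Execution) loop; all
# sensor, threat, and actuator names are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Knowledge:
    """Shared knowledge base read and updated by all four activities."""
    readings: Dict[str, float] = field(default_factory=dict)
    known_threats: List[str] = field(default_factory=list)

def monitor(sensors: Dict[str, float], k: Knowledge) -> None:
    # Monitor: collect current observations into the knowledge base.
    k.readings.update(sensors)

def analyse(k: Knowledge) -> List[str]:
    # Analysis: flag known threats that the current readings make plausible.
    active = []
    if k.readings.get("door_open", 0.0) > 0.5 and "intrusion" in k.known_threats:
        active.append("intrusion")
    return active

def plan(threats: List[str]) -> List[str]:
    # Planning: map detected threats to mitigating actions.
    return ["lock_doors", "notify_owner"] if threats else []

def execute(actions: List[str], actuators: Dict[str, Callable[[], None]]) -> None:
    # Execution: trigger the actuators chosen by the plan.
    for action in actions:
        actuators[action]()

def mape_step(sensors: Dict[str, float],
              actuators: Dict[str, Callable[[], None]],
              k: Knowledge) -> None:
    monitor(sensors, k)
    execute(plan(analyse(k)), actuators)

# Example run with stub actuators.
k = Knowledge(known_threats=["intrusion"])
actuators = {"lock_doors": lambda: print("locking doors"),
             "notify_owner": lambda: print("notifying owner")}
mape_step({"door_open": 1.0}, actuators, k)
```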