Applying Security Testing Techniques to Automotive Engineering
- URL: http://arxiv.org/abs/2309.09647v1
- Date: Mon, 18 Sep 2023 10:32:36 GMT
- Title: Applying Security Testing Techniques to Automotive Engineering
- Authors: Irdin Pekaric, Clemens Sauerwein and Michael Felderer
- Abstract summary: Security regression testing ensures that changes made to a system do not harm its security.
We present a systematic classification of available security regression testing approaches.
- Score: 4.2755847332268235
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The openness of modern IT systems and their permanent change make it
challenging to keep these systems secure. A combination of regression and
security testing called security regression testing, which ensures that changes
made to a system do not harm its security, is therefore of high significance,
and the interest in such approaches has steadily increased. In this article we
present a systematic classification of available security regression testing
approaches based on a solid study of background and related work to sketch
which parts of the research area seem to be well understood and evaluated, and
which ones require further research. For this purpose we extract approaches
relevant to security regression testing from computer science digital libraries
based on a rigorous search and selection strategy. Then, we provide a
classification of these approaches according to security regression approach
criteria: abstraction level, security issue, regression testing techniques, and
tool support, as well as evaluation criteria such as the evaluated system, its
maturity, and the evaluation measures. From the resulting
classification we derive observations with regard to the abstraction level,
regression testing techniques, tool support as well as evaluation, and finally
identify several potential directions of future research.
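As a concrete illustration of the classification criteria named above, the following minimal Python sketch records one hypothetical entry of such a scheme. The field names mirror the criteria listed in the abstract (abstraction level, security issue, regression testing technique, tool support, evaluated system, system maturity, and evaluation measures); the enumerated example values and the sample approach are illustrative assumptions and are not taken from the study.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical encoding of the classification criteria described in the abstract.
# The enumerated values below are illustrative assumptions, not data from the study.

@dataclass
class SecurityRegressionApproach:
    name: str
    abstraction_level: str          # e.g. "model", "source code", "system"
    security_issue: str             # e.g. "vulnerability", "security requirement violation"
    regression_technique: str       # e.g. "test selection", "test prioritization", "test minimization"
    tool_support: Optional[str]     # name of a supporting tool, or None
    evaluated_system: str           # system used in the evaluation
    system_maturity: str            # e.g. "academic prototype", "industrial system"
    evaluation_measures: list[str]  # e.g. ["fault detection", "runtime"]

# Entirely hypothetical example entry:
example = SecurityRegressionApproach(
    name="Hypothetical model-based test selection approach",
    abstraction_level="model",
    security_issue="vulnerability",
    regression_technique="test selection",
    tool_support=None,
    evaluated_system="example web application",
    system_maturity="academic prototype",
    evaluation_measures=["fault detection", "runtime"],
)
```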
Related papers
- A Systematic Review of Edge Case Detection in Automated Driving: Methods, Challenges and Future Directions [0.3871780652193725]
This paper presents a practical, hierarchical review and systematic classification of edge case detection and assessment methodologies.
Our classification is structured on two levels: first, categorizing detection approaches according to AV modules, including perception-related and trajectory-related edge cases.
We introduce a new class called "knowledge-driven" approaches, which is largely overlooked in the literature.
arXiv Detail & Related papers (2024-10-11T03:32:20Z)
- Towards a Framework for Deep Learning Certification in Safety-Critical Applications Using Inherently Safe Design and Run-Time Error Detection [0.0]
We consider real-world problems arising in aviation and other safety-critical areas, and investigate their requirements for a certified model.
We establish a new framework towards deep learning certification based on (i) inherently safe design, and (ii) run-time error detection.
arXiv Detail & Related papers (2024-03-12T11:38:45Z)
- Towards new challenges of modern Pentest [0.0]
This study aims to present current methodologies, tools, and potential challenges applied to Pentest from an updated systematic literature review.
Also, it presents new challenges such as automation of techniques, management of costs associated with offensive security, and the difficulty in hiring qualified professionals to perform Pentest.
arXiv Detail & Related papers (2023-11-21T19:32:23Z)
- Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z)
- From Static Benchmarks to Adaptive Testing: Psychometrics in AI Evaluation [60.14902811624433]
We discuss a paradigm shift from static evaluation methods to adaptive testing.
This involves estimating the characteristics and value of each test item in the benchmark and dynamically adjusting items in real-time.
We analyze the current approaches, advantages, and underlying reasons for adopting psychometrics in AI evaluation.
arXiv Detail & Related papers (2023-06-18T09:54:33Z)
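To illustrate the adaptive-testing idea summarized in the entry above (estimating item characteristics and selecting items dynamically), the following is a minimal sketch based on a two-parameter logistic item response model. The choice of model, the item parameters, and the grid-search ability update are illustrative assumptions, not the method of the cited paper.

```python
import numpy as np

# Minimal adaptive-testing sketch with a 2-parameter logistic (2PL) IRT model.
# Item parameters and the grid-search ability update are illustrative assumptions.

def p_correct(theta, a, b):
    """Probability of a correct response given ability theta, discrimination a, difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of each item at ability theta."""
    p = p_correct(theta, a, b)
    return a ** 2 * p * (1.0 - p)

def next_item(theta, a, b, administered):
    """Pick the not-yet-administered item that is most informative at the current ability estimate."""
    info = item_information(theta, a, b)
    info[list(administered)] = -np.inf
    return int(np.argmax(info))

def update_theta(responses, a, b, grid=np.linspace(-4, 4, 401)):
    """Maximum-likelihood ability estimate over a grid, given (item, correct) responses."""
    log_lik = np.zeros_like(grid)
    for item, correct in responses:
        p = p_correct(grid, a[item], b[item])
        log_lik += np.log(p if correct else 1.0 - p)
    return grid[np.argmax(log_lik)]

# Example usage with hypothetical item parameters for a 5-item benchmark:
a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])   # discriminations
b = np.array([-1.0, 0.0, 0.5, 1.0, 2.0])  # difficulties
theta, administered, responses = 0.0, set(), []
for _ in range(3):
    item = next_item(theta, a, b, administered)
    administered.add(item)
    correct = bool(np.random.rand() < p_correct(theta, a[item], b[item]))  # simulated response
    responses.append((item, correct))
    theta = update_theta(responses, a, b)
```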
- Detecting Misuse of Security APIs: A Systematic Review [5.329280109719902]
Security Application Programming Interfaces (APIs) are crucial for ensuring software security.
Their misuse introduces vulnerabilities, potentially leading to severe data breaches and substantial financial loss.
This study rigorously reviews the literature on detecting misuse of security APIs to gain a comprehensive understanding of this critical domain.
arXiv Detail & Related papers (2023-06-15T05:53:23Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
- Semantic Similarity-Based Clustering of Findings From Security Testing Tools [1.6058099298620423]
In particular, it is common practice to use automated security testing tools that generate reports after inspecting a software artifact from multiple perspectives.
To identify these duplicate findings manually, a security expert has to invest resources like time, effort, and knowledge.
In this study, we investigated the potential of applying Natural Language Processing for clustering semantically similar security findings.
arXiv Detail & Related papers (2022-11-20T19:03:19Z)
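The entry above describes clustering semantically similar security findings with NLP. The following minimal sketch approximates that idea with TF-IDF vectors and agglomerative clustering from scikit-learn; the vectorizer, the distance threshold, and the example findings are illustrative assumptions rather than the cited study's setup.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

# Hypothetical findings, e.g. as reported by different security testing tools.
findings = [
    "SQL injection possible in login form parameter 'user'",
    "Unsanitized input in login form allows SQL injection",
    "Outdated TLS version 1.0 enabled on port 443",
    "Server accepts deprecated TLS 1.0 connections",
]

# Represent each finding as a TF-IDF vector; a sentence-embedding model could be
# substituted for a stronger notion of semantic similarity.
vectors = TfidfVectorizer().fit_transform(findings).toarray()

# Group findings whose cosine distance falls below an (assumed) threshold.
clustering = AgglomerativeClustering(
    n_clusters=None, distance_threshold=0.8, metric="cosine", linkage="average"
).fit(vectors)

for label, text in zip(clustering.labels_, findings):
    print(label, text)
```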
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
- Evaluating the Safety of Deep Reinforcement Learning Models using Semi-Formal Verification [81.32981236437395]
We present a semi-formal verification approach for decision-making tasks based on interval analysis.
Our method obtains comparable results over standard benchmarks with respect to formal verifiers.
Our approach allows us to efficiently evaluate safety properties for decision-making models in practical applications.
arXiv Detail & Related papers (2020-10-19T11:18:06Z)