Towards Principled Risk Scores for Space Cyber Risk Management
- URL: http://arxiv.org/abs/2402.02635v1
- Date: Sun, 4 Feb 2024 23:01:49 GMT
- Title: Towards Principled Risk Scores for Space Cyber Risk Management
- Authors: Ekzhin Ear, Brandon Bailey, Shouhuai Xu
- Abstract summary: The Aerospace Corporation proposed Notional Risk Scores (NRS) within their Space Attack Research and Tactic Analysis framework.
While intended for adoption by practitioners, NRS has not been analyzed with real-world scenarios, putting its effectiveness into question.
In this paper we analyze NRS via a real-world cyber attack scenario against a satellite, and characterize the strengths, weaknesses, and applicability of NRS.
- Score: 5.715413347864052
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Space is an emerging domain critical to humankind. Correspondingly, space cybersecurity is an emerging field with much research to be done. To help space cybersecurity practitioners better manage cyber risks, The Aerospace Corporation proposed Notional Risk Scores (NRS) within their Space Attack Research and Tactic Analysis (SPARTA) framework, which can be applied to quantify the cyber risks associated with space infrastructures and systems. While intended for adoption by practitioners, NRS has not been analyzed with real-world scenarios, putting its effectiveness into question. In this paper we analyze NRS via a real-world cyber attack scenario against a satellite, and characterize the strengths, weaknesses, and applicability of NRS. The characterization prompts us to propose a set of desired properties to guide the design of future NRS. As a first step along this direction, we further propose a formalism to serve as a baseline for designing future NRS with those desired properties.
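To make the idea of a quantitative risk score concrete, below is a minimal sketch assuming a conventional likelihood-times-impact formulation on discrete 1-5 scales. The scale values, band thresholds, and the `TechniqueRisk` helper are illustrative assumptions for exposition only; they are not taken from SPARTA's actual NRS definition or from the formalism proposed in the paper.

```python
from dataclasses import dataclass

# Illustrative only: discrete 1-5 scales for likelihood and impact,
# combined multiplicatively into a 1-25 score. The published SPARTA
# NRS values and bands may differ.
RISK_BANDS = [(20, "critical"), (12, "high"), (6, "moderate"), (0, "low")]

@dataclass
class TechniqueRisk:
    technique_id: str   # hypothetical technique identifier, not a real SPARTA ID
    likelihood: int     # 1 (rare) .. 5 (almost certain)
    impact: int         # 1 (negligible) .. 5 (catastrophic)

    def score(self) -> int:
        """Notional risk score as likelihood x impact (an assumption, not SPARTA's formula)."""
        if not (1 <= self.likelihood <= 5 and 1 <= self.impact <= 5):
            raise ValueError("likelihood and impact must be on a 1-5 scale")
        return self.likelihood * self.impact

    def band(self) -> str:
        """Map the numeric score to a qualitative band using illustrative thresholds."""
        s = self.score()
        for threshold, label in RISK_BANDS:
            if s >= threshold:
                return label
        return "low"

if __name__ == "__main__":
    # Hypothetical example: a command-uplink spoofing technique judged
    # moderately likely with severe mission impact.
    risk = TechniqueRisk(technique_id="EX-0001", likelihood=3, impact=5)
    print(risk.technique_id, risk.score(), risk.band())  # -> EX-0001 15 high
```

A formalism of the kind the paper proposes would make explicit the properties such a sketch leaves implicit, for example that a score should increase monotonically with both likelihood and impact.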
Related papers
- Risks and NLP Design: A Case Study on Procedural Document QA [52.557503571760215]
We argue that clearer assessments of risks and harms to users will be possible when we specialize the analysis to more concrete applications and their plausible users.
We conduct a risk-oriented error analysis that could then inform the design of a future system to be deployed with lower risk of harm and better performance.
arXiv Detail & Related papers (2024-08-16T17:23:43Z)
- A Synergistic Approach In Network Intrusion Detection By Neurosymbolic AI [6.315966022962632]
This paper explores the potential of incorporating Neurosymbolic Artificial Intelligence (NSAI) into Network Intrusion Detection Systems (NIDS).
NSAI combines deep learning's data-driven strengths with symbolic AI's logical reasoning to tackle the dynamic challenges in cybersecurity.
The inclusion of NSAI in NIDS marks potential advancements in both the detection and interpretation of intricate network threats.
arXiv Detail & Related papers (2024-06-03T02:24:01Z)
- Prioritizing Safeguarding Over Autonomy: Risks of LLM Agents for Science [65.77763092833348]
Intelligent agents powered by large language models (LLMs) have demonstrated substantial promise in autonomously conducting experiments and facilitating scientific discoveries across various disciplines.
While their capabilities are promising, these agents also introduce novel vulnerabilities that demand careful consideration for safety.
This paper conducts a thorough examination of vulnerabilities in LLM-based agents within scientific domains, shedding light on potential risks associated with their misuse and emphasizing the need for safety measures.
arXiv Detail & Related papers (2024-02-06T18:54:07Z)
- Evaluating the Security of Satellite Systems [24.312198733476063]
This paper presents a comprehensive taxonomy of adversarial tactics, techniques, and procedures explicitly targeting satellites.
We examine the space ecosystem including the ground, space, communication, and user segments, highlighting their architectures, functions, and vulnerabilities.
We propose a novel extension of the MITRE ATT&CK framework to categorize satellite attack techniques across the adversary lifecycle from reconnaissance to impact.
arXiv Detail & Related papers (2023-12-03T09:38:28Z)
- Critical Infrastructure Security Goes to Space: Leveraging Lessons Learned on the Ground [2.1180074160333815]
Space systems enable essential communications, navigation, imaging and sensing for a variety of domains.
While the space environment brings unique constraints to managing cybersecurity risks, lessons learned about risks and effective defenses in other critical infrastructure domains can help us to design effective defenses for space systems.
This paper provides an overview of ICS and space system commonalities, lessons learned about cybersecurity for ICS that can be applied to space systems, and recommendations for future research and development to secure increasingly critical space systems.
arXiv Detail & Related papers (2023-09-26T19:53:40Z)
- On the Security Risks of Knowledge Graph Reasoning [71.64027889145261]
We systematize the security threats to KGR according to the adversary's objectives, knowledge, and attack vectors.
We present ROAR, a new class of attacks that instantiate a variety of such threats.
We explore potential countermeasures against ROAR, including filtering of potentially poisoning knowledge and training with adversarially augmented queries.
arXiv Detail & Related papers (2023-05-03T18:47:42Z)
- Backward Reachability Analysis of Neural Feedback Loops: Techniques for Linear and Nonlinear Systems [59.57462129637796]
This paper presents a backward reachability approach for safety verification of closed-loop systems with neural networks (NNs).
The presence of NNs in the feedback loop presents a unique set of problems due to the nonlinearities in their activation functions and because NN models are generally not invertible.
We present frameworks for calculating backprojection (BP) set over-approximations for both linear and nonlinear systems with control policies represented by feedforward NNs.
arXiv Detail & Related papers (2022-09-28T13:17:28Z)
- Trustworthy Graph Neural Networks: Aspects, Methods and Trends [115.84291569988748]
Graph neural networks (GNNs) have emerged as competent graph learning methods for diverse real-world scenarios.
Performance-oriented GNNs have exhibited potential adverse effects like vulnerability to adversarial attacks.
To avoid these unintentional harms, it is necessary to build competent GNNs characterised by trustworthiness.
arXiv Detail & Related papers (2022-05-16T02:21:09Z)
- Targeted Data Poisoning Attack on News Recommendation System [10.1794489884216]
News Recommendation System (NRS) has become a fundamental technology to many online news services.
We propose a novel approach to poison the NRS, which perturbs the contents of some browsed news items to manipulate the rank of the target news.
We design a reinforcement learning framework, called TDP-CP, which contains a two-stage hierarchical model to reduce the searching space.
arXiv Detail & Related papers (2022-03-04T16:01:11Z)
- Physical Side-Channel Attacks on Embedded Neural Networks: A Survey [0.32634122554913997]
Neural Networks (NN) are expected to become ubiquitous in IoT systems by transforming all sorts of real-world applications.
However, embedded NN implementations are vulnerable to Side-Channel Analysis (SCA) attacks.
This paper surveys state-of-the-art physical SCA attacks relative to the implementation of embedded NNs on micro-controllers and FPGAs.
arXiv Detail & Related papers (2021-10-21T17:18:52Z)
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.