Assessing Risks and Modeling Threats in the Internet of Things
- URL: http://arxiv.org/abs/2110.07771v1
- Date: Thu, 14 Oct 2021 23:36:00 GMT
- Title: Assessing Risks and Modeling Threats in the Internet of Things
- Authors: Paul Griffioen and Bruno Sinopoli
- Abstract summary: We develop an IoT attack taxonomy that describes the adversarial assets, adversarial actions, exploitable vulnerabilities, and compromised properties that are components of any IoT attack.
We use this IoT attack taxonomy as the foundation for designing a joint risk assessment and maturity assessment framework.
The usefulness of this IoT framework is highlighted by case study implementations in the context of multiple industrial manufacturing companies.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Threat modeling and risk assessments are common ways to identify, estimate,
and prioritize risk to national, organizational, and individual operations and
assets. Several threat modeling and risk assessment approaches have been
proposed prior to the advent of the Internet of Things (IoT) that focus on
threats and risks in information technology (IT). Due to shortcomings in these
approaches and the fact that there are significant differences between the IoT
and IT, we synthesize and adapt these approaches to provide a threat modeling
framework that focuses on threats and risks in the IoT. In doing so, we develop
an IoT attack taxonomy that describes the adversarial assets, adversarial
actions, exploitable vulnerabilities, and compromised properties that are
components of any IoT attack. We use this IoT attack taxonomy as the foundation
for designing a joint risk assessment and maturity assessment framework that is
implemented as an interactive online tool. The assessment framework this tool
encodes provides organizations with specific recommendations about where
resources should be devoted to mitigate risk. The usefulness of this IoT
framework is highlighted by case study implementations in the context of
multiple industrial manufacturing companies, and the interactive implementation
of this framework is available at http://iotrisk.andrew.cmu.edu.
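As a rough illustration of how a joint risk and maturity assessment can yield prioritization recommendations, the sketch below combines an attack's likelihood and impact with an organization's control maturity to produce a residual-risk ranking. The scoring formula, component names, and example attack vectors are illustrative assumptions for this sketch, not the authors' actual model or tool.

```python
# Hypothetical sketch of a joint risk/maturity score in the spirit of the
# framework above; the formula and weights are illustrative assumptions.

def risk_score(likelihood: float, impact: float, maturity: float) -> float:
    """Combine attack likelihood, impact, and organizational maturity.

    All inputs lie in [0, 1]; higher maturity (stronger existing
    controls) reduces the residual risk attributed to an attack vector.
    """
    if not all(0.0 <= v <= 1.0 for v in (likelihood, impact, maturity)):
        raise ValueError("all inputs must lie in [0, 1]")
    return likelihood * impact * (1.0 - maturity)

# Rank hypothetical IoT attack vectors by residual risk so that mitigation
# resources can be devoted where they matter most.
vectors = {
    "firmware tampering":    risk_score(0.4, 0.9, 0.2),
    "sensor spoofing":       risk_score(0.6, 0.7, 0.5),
    "network eavesdropping": risk_score(0.8, 0.3, 0.7),
}
ranked = sorted(vectors, key=vectors.get, reverse=True)
```

In this toy ranking, firmware tampering comes out highest because its high impact is only weakly offset by existing controls, which mirrors the kind of "where to devote resources" recommendation the interactive tool is described as providing.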
Related papers
- Cross-Modality Safety Alignment
We introduce a novel safety alignment challenge called Safe Inputs but Unsafe Output (SIUO) to evaluate cross-modality safety alignment.
To empirically investigate this problem, we developed the SIUO, a cross-modality benchmark encompassing 9 critical safety domains, such as self-harm, illegal activities, and privacy violations.
Our findings reveal substantial safety vulnerabilities in both closed- and open-source LVLMs, underscoring the inadequacy of current models to reliably interpret and respond to complex, real-world scenarios.
arXiv Detail & Related papers (2024-06-21T16:14:15Z) - Security Risks Concerns of Generative AI in the IoT
In an era where the Internet of Things (IoT) intersects increasingly with generative Artificial Intelligence (AI), this article scrutinizes the emergent security risks inherent in this integration.
We explore how generative AI drives innovation in IoT, and we analyze the potential for data breaches and the misuse of generative AI technologies in IoT ecosystems.
arXiv Detail & Related papers (2024-03-29T20:28:30Z) - Mapping LLM Security Landscapes: A Comprehensive Stakeholder Risk Assessment Proposal
We propose a risk assessment process using tools such as the risk rating methodology applied to traditional systems.
We conduct scenario analysis to identify potential threat agents and map the dependent system components against vulnerability factors.
We also map threats against three key stakeholder groups.
arXiv Detail & Related papers (2024-03-20T05:17:22Z) - Asset-centric Threat Modeling for AI-based Systems
This paper presents ThreatFinderAI, an approach and tool to model AI-related assets, threats, countermeasures, and quantify residual risks.
To evaluate the practicality of the approach, participants were tasked to recreate a threat model developed by cybersecurity experts of an AI-based healthcare platform.
Overall, participants perceived the solution's usability positively, and it effectively supported threat identification and risk discussion.
arXiv Detail & Related papers (2024-03-11T08:40:01Z) - TMAP: A Threat Modeling and Attack Path Analysis Framework for Industrial IoT Systems (A Case Study of IoM and IoP)
To deploy secure Industrial Control and Production Systems (ICPS) in smart factories, cyber threats and risks must be addressed.
Current approaches for threat modeling in cyber-physical systems (CPS) are ad hoc and inefficient.
This paper proposes a novel quantitative threat modeling approach, aiming to identify probable attack vectors, assess the path of attacks, and evaluate the magnitude of each vector.
arXiv Detail & Related papers (2023-12-23T18:32:53Z) - Measures of Resilience to Cyber Contagion -- An Axiomatic Approach for Complex Systems
We introduce a novel class of risk measures designed for the management of systemic risk in networks.
In contrast to prevailing approaches, these risk measures target the topological configuration of the network in order to mitigate the propagation risk of contagious threats.
arXiv Detail & Related papers (2023-12-21T14:29:04Z) - Model evaluation for extreme risks
Further progress in AI development could lead to capabilities that pose extreme risks, such as offensive cyber capabilities or strong manipulation skills.
We explain why model evaluation is critical for addressing extreme risks.
arXiv Detail & Related papers (2023-05-24T16:38:43Z) - On the Security Risks of Knowledge Graph Reasoning
We systematize the security threats to KGR according to the adversary's objectives, knowledge, and attack vectors.
We present ROAR, a new class of attacks that instantiate a variety of such threats.
We explore potential countermeasures against ROAR, including filtering of potentially poisoning knowledge and training with adversarially augmented queries.
arXiv Detail & Related papers (2023-05-03T18:47:42Z) - Holistic Adversarial Robustness of Deep Learning Models
Adversarial robustness studies the worst-case performance of a machine learning model to ensure safety and reliability.
This paper provides a comprehensive overview of research topics and foundational principles of research methods for adversarial robustness of deep learning models.
arXiv Detail & Related papers (2022-02-15T05:30:27Z) - Risk Management Framework for Machine Learning Security
Adversarial attacks for machine learning models have become a highly studied topic both in academia and industry.
In this paper, we outline a novel framework to guide the risk management process for organizations reliant on machine learning models.
arXiv Detail & Related papers (2020-12-09T06:21:34Z) - An Automated, End-to-End Framework for Modeling Attacks From Vulnerability Descriptions
In order to derive a relevant attack graph, up-to-date information on known attack techniques should be represented as interaction rules.
We present a novel, end-to-end, automated framework for modeling new attack techniques from textual description of a security vulnerability.
arXiv Detail & Related papers (2020-08-10T19:27:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed (including all generated summaries) and is not responsible for any consequences of its use.