Assessing Risks and Modeling Threats in the Internet of Things
- URL: http://arxiv.org/abs/2110.07771v1
- Date: Thu, 14 Oct 2021 23:36:00 GMT
- Title: Assessing Risks and Modeling Threats in the Internet of Things
- Authors: Paul Griffioen and Bruno Sinopoli
- Abstract summary: We develop an IoT attack taxonomy that describes the adversarial assets, adversarial actions, exploitable vulnerabilities, and compromised properties that are components of any IoT attack.
We use this IoT attack taxonomy as the foundation for designing a joint risk assessment and maturity assessment framework.
The usefulness of this IoT framework is highlighted by case study implementations in the context of multiple industrial manufacturing companies.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Threat modeling and risk assessments are common ways to identify, estimate,
and prioritize risk to national, organizational, and individual operations and
assets. Several threat modeling and risk assessment approaches proposed prior
to the advent of the Internet of Things (IoT) focus on threats and risks in
information technology (IT). Due to shortcomings in these
approaches and the fact that there are significant differences between the IoT
and IT, we synthesize and adapt these approaches to provide a threat modeling
framework that focuses on threats and risks in the IoT. In doing so, we develop
an IoT attack taxonomy that describes the adversarial assets, adversarial
actions, exploitable vulnerabilities, and compromised properties that are
components of any IoT attack. We use this IoT attack taxonomy as the foundation
for designing a joint risk assessment and maturity assessment framework that is
implemented as an interactive online tool. The assessment framework this tool
encodes provides organizations with specific recommendations about where
resources should be devoted to mitigate risk. The usefulness of this IoT
framework is highlighted by case study implementations in the context of
multiple industrial manufacturing companies, and the interactive implementation
of this framework is available at http://iotrisk.andrew.cmu.edu.
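To make the taxonomy above concrete, here is a minimal Python sketch of how an attack entry with the four components (adversarial assets, adversarial actions, exploitable vulnerabilities, compromised properties) might be combined with a likelihood-impact risk score and a maturity level to produce a recommendation. The schema, scales, and thresholds are illustrative assumptions, not the encoding used by the authors' online tool.

```python
from dataclasses import dataclass

@dataclass
class IoTAttack:
    """One entry in the four-component attack taxonomy (illustrative schema)."""
    adversarial_asset: str        # what the adversary uses, e.g. a rogue device
    adversarial_action: str       # what the adversary does, e.g. inject false data
    exploited_vulnerability: str  # weakness enabling the action
    compromised_property: str     # security property lost, e.g. integrity

def risk_score(likelihood: int, impact: int) -> int:
    """Classic likelihood x impact rating on 1-5 scales (assumed, not from the paper)."""
    return likelihood * impact

def recommend(attack: IoTAttack, likelihood: int, impact: int, maturity: int) -> str:
    """Flag attacks whose risk outpaces the organization's control maturity (1-5)."""
    score = risk_score(likelihood, impact)
    if score >= 15 and maturity <= 2:
        return f"High priority: harden against '{attack.exploited_vulnerability}'"
    if score >= 8:
        return f"Monitor: review controls protecting '{attack.compromised_property}'"
    return "Acceptable risk at current maturity"

spoofing = IoTAttack("rogue device", "inject false sensor data",
                     "unauthenticated MQTT broker", "integrity")
print(recommend(spoofing, likelihood=4, impact=4, maturity=2))
# -> High priority: harden against 'unauthenticated MQTT broker'
```

In the actual framework the maturity assessment is tied to specific organizational practices; the single integer used here only stands in for that dimension.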
Related papers
- A Formal Framework for Assessing and Mitigating Emergent Security Risks in Generative AI Models: Bridging Theory and Dynamic Risk Mitigation [0.3413711585591077]
As generative AI systems, including large language models (LLMs) and diffusion models, advance rapidly, their growing adoption has led to new and complex security risks.
This paper introduces a novel formal framework for categorizing and mitigating these emergent security risks.
We identify previously under-explored risks, including latent space exploitation, multi-modal cross-attack vectors, and feedback-loop-induced model degradation.
arXiv Detail & Related papers (2024-10-15T02:51:32Z)
- Attack Atlas: A Practitioner's Perspective on Challenges and Pitfalls in Red Teaming GenAI [52.138044013005]
As generative AI, particularly large language models (LLMs), becomes increasingly integrated into production applications, new attack surfaces and vulnerabilities emerge, putting a focus on adversarial threats in natural language and multi-modal systems.
Red-teaming has gained importance in proactively identifying weaknesses in these systems, while blue-teaming works to protect against such adversarial attacks.
This work aims to bridge the gap between academic insights and practical security measures for the protection of generative AI systems.
arXiv Detail & Related papers (2024-09-23T10:18:10Z)
- Towards Threat Modelling of IoT Context-Sharing Platforms [4.098759138493994]
We propose a framework for threat modelling and security analysis of a generic IoT context-sharing solution.
We identify significant security challenges in the design of IoT context-sharing platforms.
Our threat modelling provides an in-depth analysis of the techniques and sub-techniques adversaries may use to exploit these systems.
arXiv Detail & Related papers (2024-08-22T02:41:06Z)
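For the "Towards Threat Modelling of IoT Context-Sharing Platforms" entry above, the technique/sub-technique structure it mentions is the kind popularized by MITRE ATT&CK. Below is a hedged sketch of such a mapping for a hypothetical context broker; the component, technique IDs, and names are placeholders for illustration, not the paper's findings.

```python
# Hypothetical ATT&CK-style technique tree for an IoT context-sharing broker.
# IDs and names are invented placeholders, not the paper's results.
threat_model: dict[str, dict[str, list[str]]] = {
    "context-broker": {
        "TX001 Adversary-in-the-Middle": [
            "TX001.001 Intercept context updates in transit",
            "TX001.002 Tamper with subscription responses",
        ],
        "TX002 Abuse of Context Subscriptions": [
            "TX002.001 Over-broad subscription to leak context",
        ],
    },
}

for component, techniques in threat_model.items():
    for technique, subs in techniques.items():
        print(f"{component}: {technique}")
        for sub in subs:
            print(f"  - {sub}")
```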
- EAIRiskBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [47.69642609574771]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models as the "brain" of EAI agents for high-level task planning have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EAIRiskBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- Cross-Modality Safety Alignment [73.8765529028288]
We introduce a novel safety alignment challenge called Safe Inputs but Unsafe Output (SIUO) to evaluate cross-modality safety alignment.
To empirically investigate this problem, we developed SIUO, a cross-modality benchmark encompassing 9 critical safety domains, such as self-harm, illegal activities, and privacy violations.
Our findings reveal substantial safety vulnerabilities in both closed- and open-source LVLMs, underscoring the inadequacy of current models to reliably interpret and respond to complex, real-world scenarios.
arXiv Detail & Related papers (2024-06-21T16:14:15Z)
- Security Risks Concerns of Generative AI in the IoT [9.35121449708677]
In an era where the Internet of Things (IoT) intersects increasingly with generative Artificial Intelligence (AI), this article scrutinizes the emergent security risks inherent in this integration.
We explore how generative AI drives innovation in IoT, and we analyze the potential for data breaches and the misuse of generative AI technologies in IoT ecosystems.
arXiv Detail & Related papers (2024-03-29T20:28:30Z)
- Mapping LLM Security Landscapes: A Comprehensive Stakeholder Risk Assessment Proposal [0.0]
We propose a risk assessment process using tools such as the risk rating methodology used for traditional systems.
We conduct scenario analysis to identify potential threat agents and map the dependent system components against vulnerability factors.
We also map threats against three key stakeholder groups.
arXiv Detail & Related papers (2024-03-20T05:17:22Z)
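The "Mapping LLM Security Landscapes" entry above rates threats with a traditional risk rating methodology and maps them to three stakeholder groups. Below is a simplified likelihood-times-impact banding, loosely in the spirit of OWASP-style risk rating; the stakeholder groups, threats, and scores are invented for illustration.

```python
# Simplified OWASP-style rating: risk = likelihood x impact, bucketed into bands.
# Stakeholder groups, threats, and scores are invented for illustration.
def severity(likelihood: float, impact: float) -> str:
    score = likelihood * impact          # both on a 0-9 scale
    if score < 9:
        return "LOW"
    if score < 27:
        return "MEDIUM"
    return "HIGH"

threats = {
    "prompt injection":   {"model providers": (6, 4), "integrators": (8, 7), "end users": (5, 6)},
    "training data leak": {"model providers": (3, 8), "integrators": (2, 5), "end users": (4, 7)},
}

for threat, groups in threats.items():
    for group, (lik, imp) in groups.items():
        print(f"{threat:18} / {group:15}: {severity(lik, imp)}")
```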
- Asset-centric Threat Modeling for AI-based Systems [7.696807063718328]
This paper presents ThreatFinderAI, an approach and tool to model AI-related assets, threats, countermeasures, and quantify residual risks.
To evaluate the practicality of the approach, participants were tasked to recreate a threat model developed by cybersecurity experts of an AI-based healthcare platform.
Overall, the solution's usability was well-perceived and effectively supports threat identification and risk discussion.
arXiv Detail & Related papers (2024-03-11T08:40:01Z)
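The ThreatFinderAI entry above quantifies residual risk after countermeasures. One common formulation (an assumption here, not necessarily the tool's actual formula) discounts inherent risk by the combined effectiveness of independent countermeasures:

```python
# Residual risk = inherent risk x (1 - combined countermeasure effectiveness).
# The combination rule and all numbers are illustrative assumptions.
def residual_risk(inherent: float, effectiveness: list[float]) -> float:
    remaining = 1.0
    for e in effectiveness:
        remaining *= (1.0 - e)   # countermeasures assumed independent
    return inherent * remaining

# Hypothetical AI asset: a fine-tuned clinical model behind an API.
inherent = 0.8                   # inherent risk of model extraction
controls = [0.5, 0.3]            # e.g. rate limiting, output watermarking
print(f"residual risk: {residual_risk(inherent, controls):.2f}")  # -> 0.28
```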
- On the Security Risks of Knowledge Graph Reasoning [71.64027889145261]
We systematize the security threats to knowledge graph reasoning (KGR) according to the adversary's objectives, knowledge, and attack vectors.
We present ROAR, a new class of attacks that instantiate a variety of such threats.
We explore potential countermeasures against ROAR, including filtering of potentially poisoning knowledge and training with adversarially augmented queries.
arXiv Detail & Related papers (2023-05-03T18:47:42Z)
- ThreatKG: An AI-Powered System for Automated Open-Source Cyber Threat Intelligence Gathering and Management [65.0114141380651]
ThreatKG is an automated system for OSCTI gathering and management.
It efficiently collects a large number of OSCTI reports from multiple sources.
It uses specialized AI-based techniques to extract high-quality knowledge about various threat entities.
arXiv Detail & Related papers (2022-12-20T16:13:59Z)
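The ThreatKG entry above describes a collect-extract-organize pipeline. The toy version below keeps that shape, with regexes standing in for the system's AI-based extractors; the report text and entity patterns are invented.

```python
import re

# Toy OSCTI pipeline: collect reports -> extract threat entities -> link co-occurrences.
# Regexes stand in for ThreatKG's AI-based extractors; report text is invented.
reports = [
    "APT-X exploited CVE-2021-0000 against utility-sector IoT gateways.",
    "CVE-2021-0000 is also abused by APT-Y for persistence.",
]

def extract_entities(text: str) -> list[str]:
    return re.findall(r"CVE-\d{4}-\d{4,}", text) + re.findall(r"APT-\w+", text)

graph: dict[str, set[str]] = {}
for report in reports:
    found = extract_entities(report)
    for name in found:
        # link every entity to the others co-occurring in the same report
        graph.setdefault(name, set()).update(n for n in found if n != name)

print(graph)  # e.g. {'CVE-2021-0000': {'APT-X', 'APT-Y'}, 'APT-X': {...}, 'APT-Y': {...}}
```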
- Holistic Adversarial Robustness of Deep Learning Models [91.34155889052786]
Adversarial robustness studies the worst-case performance of a machine learning model to ensure safety and reliability.
This paper provides a comprehensive overview of research topics and foundational principles of research methods for adversarial robustness of deep learning models.
arXiv Detail & Related papers (2022-02-15T05:30:27Z)
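As a concrete instance of the worst-case analysis in the final entry, the fast gradient sign method (FGSM), a standard one-step attack from the adversarial robustness literature, perturbs an input in the direction that most increases the loss. The sketch below applies it to a toy logistic-regression model (weights and input invented); for deep networks the gradient would come from autodiff.

```python
import numpy as np

# FGSM: x_adv = x + eps * sign(grad_x loss). Toy logistic-regression model;
# weights and input are invented for illustration.
def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([1.5, -2.0, 0.5])           # fixed model weights
x = np.array([0.2, -0.4, 0.1])           # clean input, true label y = 1
y = 1.0

p = sigmoid(w @ x)                        # model confidence for class 1
grad_x = (p - y) * w                      # gradient of cross-entropy loss w.r.t. x
eps = 0.25
x_adv = x + eps * np.sign(grad_x)         # one-step worst-case perturbation

print(f"clean confidence: {sigmoid(w @ x):.3f}")        # ~0.759
print(f"adversarial confidence: {sigmoid(w @ x_adv):.3f}")  # ~0.537, confidence drops
```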
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.