Threat Modelling in Virtual Assistant Hub Devices Compared With User
Risk Perceptions (2021)
- URL: http://arxiv.org/abs/2301.12772v1
- Date: Mon, 30 Jan 2023 10:36:04 GMT
- Title: Threat Modelling in Virtual Assistant Hub Devices Compared With User
Risk Perceptions (2021)
- Authors: Beckett LeClair
- Abstract summary: This study explores different threat modelling methodologies as applied to the security of virtual assistant hubs in the home.
Five approaches (STRIDE, CVSS, Attack Trees, LINDDUN GO, and Quantitative TMM) were compared, as these were determined to be either the most prominent or the most applicable to an IoT context.
Key findings suggest that a combination of STRIDE and LINDDUN GO is optimal for elucidating threats under the pressure of a tight industry deadline cycle.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Despite increasing uptake, there are still many concerns as to the security
of virtual assistant hubs (such as Google Nest and Amazon Alexa) in the home.
Consumer fears have been somewhat exacerbated by widely-publicised privacy
breaches, and the continued prevalence of high-profile attacks targeting IoT
networks. Literature suggests a considerable knowledge gap between consumer
understanding and the actual threat environment; furthermore, little work has
been done to compare which threat modelling approach(es) would be most
appropriate for these devices, in order to elucidate the threats which can then
be communicated to consumers. There is therefore an opportunity to explore
different threat modelling methodologies as applied to this context, and then
use the findings to prototype a software tool aimed at educating consumers in an
accessible manner. Five approaches (STRIDE, CVSS, Attack Trees (a.k.a. Threat
Trees), LINDDUN GO, and Quantitative TMM) were compared, as these were
determined to be either the most prominent or the most applicable to an IoT
context. The key findings suggest that a combination of STRIDE and LINDDUN GO
is optimal for elucidating threats under the pressures of a tight industry
deadline cycle (with potential for elements of CVSS depending on time
constraints), and that the trialled software prototype was effective at
engaging consumers and educating about device security. Such findings are
useful for IoT device manufacturers seeking to optimally model threats, or
other stakeholders seeking ways to increase information security knowledge
among consumers.
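The methodologies compared in the abstract lend themselves to small mechanical sketches. The snippet below is an illustration only, not the authors' tooling: it tags attack-tree leaves with STRIDE categories and computes the cheapest attack path for a hypothetical virtual assistant hub; all node names and effort costs are invented for the example.

```python
from dataclasses import dataclass, field

# STRIDE threat categories (Microsoft's threat modelling mnemonic).
STRIDE = {
    "S": "Spoofing",
    "T": "Tampering",
    "R": "Repudiation",
    "I": "Information disclosure",
    "D": "Denial of service",
    "E": "Elevation of privilege",
}

@dataclass
class Node:
    """One node of an attack (a.k.a. threat) tree."""
    goal: str
    gate: str = "OR"        # "OR": any child suffices; "AND": all children required
    cost: float = 0.0       # attacker effort, meaningful only for leaves
    children: list = field(default_factory=list)

def min_attack_cost(node: Node) -> float:
    """Cheapest attacker effort to achieve this node's goal."""
    if not node.children:
        return node.cost
    costs = [min_attack_cost(c) for c in node.children]
    return min(costs) if node.gate == "OR" else sum(costs)

# Hypothetical tree for a virtual assistant hub.
root = Node("Compromise assistant hub", "OR", children=[
    Node("Eavesdrop voice traffic (STRIDE: I)", cost=4.0),
    Node("Hijack linked account", "AND", children=[
        Node("Phish credentials (STRIDE: S)", cost=2.0),
        Node("Bypass second factor (STRIDE: E)", cost=5.0),
    ]),
])

print(min_attack_cost(root))  # cheapest path is eavesdropping: 4.0
```

The min/sum recursion over OR/AND gates is the standard way attack trees are evaluated quantitatively; swapping the leaf metric (cost, probability, skill level) changes the analysis without changing the structure.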
Related papers
- Transferable Adversarial Attacks on SAM and Its Downstream Models (2024-10-26T15:04:04Z)
  This paper explores the feasibility of adversarial attacks on various downstream models fine-tuned from the Segment Anything Model (SAM).
  To enhance the effectiveness of adversarial attacks against models fine-tuned on unknown datasets, we propose a universal meta-initialization (UMI) algorithm.
- Countering Autonomous Cyber Threats (2024-10-23T22:46:44Z)
  Foundation Models present dual-use concerns broadly and within the cyber domain specifically.
  Recent research has shown the potential for these advanced models to inform or independently execute offensive cyberspace operations.
  This work evaluates several state-of-the-art FMs on their ability to compromise machines in an isolated network and investigates defensive mechanisms to defeat such AI-powered attacks.
- Securing the Future: Proactive Threat Hunting for Sustainable IoT Ecosystems (2024-06-21T00:44:17Z)
  This paper explores proactive threat hunting as a pivotal strategy for enhancing the security and sustainability of IoT systems.
  By improving the security posture of IoT devices, this approach extends their operational lifespan and reduces environmental impact.
- Distributed Threat Intelligence at the Edge Devices: A Large Language Model-Driven Approach (2024-05-14T16:40:37Z)
  Decentralized threat intelligence on edge devices is a promising paradigm for enhancing cybersecurity on resource-constrained hardware.
  The approach deploys lightweight machine learning models directly onto edge devices to analyse local data streams, such as network traffic and system logs, in real time.
  The proposed framework improves edge computing security through better cyber threat detection and mitigation while isolating the edge devices from the network.
- Generative AI in Cybersecurity (2024-05-02T19:03:11Z)
  Generative Artificial Intelligence (GAI) has been pivotal in reshaping data analysis, pattern recognition, and decision-making processes.
  As GAI rapidly progresses, it outstrips the current pace of cybersecurity protocols and regulatory frameworks.
  The study highlights the critical need for organizations to proactively develop more sophisticated defensive strategies to counter the use of GAI in malware creation.
- Asset-centric Threat Modeling for AI-based Systems (2024-03-11T08:40:01Z)
  This paper presents ThreatFinderAI, an approach and tool to model AI-related assets, threats, and countermeasures, and to quantify residual risks.
  To evaluate the practicality of the approach, participants were tasked with recreating a threat model developed by cybersecurity experts for an AI-based healthcare platform.
  Overall, the solution's usability was well perceived, and it effectively supports threat identification and risk discussion.
- Effective Intrusion Detection in Heterogeneous Internet-of-Things Networks via Ensemble Knowledge Distillation-based Federated Learning (2024-01-22T14:16:37Z)
  We introduce Federated Learning (FL) to collaboratively train a decentralized shared model for Intrusion Detection Systems (IDS).
  FLEKD enables a more flexible aggregation method than conventional model fusion techniques.
  Experimental results show that the proposed approach outperforms local training and traditional FL in both speed and performance.
- On the Security Risks of Knowledge Graph Reasoning (2023-05-03T18:47:42Z)
  We systematize the security threats to KGR according to the adversary's objectives, knowledge, and attack vectors.
  We present ROAR, a new class of attacks that instantiates a variety of such threats.
  We explore potential countermeasures against ROAR, including filtering of potentially poisoning knowledge and training with adversarially augmented queries.
- Semantic Information Marketing in The Metaverse: A Learning-Based Contract Theory Framework (2023-02-22T15:52:37Z)
  We address the problem of designing incentive mechanisms by which a virtual service provider (VSP) hires sensing IoT devices to sell their sensing data.
  Due to limited bandwidth, we propose using semantic extraction algorithms to reduce the volume of data delivered by the sensing IoT devices.
  We propose a novel iterative contract design and use a new variant of multi-agent reinforcement learning (MARL) to solve the modelled multi-dimensional contract problem.
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety (2021-04-29T09:54:54Z)
  The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
  In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
  Our paper addresses both machine learning experts and safety engineers.
- A System for Automated Open-Source Threat Intelligence Gathering and Management (2021-01-19T18:31:35Z)
  SecurityKG is a system for automated OSCTI gathering and management.
  It uses a combination of AI and NLP techniques to extract high-fidelity knowledge about threat behaviors.
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.