The Danger Within: Insider Threat Modeling Using Business Process Models
- URL: http://arxiv.org/abs/2406.01135v2
- Date: Tue, 3 Sep 2024 06:51:56 GMT
- Title: The Danger Within: Insider Threat Modeling Using Business Process Models
- Authors: Jan von der Assen, Jasmin Hochuli, Thomas Grübl, Burkhard Stiller
- Abstract summary: This paper develops a novel insider threat knowledge base and a threat modeling application that leverages Business Process Model and Notation (BPMN).
The results indicate that, even without annotation, BPMN diagrams can be leveraged to automatically identify insider threats in an organization.
- Score: 0.259990372084357
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Threat modeling has been successfully applied to model technical threats within information systems. However, both theory and practice lack methods that focus on non-technical assets and their representation. Prompted by calls from industry practitioners, this paper explored how to model insider threats based on business process models. Hence, this study developed a novel insider threat knowledge base and a threat modeling application that leverages Business Process Model and Notation (BPMN). Finally, to understand how well the theoretical knowledge and its prototype translate into practice, the study conducted a real-world case study of an IT provider's business process and an experimental deployment for a real voting process. The results indicate that, even without annotation, BPMN diagrams can be leveraged to automatically identify insider threats in an organization.
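To make the idea concrete, here is a minimal sketch of how an unannotated BPMN 2.0 XML file could be scanned against an insider-threat knowledge base. The keyword-to-threat mapping and the file name are illustrative assumptions; the paper's actual knowledge base and matching logic are not reproduced here.

```python
# Minimal sketch: scan an unannotated BPMN 2.0 file for user tasks whose
# names suggest insider-threat exposure. The keyword-to-threat mapping is
# a hypothetical stand-in for the paper's knowledge base.
import xml.etree.ElementTree as ET

# BPMN 2.0 model namespace used by standard exports.
BPMN = "{http://www.omg.org/spec/BPMN/20100524/MODEL}"

# Hypothetical knowledge base: keyword in a task name -> candidate threats.
THREAT_KB = {
    "approve": ["unauthorized approval", "collusion"],
    "payment": ["payment fraud"],
    "export":  ["data exfiltration"],
    "vote":    ["ballot manipulation"],
}

def identify_insider_threats(bpmn_path):
    """Return (task name, threat) pairs for user tasks matching the KB."""
    root = ET.parse(bpmn_path).getroot()
    findings = []
    # userTask elements imply a human actor, the locus of insider threats.
    for task in root.iter(BPMN + "userTask"):
        name = (task.get("name") or "").lower()
        for keyword, threats in THREAT_KB.items():
            if keyword in name:
                findings += [(task.get("name"), t) for t in threats]
    return findings

if __name__ == "__main__":
    for task, threat in identify_insider_threats("process.bpmn"):
        print(f"{task}: potential {threat}")
```

Restricting the scan to userTask elements reflects that insider threats originate with the human actors in a process; lanes and pools could be matched analogously.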
Related papers
- AsIf: Asset Interface Analysis of Industrial Automation Devices [1.3216177247621483]
Industrial control systems are increasingly adopting IT solutions, including communication standards and protocols.
As these systems become more decentralized and interconnected, a critical need for enhanced security measures arises.
Threat modeling is traditionally performed in structured brainstorming sessions involving domain and security experts.
We propose a method for the analysis of assets in industrial systems, with special focus on physical threats.
arXiv Detail & Related papers (2024-09-26T07:19:15Z)
- Evaluating Copyright Takedown Methods for Language Models [100.38129820325497]
Language models (LMs) derive their capabilities from extensive training on diverse data, including potentially copyrighted material.
This paper introduces the first evaluation of the feasibility and side effects of copyright takedowns for LMs.
We examine several strategies, including adding system prompts, decoding-time filtering interventions, and unlearning approaches.
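As a rough illustration of one mentioned strategy, decoding-time filtering, the toy sketch below bans any next token that would complete an n-gram found in a protected reference text. The reference text, vocabulary, and scores are made up for illustration and are not the systems evaluated in the paper.

```python
# Toy sketch of decoding-time copyright filtering: before sampling each
# token, ban any token that would complete an n-gram appearing in a
# protected reference text. All data here is illustrative.
import math

def ngram_blocklist(tokens: list[str], n: int) -> set[tuple[str, ...]]:
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def filter_logits(prefix: list[str], logits: dict[str, float],
                  blocked: set[tuple[str, ...]], n: int) -> dict[str, float]:
    """Set the score of any token completing a blocked n-gram to -inf."""
    context = tuple(prefix[-(n - 1):]) if n > 1 else ()
    return {tok: (-math.inf if context + (tok,) in blocked else score)
            for tok, score in logits.items()}

# Usage: block verbatim reproduction of protected 3-grams.
protected = "the quick brown fox jumps".split()
blocked = ngram_blocklist(protected, n=3)
prefix = ["the", "quick"]
logits = {"brown": 2.0, "red": 1.0}                 # toy next-token scores
print(filter_logits(prefix, logits, blocked, n=3))  # "brown" is banned
```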
arXiv Detail & Related papers (2024-06-26T18:09:46Z)
- Science based AI model certification for new operational environments with application in traffic state estimation [1.2186759689780324]
The expanding role of Artificial Intelligence (AI) in diverse engineering domains highlights the challenges associated with deploying AI models in new operational environments.
This paper proposes a science-based certification methodology to assess the viability of employing pre-trained data-driven models in new operational environments.
arXiv Detail & Related papers (2024-05-13T16:28:00Z)
- Asset-centric Threat Modeling for AI-based Systems [7.696807063718328]
This paper presents ThreatFinderAI, an approach and tool to model AI-related assets, threats, and countermeasures, and to quantify residual risks.
To evaluate the practicality of the approach, participants were tasked to recreate a threat model developed by cybersecurity experts of an AI-based healthcare platform.
Overall, participants perceived the solution as usable, and it effectively supports threat identification and risk discussion.
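The summary mentions quantifying residual risks; a common formulation, sketched below under assumed semantics rather than ThreatFinderAI's actual model, scores each threat as likelihood times impact and discounts it by the effectiveness of applied countermeasures.

```python
# Hypothetical residual-risk model: risk = likelihood * impact, reduced
# multiplicatively by each countermeasure's effectiveness in [0, 1].
# The scoring semantics are assumed, not taken from ThreatFinderAI.
from dataclasses import dataclass, field

@dataclass
class Threat:
    name: str
    likelihood: float                    # probability in [0, 1]
    impact: float                        # loss magnitude, e.g. in kUSD
    countermeasures: list[float] = field(default_factory=list)

    def residual_risk(self) -> float:
        risk = self.likelihood * self.impact
        for effectiveness in self.countermeasures:
            risk *= (1.0 - effectiveness)
        return risk

# Usage: leakage of an AI asset, mitigated by access control (60%
# effective) and audit logging (30% effective).
t = Threat("training-data leakage", likelihood=0.4, impact=100.0,
           countermeasures=[0.6, 0.3])
print(f"residual risk: {t.residual_risk():.1f} kUSD")   # 40.0 -> 11.2
```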
arXiv Detail & Related papers (2024-03-11T08:40:01Z)
- Combatting Human Trafficking in the Cyberspace: A Natural Language Processing-Based Methodology to Analyze the Language in Online Advertisements [55.2480439325792]
This project tackles the pressing issue of human trafficking in online consumer-to-consumer (C2C) marketplaces through advanced Natural Language Processing (NLP) techniques.
We introduce a novel methodology for generating pseudo-labeled datasets with minimal supervision, serving as a rich resource for training state-of-the-art NLP models.
A key contribution is the implementation of an interpretability framework using Integrated Gradients, providing explainable insights crucial for law enforcement.
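Integrated Gradients attributes a prediction F(x) to input features by accumulating gradients along the straight path from a baseline x' to x: IG_i(x) = (x_i - x'_i) * ∫_0^1 ∂F(x' + α(x - x'))/∂x_i dα. The numpy sketch below approximates the integral with a midpoint Riemann sum on a toy quadratic model, which stands in for the project's NLP classifier.

```python
# Riemann-sum approximation of Integrated Gradients for a toy model.
# The quadratic model f and its analytic gradient are placeholders for
# the project's actual NLP classifier, which is not reproduced here.
import numpy as np

def f(x: np.ndarray) -> float:
    return float(x[0] ** 2 + 3.0 * x[1])          # toy scalar model

def grad_f(x: np.ndarray) -> np.ndarray:
    return np.array([2.0 * x[0], 3.0])            # analytic gradient of f

def integrated_gradients(x, baseline, steps: int = 100) -> np.ndarray:
    """IG_i = (x_i - x'_i) * mean over the path of dF/dx_i."""
    alphas = (np.arange(steps) + 0.5) / steps     # midpoints in (0, 1)
    grads = np.stack([grad_f(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

x = np.array([2.0, 1.0])
baseline = np.zeros(2)
ig = integrated_gradients(x, baseline)
print(ig, "sums to", ig.sum(), "= f(x) - f(baseline) =", f(x) - f(baseline))
```

The printed sum illustrates the completeness axiom: the attributions add up to f(x) - f(baseline), which is what makes the method attractive for explaining individual classifications to law enforcement.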
arXiv Detail & Related papers (2023-11-22T02:45:01Z)
- Assessing Privacy Risks in Language Models: A Case Study on Summarization Tasks [65.21536453075275]
We focus on the summarization task and investigate the membership inference (MI) attack.
We exploit text similarity and the model's resistance to document modifications as potential MI signals.
We discuss several safeguards for training summarization models against MI attacks, as well as the inherent trade-off between privacy and utility.
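A minimal sketch of a similarity-based membership signal follows: if the model's summary of a document is unusually close to the reference summary, the document is flagged as a likely training member. The unigram-F1 measure and the threshold are illustrative stand-ins for the paper's calibrated attack.

```python
# Toy membership-inference signal for summarization: if the model's
# summary is nearly identical to the reference summary, guess that the
# document was in the training set. Unigram F1 stands in for the paper's
# similarity measures; the threshold here is arbitrary.
from collections import Counter

def unigram_f1(candidate: str, reference: str) -> float:
    c = Counter(candidate.lower().split())
    r = Counter(reference.lower().split())
    overlap = sum((c & r).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(c.values())
    recall = overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)

def predict_membership(model_summary: str, reference_summary: str,
                       threshold: float = 0.8) -> bool:
    """Flag as 'member' when output and reference nearly coincide."""
    return unigram_f1(model_summary, reference_summary) >= threshold

print(predict_membership("the board approved the merger",
                         "the board approved the merger today"))  # True
```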
arXiv Detail & Related papers (2023-10-20T05:44:39Z)
- Adapting Large Language Models for Content Moderation: Pitfalls in Data Engineering and Supervised Fine-tuning [79.53130089003986]
Large Language Models (LLMs) have become a feasible solution for handling tasks in various domains.
In this paper, we show how to fine-tune an LLM that can be privately deployed for content moderation.
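As a hedged sketch of such supervised fine-tuning, the snippet below frames moderation as binary sequence classification with the Hugging Face Trainer; the base model, label scheme, and toy data are placeholder assumptions, not the paper's setup.

```python
# Sketch of supervised fine-tuning for content moderation, framed as
# binary sequence classification. Base model, labels, and data are
# placeholders; the paper's actual pipeline is not reproduced here.
import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

class ModerationDataset(Dataset):
    def __init__(self, texts, labels, tokenizer):
        self.enc = tokenizer(texts, truncation=True, padding=True,
                             return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = self.labels[i]
        return item

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)    # 0 = allowed, 1 = flagged

train = ModerationDataset(["have a nice day", "buy illegal goods here"],
                          [0, 1], tokenizer)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="moderation-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=train,
)
trainer.train()   # the fine-tuned model can then be deployed privately
```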
arXiv Detail & Related papers (2023-10-05T09:09:44Z)
- Self-Destructing Models: Increasing the Costs of Harmful Dual Uses of Foundation Models [103.71308117592963]
We present an algorithm for training self-destructing models leveraging techniques from meta-learning and adversarial learning.
In a small-scale experiment, we show MLAC can largely prevent a BERT-style model from being re-purposed to perform gender identification.
arXiv Detail & Related papers (2022-11-27T21:43:45Z)
- CANIFE: Crafting Canaries for Empirical Privacy Measurement in Federated Learning [77.27443885999404]
Federated Learning (FL) is a setting for training machine learning models in distributed environments.
We propose a novel method, CANIFE, that uses carefully crafted samples by a strong adversary to evaluate the empirical privacy of a training round.
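The measurement step can be illustrated by converting the canary-detection attack's true- and false-positive rates into an empirical epsilon lower bound via the differential-privacy hypothesis-testing inequality TPR ≤ e^ε · FPR + δ. The sketch below performs that conversion; the attack rates are made-up inputs, not CANIFE's measured results.

```python
# Convert a canary-detection attack's error rates into an empirical
# epsilon lower bound using the DP hypothesis-testing inequality
# TPR <= exp(eps) * FPR + delta. The rates below are made-up inputs.
import math

def empirical_epsilon(tpr: float, fpr: float, delta: float = 1e-5) -> float:
    """Lower-bound epsilon implied by an attack with the given rates."""
    if tpr <= delta:
        return 0.0                       # attack too weak to bound epsilon
    if fpr <= 0.0:
        return float("inf")              # perfect attack: unbounded epsilon
    return max(0.0, math.log((tpr - delta) / fpr))

# Usage: an attack detecting the canary 90% of the time with a 5%
# false-positive rate implies epsilon of at least ~2.89 for the round.
print(f"empirical epsilon >= {empirical_epsilon(0.90, 0.05):.2f}")
```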
arXiv Detail & Related papers (2022-10-06T13:30:16Z)
- AI Trust in business processes: The need for process-aware explanations [11.161025675113208]
Business process management (BPM) literature is rich in machine learning solutions.
Deep learning models, including those from the NLP domain, have been applied to process predictions.
We argue that a major reason for the lack of adoption of AI models in BPM is that business users are risk-averse and do not implicitly trust AI models.
arXiv Detail & Related papers (2020-01-21T13:51:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.