FIST: A Structured Threat Modeling Framework for Fraud Incidents
- URL: http://arxiv.org/abs/2506.05740v1
- Date: Fri, 06 Jun 2025 04:54:49 GMT
- Title: FIST: A Structured Threat Modeling Framework for Fraud Incidents
- Authors: Yu-Chen Dai, Lu-An Chen, Sy-Jye Her, Yu-Xian Jiang
- Abstract summary: FIST is an innovative structured threat modeling methodology specifically designed for fraud scenarios. It incorporates social engineering tactics, stage-based behavioral decomposition, and detailed attack technique mapping into a reusable knowledge base. FIST aims to enhance the efficiency of fraud detection and the standardization of threat intelligence sharing.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fraudulent activities are rapidly evolving, employing increasingly diverse and sophisticated methods that pose serious threats to individuals, organizations, and society. This paper proposes the FIST Framework (Fraud Incident Structured Threat Framework), an innovative structured threat modeling methodology specifically designed for fraud scenarios. Inspired by MITRE ATT&CK and DISARM, FIST systematically incorporates social engineering tactics, stage-based behavioral decomposition, and detailed attack technique mapping into a reusable knowledge base. FIST aims to enhance the efficiency of fraud detection and the standardization of threat intelligence sharing, promoting collaboration and a unified language across organizations and sectors. The framework integrates interdisciplinary insights from cybersecurity, criminology, and behavioral science, addressing both technical vectors and psychological manipulation mechanisms in fraud. This approach enables fine-grained analysis of fraud incidents, supporting automated detection, quantitative risk assessment, and standardized incident reporting. The effectiveness of the framework is further validated through real-world case studies, demonstrating its value in bridging academic research and practical applications, and laying the foundation for an intelligence-driven anti-fraud ecosystem. To the best of our knowledge, FIST is the first systematic, open-source fraud threat modeling framework that unifies both technical and psychological aspects, and is made freely available to foster collaboration between academia and industry.
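The abstract describes decomposing fraud incidents into behavioral stages and mapping each stage to attack techniques in a reusable knowledge base, in the style of MITRE ATT&CK. The paper's actual schema is not reproduced here, so the following is a minimal illustrative sketch in Python; all class names, field names, and technique identifiers (e.g. "T001") are hypothetical, not taken from FIST itself:

```python
from dataclasses import dataclass, field

@dataclass
class FraudTechnique:
    """One mapped attack technique within a fraud stage (hypothetical schema)."""
    technique_id: str          # knowledge-base identifier, e.g. "T001"
    name: str
    social_engineering: bool   # True if it relies on psychological manipulation

@dataclass
class FraudStage:
    """One behavioral stage of a decomposed fraud incident."""
    stage: str                 # e.g. "contact", "trust-building", "extraction"
    techniques: list[FraudTechnique] = field(default_factory=list)

@dataclass
class FraudIncident:
    """A structured, shareable fraud incident record."""
    incident_id: str
    description: str
    stages: list[FraudStage] = field(default_factory=list)

    def technique_ids(self) -> list[str]:
        """Flatten all mapped technique IDs for indexing into a knowledge base."""
        return [t.technique_id for s in self.stages for t in s.techniques]

# Example: a romance scam decomposed into two stages
incident = FraudIncident(
    incident_id="INC-0001",
    description="Romance scam escalating to a fake investment payout request",
    stages=[
        FraudStage("contact",
                   [FraudTechnique("T001", "Dating-app approach", True)]),
        FraudStage("extraction",
                   [FraudTechnique("T017", "Fake investment platform", False)]),
    ],
)
print(incident.technique_ids())  # ['T001', 'T017']
```

Structuring incidents this way is what enables the automated detection and standardized reporting the abstract claims: flattened technique IDs can be matched across organizations that share the same knowledge base.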
Related papers
- Beyond Vulnerabilities: A Survey of Adversarial Attacks as Both Threats and Defenses in Computer Vision Systems [5.787505062263962]
Adversarial attacks against computer vision systems have emerged as a critical research area that challenges the fundamental assumptions about neural network robustness and security. This comprehensive survey examines the evolving landscape of adversarial techniques, revealing their dual nature as both sophisticated security threats and valuable defensive tools.
arXiv Detail & Related papers (2025-08-03T17:02:05Z) - Report on NSF Workshop on Science of Safe AI [75.96202715567088]
New advances in machine learning are leading to new opportunities to develop technology-based solutions to societal problems. To fulfill the promise of AI, we must address how to develop AI-based systems that are accurate and performant but also safe and trustworthy. This report is the result of the discussions in the working groups that addressed different aspects of safety at the workshop.
arXiv Detail & Related papers (2025-06-24T18:55:29Z) - Organizational Adaptation to Generative AI in Cybersecurity: A Systematic Review [0.0]
This research employs systematic document analysis and comparative case study methodology to examine how cybersecurity organizations adapt their threat modeling frameworks and operational processes to address generative artificial intelligence integration. The research identifies three primary adaptation patterns: Large Language Model integration for security applications, GenAI frameworks for risk detection and response automation, and AI/ML integration for threat hunting. Organizations with mature security infrastructures, particularly in finance and critical infrastructure sectors, demonstrate higher readiness through structured governance approaches, dedicated AI teams, and robust incident response processes.
arXiv Detail & Related papers (2025-05-31T18:16:11Z) - Bringing Order Amidst Chaos: On the Role of Artificial Intelligence in Secure Software Engineering [0.0]
The ever-evolving technological landscape offers both opportunities and threats, creating a dynamic space where chaos and order compete. Secure software engineering (SSE) must continuously address vulnerabilities that endanger software systems. This thesis seeks to bring order to the chaos in SSE by addressing domain-specific differences that impact AI accuracy.
arXiv Detail & Related papers (2025-01-09T11:38:58Z) - Cyber-Physical Security Vulnerabilities Identification and Classification in Smart Manufacturing -- A Defense-in-Depth Driven Framework and Taxonomy [0.0]
Existing solutions fall short in addressing the complex, domain-specific vulnerabilities of manufacturing environments. This paper redefines vulnerabilities in the manufacturing context by introducing a novel characterization based on the duality between vulnerabilities and defenses. We identify and classify vulnerabilities across the manufacturing cyberspace, human element, post-production inspection systems, production process monitoring, and organizational policies and procedures.
arXiv Detail & Related papers (2024-12-29T11:41:06Z) - Comprehensive Survey on Adversarial Examples in Cybersecurity: Impacts, Challenges, and Mitigation Strategies [4.606106768645647]
Adversarial examples (AE) pose a critical challenge to the robustness and reliability of deep learning-based systems. This paper provides a comprehensive review of the impact of AE attacks on key cybersecurity applications. We explore recent advancements in defense mechanisms, including gradient masking, adversarial training, and detection techniques.
arXiv Detail & Related papers (2024-12-16T01:54:07Z) - SoK: Unifying Cybersecurity and Cybersafety of Multimodal Foundation Models with an Information Theory Approach [58.93030774141753]
Multimodal foundation models (MFMs) represent a significant advancement in artificial intelligence.
This paper conceptualizes cybersafety and cybersecurity in the context of multimodal learning.
We present a comprehensive Systematization of Knowledge (SoK) to unify these concepts in MFMs, identifying key threats.
arXiv Detail & Related papers (2024-11-17T23:06:20Z) - AsIf: Asset Interface Analysis of Industrial Automation Devices [1.3216177247621483]
Industrial control systems are increasingly adopting IT solutions, including communication standards and protocols.
As these systems become more decentralized and interconnected, a critical need for enhanced security measures arises.
Threat modeling is traditionally performed in structured brainstorming sessions involving domain and security experts.
We propose a method for the analysis of assets in industrial systems, with special focus on physical threats.
arXiv Detail & Related papers (2024-09-26T07:19:15Z) - Designing an attack-defense game: how to increase robustness of financial transaction models via a competition [69.08339915577206]
Given the escalating risks of malicious attacks in the finance sector, understanding adversarial strategies and robust defense mechanisms for machine learning models is critical.
We aim to investigate the current state and dynamics of adversarial attacks and defenses for neural network models that use sequential financial data as the input.
We have designed a competition that allows realistic and detailed investigation of problems in modern financial transaction data.
The participants compete directly against each other, so possible attacks and defenses are examined in close-to-real-life conditions.
arXiv Detail & Related papers (2023-08-22T12:53:09Z) - ThreatKG: An AI-Powered System for Automated Open-Source Cyber Threat Intelligence Gathering and Management [65.0114141380651]
ThreatKG is an automated system for OSCTI gathering and management.
It efficiently collects a large number of OSCTI reports from multiple sources.
It uses specialized AI-based techniques to extract high-quality knowledge about various threat entities.
arXiv Detail & Related papers (2022-12-20T16:13:59Z) - Towards Automated Classification of Attackers' TTPs by combining NLP with ML Techniques [77.34726150561087]
We evaluate and compare different Natural Language Processing (NLP) and machine learning techniques used for security information extraction in research.
Based on our investigations we propose a data processing pipeline that automatically classifies unstructured text according to attackers' tactics and techniques.
arXiv Detail & Related papers (2022-07-18T09:59:21Z) - A System for Automated Open-Source Threat Intelligence Gathering and Management [53.65687495231605]
SecurityKG is a system for automated OSCTI gathering and management.
It uses a combination of AI and NLP techniques to extract high-fidelity knowledge about threat behaviors.
arXiv Detail & Related papers (2021-01-19T18:31:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.