Agentic AI for Financial Crime Compliance
- URL: http://arxiv.org/abs/2509.13137v1
- Date: Tue, 16 Sep 2025 14:53:51 GMT
- Title: Agentic AI for Financial Crime Compliance
- Authors: Henrik Axelsen, Valdemar Licht, Jan Damsgaard
- Abstract summary: This paper presents the design and deployment of an agentic AI system for financial crime compliance (FCC) in digitally native financial platforms. The contribution includes a reference architecture, a real-world prototype, and insights into how agentic AI can reconfigure FCC workflows under regulatory constraints.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The cost and complexity of financial crime compliance (FCC) continue to rise, often without measurable improvements in effectiveness. While AI offers potential, most solutions remain opaque and poorly aligned with regulatory expectations. This paper presents the design and deployment of an agentic AI system for FCC in digitally native financial platforms. Developed through an Action Design Research (ADR) process with a fintech firm and regulatory stakeholders, the system automates onboarding, monitoring, investigation, and reporting, emphasizing explainability, traceability, and compliance-by-design. Using artifact-centric modeling, it assigns clearly bounded roles to autonomous agents and enables task-specific model routing and audit logging. The contribution includes a reference architecture, a real-world prototype, and insights into how Agentic AI can reconfigure FCC workflows under regulatory constraints. Our findings extend IS literature on AI-enabled compliance by demonstrating how automation, when embedded within accountable governance structures, can support transparency and institutional trust in high-stakes, regulated environments.
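The abstract's "task-specific model routing and audit logging" can be pictured with a minimal sketch. The task names, model identifiers, and log fields below are illustrative assumptions, not the authors' implementation:

```python
import hashlib
import json
import time

# Hypothetical routing table: each FCC task is bound to one model,
# mirroring the paper's "clearly bounded roles" for autonomous agents.
ROUTING_TABLE = {
    "onboarding": "kyc-screening-model",
    "monitoring": "transaction-anomaly-model",
    "investigation": "case-narrative-model",
    "reporting": "sar-drafting-model",
}

AUDIT_LOG = []

def route_task(task_type: str, payload: dict) -> str:
    """Pick the model bound to a task and append a traceable log entry."""
    model = ROUTING_TABLE[task_type]
    AUDIT_LOG.append({
        "ts": time.time(),
        "task": task_type,
        "model": model,
        # Digest of a canonical payload serialization, for traceability.
        "payload_digest": hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest(),
    })
    return model

model = route_task("monitoring", {"tx_id": "T-1001", "amount": 9800})
print(model)           # transaction-anomaly-model
print(len(AUDIT_LOG))  # 1
```

Keeping routing deterministic and logging every dispatch is one simple way to support the explainability and audit requirements the paper emphasizes.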
Related papers
- AI Deployment Authorisation: A Global Standard for Machine-Readable Governance of High-Risk Artificial Intelligence [0.0]
This paper introduces the AI Deployment Authorisation Score (ADAS), a machine-readable regulatory framework that evaluates AI systems. ADAS produces a cryptographically verifiable deployment certificate that regulators, insurers, and infrastructure operators can consume as a license to operate.
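As a rough illustration of a machine-readable, cryptographically verifiable deployment certificate, here is a minimal signed-JSON sketch; the certificate fields and the HMAC signing scheme are invented for demonstration, since the summary does not specify ADAS's actual format:

```python
import hashlib
import hmac
import json

# Demo-only signing key; a real regulator would use asymmetric signatures.
REGULATOR_KEY = b"demo-signing-key"

def issue_certificate(system_id: str, score: float) -> dict:
    """Sign a canonical serialization of the certificate body."""
    body = {"system_id": system_id, "adas_score": score}
    canonical = json.dumps(body, sort_keys=True).encode()
    body["signature"] = hmac.new(REGULATOR_KEY, canonical, hashlib.sha256).hexdigest()
    return body

def verify_certificate(cert: dict) -> bool:
    """Recompute the signature over everything except the signature field."""
    body = {k: v for k, v in cert.items() if k != "signature"}
    canonical = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(REGULATOR_KEY, canonical, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert["signature"])

cert = issue_certificate("fraud-model-v2", 0.87)
print(verify_certificate(cert))  # True
```

Canonical serialization (`sort_keys=True`) matters here: verification fails if signer and verifier serialize the same fields in different orders.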
arXiv Detail & Related papers (2026-01-11T18:14:20Z)
- Agentic AI Microservice Framework for Deepfake and Document Fraud Detection in KYC Pipelines [0.0]
Synthetic media, presentation attacks, and document forgeries have created significant vulnerabilities in Know Your Customer (KYC) pipelines. This paper proposes an Agentic AI Microservice Framework that integrates vision models, liveness assessment, deepfake detection, OCR-based document forensics, multimodal identity linking, and a policy-driven risk engine.
arXiv Detail & Related papers (2026-01-09T17:01:40Z)
- Adaptation of Agentic AI [162.63072848575695]
We unify the rapidly expanding research landscape into a systematic framework that spans both agent adaptations and tool adaptations. We demonstrate that this framework helps clarify the design space of adaptation strategies in agentic AI. We then review the representative approaches in each category, analyze their strengths and limitations, and highlight key open challenges and future opportunities.
arXiv Detail & Related papers (2025-12-18T08:38:51Z)
- AI Application in Anti-Money Laundering for Sustainable and Transparent Financial Systems [1.9426782472131299]
Money laundering and financial fraud remain major threats to global financial stability, costing trillions annually and challenging regulatory oversight. This paper reviews how artificial intelligence (AI) applications can modernize Anti-Money Laundering (AML) by improving detection accuracy, lowering false-positive rates, and reducing the operational burden of manual investigations.
arXiv Detail & Related papers (2025-12-06T01:37:24Z)
- AI Bill of Materials and Beyond: Systematizing Security Assurance through the AI Risk Scanning (AIRS) Framework [31.261980405052938]
Assurance for artificial intelligence (AI) systems remains fragmented across software supply-chain security, adversarial machine learning, and governance documentation. This paper introduces the AI Risk Scanning (AIRS) Framework, a threat-model-based, evidence-generating framework designed to operationalize AI assurance.
arXiv Detail & Related papers (2025-11-16T16:10:38Z)
- Enabling Regulatory Multi-Agent Collaboration: Architecture, Challenges, and Solutions [30.046299694187855]
Autonomous agents empowered by large language models (LLMs) are transforming both digital and physical environments by enabling adaptive, multi-agent collaboration. We propose a blockchain-enabled layered architecture for regulatory agent collaboration, comprising an agent layer, a blockchain data layer, and a regulatory application layer. Our approach establishes a systematic foundation for trustworthy, resilient, and scalable regulatory mechanisms in large-scale agent ecosystems.
arXiv Detail & Related papers (2025-09-11T07:46:00Z)
- Co-Investigator AI: The Rise of Agentic AI for Smarter, Trustworthy AML Compliance Narratives [2.7295959384567356]
Co-Investigator AI is an agentic framework optimized to produce Suspicious Activity Reports (SARs) significantly faster and with greater accuracy than traditional methods. We demonstrate its ability to streamline SAR drafting, align narratives with regulatory expectations, and enable compliance teams to focus on higher-order analytical work.
arXiv Detail & Related papers (2025-09-10T08:16:04Z)
- Safe and Certifiable AI Systems: Concepts, Challenges, and Lessons Learned [45.44933002008943]
This white paper presents the TÜV AUSTRIA Trusted AI framework, an end-to-end audit catalog and methodology for assessing and certifying machine learning systems. Building on three pillars - Secure Software Development, Functional Requirements, and Ethics & Data Privacy - it translates the high-level obligations of the EU AI Act into specific, testable criteria.
arXiv Detail & Related papers (2025-09-08T17:52:08Z)
- Governable AI: Provable Safety Under Extreme Threat Models [31.36879992618843]
We propose a Governable AI (GAI) framework that shifts from traditional internal constraints to externally enforced structural compliance. The GAI framework is composed of a simple yet reliable, fully deterministic, powerful, flexible, and general-purpose rule enforcement module (REM); governance rules; and a governable secure super-platform (GSSP) that offers end-to-end protection against compromise or subversion by AI.
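The core idea of a deterministic rule enforcement module can be sketched in a few lines; the rule schema and field names below are assumptions made up for illustration, not the GAI paper's actual design:

```python
# Externally enforced, fully deterministic gate: an agent's action is
# checked against fixed rules with no model in the decision loop.
RULES = [
    {"id": "R1", "field": "amount_eur", "limit": 10000},
    {"id": "R2", "field": "destination_risk", "limit": 0.7},
]

def enforce(action: dict) -> tuple[bool, list[str]]:
    """Return (allowed, violated_rule_ids); missing fields default to 0."""
    violations = [r["id"] for r in RULES if action.get(r["field"], 0) > r["limit"]]
    return (not violations, violations)

ok, why = enforce({"amount_eur": 25000, "destination_risk": 0.2})
print(ok, why)  # False ['R1']
```

Because the check is a pure function of the action and the rule set, the same input always yields the same verdict, which is what makes this kind of enforcement auditable.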
arXiv Detail & Related papers (2025-08-28T04:22:59Z)
- AI-Governed Agent Architecture for Web-Trustworthy Tokenization of Alternative Assets [3.0801485631077457]
Tokenization of alternative assets is transforming how non-traditional financial instruments are represented and traded on the web. This paper proposes an AI-governed agent architecture that integrates intelligent agents with blockchain to achieve web-trustworthy tokenization of alternative assets.
arXiv Detail & Related papers (2025-06-30T11:28:51Z)
- FinRobot: Generative Business Process AI Agents for Enterprise Resource Planning in Finance [6.494553545846438]
We present the first AI-native framework for ERP systems, introducing a novel architecture of Generative Business Process AI Agents (GBPAs). The proposed system integrates generative AI with business process modeling and multi-agent orchestration, enabling end-to-end automation. We show that GBPAs achieve up to 40% reduction in processing time, 94% drop in error rate, and improved regulatory compliance.
arXiv Detail & Related papers (2025-06-02T08:22:28Z)
- Conformal Calibration: Ensuring the Reliability of Black-Box AI in Wireless Systems [36.407171992845456]
The paper reviews conformal calibration, a general framework that moves beyond the state of the art by adopting computationally lightweight, advanced statistical tools. By weaving conformal calibration into the AI model lifecycle, network operators can establish confidence in black-box AI models as a dependable enabling technology for wireless systems.
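The summary describes conformal calibration only in general terms; as a concrete illustration, here is a minimal split conformal prediction sketch (the standard statistical tool this line of work builds on, not the paper's specific wireless recipe):

```python
import math

def conformal_interval(cal_preds, cal_truth, alpha, new_pred):
    """(1 - alpha) prediction interval around a black-box point prediction.

    Nonconformity scores are absolute residuals on a held-out
    calibration set; the interval half-width is their finite-sample
    corrected quantile.
    """
    scores = sorted(abs(p - y) for p, y in zip(cal_preds, cal_truth))
    n = len(scores)
    # ceil((n + 1)(1 - alpha))-th smallest score, clipped to the sample.
    k = math.ceil((n + 1) * (1 - alpha))
    q = scores[min(k, n) - 1]
    return (new_pred - q, new_pred + q)

# Toy calibration data: model predictions vs. observed values.
cal_preds = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0]
cal_truth = [1.1, 1.8, 3.3, 4.2, 4.9, 6.4, 6.8, 8.1, 9.5]
lo, hi = conformal_interval(cal_preds, cal_truth, alpha=0.2, new_pred=5.0)
# interval ≈ (4.6, 5.4)
```

The guarantee is distribution-free: under exchangeability, the true value falls in the interval with probability at least 1 - alpha, regardless of how good or bad the underlying black-box model is.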
arXiv Detail & Related papers (2025-04-12T19:05:00Z)
- Reclaiming "Open AI" -- AI Model Serving Can Be Open Access, Yet Monetizable and Loyal [39.63122342758896]
The rapid rise of AI has split model serving between open-weight distribution and opaque API-based approaches. This position paper introduces, rigorously formulates, and champions the Open-access, Monetizable, and Loyal (OML) paradigm for AI model serving.
arXiv Detail & Related papers (2024-11-01T18:46:03Z)
- Meta-Sealing: A Revolutionizing Integrity Assurance Protocol for Transparent, Tamper-Proof, and Trustworthy AI System [0.0]
This research introduces Meta-Sealing, a cryptographic framework that fundamentally changes integrity verification in AI systems.
The framework combines advanced cryptography with distributed verification, delivering tamper-evident guarantees that achieve both mathematical rigor and computational efficiency.
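Meta-Sealing's actual protocol is not given in this summary, but the tamper-evidence idea it names can be illustrated with a basic hash chain, where every record's seal depends on its predecessor's:

```python
import hashlib
import json

def seal(records):
    """Chain each record to its predecessor so any edit breaks later seals."""
    chain, prev = [], "0" * 64  # genesis seal
    for rec in records:
        digest = hashlib.sha256(
            (prev + json.dumps(rec, sort_keys=True)).encode()
        ).hexdigest()
        chain.append({"record": rec, "seal": digest})
        prev = digest
    return chain

def verify(chain):
    """Recompute every seal; any mismatch means tampering."""
    prev = "0" * 64
    for link in chain:
        expected = hashlib.sha256(
            (prev + json.dumps(link["record"], sort_keys=True)).encode()
        ).hexdigest()
        if expected != link["seal"]:
            return False
        prev = expected
    return True

chain = seal([{"step": "train"}, {"step": "eval"}])
print(verify(chain))  # True
chain[0]["record"]["step"] = "tampered"
print(verify(chain))  # False
```

This captures tamper-evidence only; the distributed verification the paper describes would additionally replicate the chain across independent verifiers.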
arXiv Detail & Related papers (2024-10-31T15:31:22Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.