Agentic AI Microservice Framework for Deepfake and Document Fraud Detection in KYC Pipelines
- URL: http://arxiv.org/abs/2601.06241v1
- Date: Fri, 09 Jan 2026 17:01:40 GMT
- Title: Agentic AI Microservice Framework for Deepfake and Document Fraud Detection in KYC Pipelines
- Authors: Chandra Sekhar Kubam
- Abstract summary: Synthetic media, presentation attacks, and document forgeries have created significant vulnerabilities in Know Your Customer (KYC) workflows. This paper proposes an Agentic AI Microservice Framework that integrates vision models, liveness assessment, deepfake detection, OCR-based document forensics, multimodal identity linking, and a policy-driven risk engine.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid proliferation of synthetic media, presentation attacks, and document forgeries has created significant vulnerabilities in Know Your Customer (KYC) workflows across financial services, telecommunications, and digital-identity ecosystems. Traditional monolithic KYC systems lack the scalability and agility required to counter adaptive fraud. This paper proposes an Agentic AI Microservice Framework that integrates modular vision models, liveness assessment, deepfake detection, OCR-based document forensics, multimodal identity linking, and a policy-driven risk engine. The system leverages autonomous micro-agents for task decomposition, pipeline orchestration, dynamic retries, and human-in-the-loop escalation. Experimental evaluations demonstrate improved detection accuracy, reduced latency, and enhanced resilience against adversarial inputs. The framework offers a scalable blueprint for regulated industries seeking robust, real-time, and privacy-preserving KYC verification.
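The agent pipeline summarized in the abstract (task decomposition, pipeline orchestration, dynamic retries, human-in-the-loop escalation) can be sketched in a few lines. The agent names, scores, and the 0.8/0.5 thresholds below are illustrative assumptions, not the paper's actual implementation.

```python
class EscalateToHuman(Exception):
    """Raised when automated checks cannot reach a confident verdict."""

def run_with_retries(agent, payload, max_retries=2):
    """Invoke one micro-agent, retrying transient failures before escalating."""
    for attempt in range(max_retries + 1):
        try:
            return agent(payload)
        except RuntimeError:
            if attempt == max_retries:
                raise EscalateToHuman(f"{agent.__name__} failed {attempt + 1} times")

# Stub micro-agents standing in for the real vision / forensics services.
def liveness_agent(payload):
    return {**payload, "liveness": 0.97}

def deepfake_agent(payload):
    return {**payload, "deepfake_score": 0.03}

def document_forensics_agent(payload):
    return {**payload, "doc_tamper_score": 0.05}

def risk_engine(payload, threshold=0.5):
    """Policy-driven decision: accept, or escalate for manual review."""
    risk = max(payload["deepfake_score"], payload["doc_tamper_score"])
    if payload["liveness"] < 0.8 or risk > threshold:
        raise EscalateToHuman("risk above policy threshold")
    return {"decision": "accept", "risk": risk}

def kyc_pipeline(payload):
    """Orchestrate the micro-agents sequentially, enriching a shared payload."""
    for agent in (liveness_agent, deepfake_agent, document_forensics_agent):
        payload = run_with_retries(agent, payload)
    return risk_engine(payload)

result = kyc_pipeline({"applicant_id": "A-123"})
```

A production variant would replace the stubs with calls to model-serving endpoints and route `EscalateToHuman` into a manual review queue rather than raising.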
Related papers
- Aegis: Towards Governance, Integrity, and Security of AI Voice Agents [52.7512082818639]
We propose Aegis, a framework for the governance, integrity, and security of voice agents. We evaluate the framework through case studies in banking call centers, IT support, and logistics. We observe systematic differences across model families, with open-weight models exhibiting higher susceptibility.
arXiv Detail & Related papers (2026-02-07T05:51:36Z)
- Multi-Agent Collaborative Intrusion Detection for Low-Altitude Economy IoT: An LLM-Enhanced Agentic AI Framework [60.72591149679355]
The rapid expansion of low-altitude economy Internet of Things (LAE-IoT) networks has created unprecedented security challenges. Traditional intrusion detection systems fail to tackle the unique characteristics of aerial IoT environments. We introduce a large language model (LLM)-enabled agentic AI framework for enhancing intrusion detection in LAE-IoT networks.
arXiv Detail & Related papers (2026-01-25T12:47:25Z)
- SAFE-QAQ: End-to-End Slow-Thinking Audio-Text Fraud Detection via Reinforcement Learning [52.29460857893198]
Existing fraud detection methods rely on transcribed text, suffering from ASR errors and missing crucial acoustic cues like vocal tone and environmental context. We propose SAFE-QAQ, an end-to-end comprehensive framework for audio-based slow-thinking fraud detection. Our framework introduces dynamic risk assessment during live calls, enabling early detection and prevention of fraud.
arXiv Detail & Related papers (2026-01-04T06:09:07Z)
- Agentic AI for Autonomous, Explainable, and Real-Time Credit Risk Decision-Making [0.0]
This paper presents an Agentic AI framework, a system where AI agents view the world of dynamic credit independent of human observers. The research introduces a multi-agent system with reinforcement learning, natural language reasoning, explainable AI modules, and real-time data-ingestion pipelines. Findings indicate that decision speed, transparency, and responsiveness are better than in traditional credit scoring models.
arXiv Detail & Related papers (2025-12-22T23:30:38Z)
- Quantigence: A Multi-Agent AI Framework for Quantum Security Research [0.0]
Cryptographically Relevant Quantum Computers (CRQCs) pose a structural threat to the global digital economy. We present Quantigence, a theory-driven multi-agent AI framework for structured quantum-security analysis.
arXiv Detail & Related papers (2025-12-15T05:27:10Z)
- OS-Sentinel: Towards Safety-Enhanced Mobile GUI Agents via Hybrid Validation in Realistic Workflows [77.95511352806261]
Computer-using agents powered by Vision-Language Models (VLMs) have demonstrated human-like capabilities in operating digital environments like mobile platforms. We propose OS-Sentinel, a novel hybrid safety detection framework that combines a Formal Verifier for detecting explicit system-level violations with a Contextual Judge for assessing contextual risks and agent actions.
arXiv Detail & Related papers (2025-10-28T13:22:39Z)
- SafeEvalAgent: Toward Agentic and Self-Evolving Safety Evaluation of LLMs [37.82193156438782]
This paper introduces a new paradigm of agentic safety evaluation, reframing evaluation as a continuous and self-evolving process. We propose a novel multi-agent framework, SafeEvalAgent, which autonomously ingests unstructured policy documents to generate and perpetually evolve a comprehensive safety benchmark. Our experiments demonstrate the effectiveness of SafeEvalAgent, showing a consistent decline in model safety as the evaluation hardens.
arXiv Detail & Related papers (2025-09-30T11:20:41Z)
- Agentic AI for Financial Crime Compliance [0.0]
This paper presents the design and deployment of an agentic AI system for financial crime compliance (FCC) in digitally native financial platforms. The contribution includes a reference architecture, a real-world prototype, and insights into how Agentic AI can reconfigure under regulatory constraints.
arXiv Detail & Related papers (2025-09-16T14:53:51Z)
- Expert-in-the-Loop Systems with Cross-Domain and In-Domain Few-Shot Learning for Software Vulnerability Detection [38.083049237330826]
This study explores the use of Large Language Models (LLMs) in software vulnerability assessment by simulating the identification of Python code with known Common Weakness Enumerations (CWEs). Our results indicate that while zero-shot prompting performs poorly, few-shot prompting significantly enhances classification performance. Challenges such as model reliability, interpretability, and adversarial robustness remain critical areas for future research.
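The few-shot setup this entry describes can be sketched generically: a handful of labelled snippets are prepended to the query before it is sent to an LLM. The example snippets, CWE labels, and prompt wording below are illustrative assumptions, not the paper's actual prompts.

```python
# Labelled (snippet, CWE label) pairs used as in-context examples.
FEW_SHOT_EXAMPLES = [
    ("eval(user_input)", "CWE-94: code injection via eval on untrusted input"),
    ("open(f'/tmp/{name}')", "CWE-22: path traversal via unsanitized filename"),
]

def build_few_shot_prompt(code_snippet: str) -> str:
    """Assemble a classification prompt from labelled examples plus the query."""
    parts = ["Classify the CWE weakness in each Python snippet.\n"]
    for snippet, label in FEW_SHOT_EXAMPLES:
        parts.append(f"Code: {snippet}\nLabel: {label}\n")
    parts.append(f"Code: {code_snippet}\nLabel:")
    return "\n".join(parts)

prompt = build_few_shot_prompt("pickle.loads(blob)")
```

The resulting string would then be passed to whichever LLM API is under evaluation; zero-shot corresponds to an empty `FEW_SHOT_EXAMPLES` list.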
arXiv Detail & Related papers (2025-06-11T18:43:51Z)
- Detecting Financial Fraud with Hybrid Deep Learning: A Mix-of-Experts Approach to Sequential and Anomalous Patterns [0.0]
This study presents a hybrid architecture for credit card fraud detection that integrates a Mixture of Experts (MoE) framework with Recurrent Neural Networks (RNNs), Transformer encoders, and Autoencoders. The MoE framework dynamically assigns predictive responsibility among the experts, enabling adaptive and context-sensitive decision-making. The proposed hybrid system offers a scalable, modular, and regulation-aware approach to detecting increasingly sophisticated fraud patterns.
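The gating idea in this entry, a gate that assigns per-input responsibility over experts and mixes their scores, can be illustrated with a small NumPy sketch. The linear gate, logistic stand-in experts, and feature dimensions are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Illustrative stand-ins for the trained experts (RNN, Transformer, Autoencoder);
# each maps an 8-dim transaction feature vector to a fraud probability.
# Note the w=w default argument: it binds each row at definition time.
experts = [
    (lambda x, w=w: float(1 / (1 + np.exp(-x @ w))))
    for w in rng.normal(size=(3, 8))
]

gate_params = rng.normal(size=(8, 3))  # gating network weights (assumed linear)

def moe_fraud_score(x):
    """Gate assigns responsibility per input; output is the weighted expert mix."""
    gate = softmax(x @ gate_params)             # responsibility over 3 experts
    scores = np.array([e(x) for e in experts])  # each expert's fraud probability
    return float(gate @ scores)                 # convex combination in [0, 1]

x = rng.normal(size=8)
score = moe_fraud_score(x)
```

Because the gate's weights are a softmax and each expert emits a probability, the mixed score stays in [0, 1] by construction.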
arXiv Detail & Related papers (2025-04-01T20:47:18Z)
- Enforcing Cybersecurity Constraints for LLM-driven Robot Agents for Online Transactions [0.0]
The integration of Large Language Models (LLMs) into autonomous robotic agents for conducting online transactions poses significant cybersecurity challenges. This study aims to enforce robust cybersecurity constraints to mitigate the risks associated with data breaches, transaction fraud, and system manipulation.
arXiv Detail & Related papers (2025-03-17T01:01:10Z)
- ACRIC: Securing Legacy Communication Networks via Authenticated Cyclic Redundancy Integrity Check [98.34702864029796]
Recent security incidents in safety-critical industries exposed how the lack of proper message authentication enables attackers to inject malicious commands or alter system behavior. These shortcomings have prompted new regulations that emphasize the pressing need to strengthen cybersecurity. We introduce ACRIC, a message authentication solution to secure legacy industrial communications.
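The general idea of layering keyed authentication onto a legacy CRC field can be shown with a toy sketch: the integrity check is computed over the message mixed with a keyed digest, so a receiver without the shared key cannot forge it. This is a generic illustration only; ACRIC's actual construction is the one specified in the paper.

```python
import binascii
import hmac

SECRET_KEY = b"shared-secret"  # pre-shared key, assumed provisioned out of band

def authenticated_crc(message: bytes) -> int:
    """Toy keyed integrity check: CRC-32 over the message plus a keyed digest.

    Illustrative only; not ACRIC's actual construction.
    """
    keyed_digest = hmac.new(SECRET_KEY, message, "sha256").digest()
    return binascii.crc32(message + keyed_digest) & 0xFFFFFFFF

def verify(message: bytes, tag: int) -> bool:
    """Constant-time comparison of the recomputed tag against the received one."""
    return hmac.compare_digest(
        authenticated_crc(message).to_bytes(4, "big"), tag.to_bytes(4, "big")
    )

msg = b"OPEN VALVE 7"
tag = authenticated_crc(msg)
```

A legacy receiver that only checks a plain CRC would stay wire-compatible with the 32-bit field, which is the kind of backward compatibility the ACRIC entry emphasizes.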
arXiv Detail & Related papers (2024-11-21T18:26:05Z)
- CryptoFormalEval: Integrating LLMs and Formal Verification for Automated Cryptographic Protocol Vulnerability Detection [41.94295877935867]
We introduce a benchmark to assess the ability of Large Language Models to autonomously identify vulnerabilities in new cryptographic protocols.
We created a dataset of novel, flawed communication protocols and designed a method to automatically verify the vulnerabilities found by the AI agents.
arXiv Detail & Related papers (2024-11-20T14:16:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.