Domain Knowledge-Enhanced LLMs for Fraud and Concept Drift Detection
- URL: http://arxiv.org/abs/2506.21443v1
- Date: Thu, 26 Jun 2025 16:29:45 GMT
- Title: Domain Knowledge-Enhanced LLMs for Fraud and Concept Drift Detection
- Authors: Ali Şenol, Garima Agrawal, Huan Liu
- Abstract summary: Concept Drift (CD), i.e., semantic or topical shifts, alters the context or intent of interactions over time. These shifts can obscure malicious intent or mimic normal dialogue, making accurate classification challenging. We present a framework that integrates pretrained Large Language Models with structured, task-specific insights to perform fraud and concept drift detection.
- Score: 11.917779863156097
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Detecting deceptive conversations on dynamic platforms is increasingly difficult due to evolving language patterns and Concept Drift (CD), i.e., semantic or topical shifts that alter the context or intent of interactions over time. These shifts can obscure malicious intent or mimic normal dialogue, making accurate classification challenging. While Large Language Models (LLMs) show strong performance in natural language tasks, they often struggle with contextual ambiguity and hallucinations in risk-sensitive scenarios. To address these challenges, we present a Domain Knowledge (DK)-Enhanced LLM framework that integrates pretrained LLMs with structured, task-specific insights to perform fraud and concept drift detection. The proposed architecture consists of three main components: (1) a DK-LLM module to detect fake or deceptive conversations; (2) a drift detection unit (OCDD) to determine whether a semantic shift has occurred; and (3) a second DK-LLM module to classify the drift as either benign or fraudulent. We first validate the value of domain knowledge using a fake review dataset and then apply our full framework to SEConvo, a multi-turn dialogue dataset that includes various types of fraud and spam attacks. Results show that our system detects fake conversations with high accuracy and effectively classifies the nature of drift. Guided by structured prompts, the LLaMA-based implementation achieves 98% classification accuracy. Comparative studies against zero-shot baselines demonstrate that incorporating domain knowledge and drift awareness significantly improves performance, interpretability, and robustness in high-stakes NLP applications.
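The three-component architecture in the abstract (DK-LLM detection, OCDD drift check, DK-LLM drift classification) can be sketched as a minimal pipeline. This is a hedged illustration only: all names (`dk_llm_detect`, `ocdd_detects_drift`, `dk_llm_classify_drift`, `Verdict`) are hypothetical stand-ins, and the LLM and drift-detector calls are replaced by toy heuristics, since the paper's actual prompts and OCDD internals are not reproduced here.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Verdict:
    deceptive: bool
    drift: bool = False
    drift_label: Optional[str] = None  # "benign" or "fraudulent"


def dk_llm_detect(conversation: List[str], domain_knowledge: str) -> bool:
    """Stage 1: a DK-prompted LLM flags fake/deceptive conversations.
    A real system would send `prompt` to an LLM; here a keyword stands in."""
    prompt = domain_knowledge + "\nClassify:\n" + "\n".join(conversation)
    return "refund" in prompt.lower()  # toy heuristic, not the paper's model


def ocdd_detects_drift(conversation: List[str]) -> bool:
    """Stage 2: one-class drift detection over the dialogue.
    Stubbed with a crude turn-length-diversity proxy."""
    return len({len(turn.split()) for turn in conversation}) > 3


def dk_llm_classify_drift(conversation: List[str], domain_knowledge: str) -> str:
    """Stage 3: a second DK-LLM call labels a detected drift."""
    return "fraudulent" if dk_llm_detect(conversation, domain_knowledge) else "benign"


def run_pipeline(conversation: List[str], domain_knowledge: str) -> Verdict:
    """Chain the three components as described in the abstract."""
    deceptive = dk_llm_detect(conversation, domain_knowledge)
    if not ocdd_detects_drift(conversation):
        return Verdict(deceptive=deceptive)
    label = dk_llm_classify_drift(conversation, domain_knowledge)
    return Verdict(deceptive=deceptive, drift=True, drift_label=label)
```

The point of the sketch is the control flow: drift classification (stage 3) only runs when the drift detector (stage 2) fires, which is how the abstract separates "has a shift occurred" from "is the shift benign or fraudulent".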
Related papers
- Joint Detection of Fraud and Concept Drift in Online Conversations with LLM-Assisted Judgment [11.917779863156097]
We propose a two-stage detection framework that first identifies suspicious conversations using a tailored ensemble classification model. To improve the reliability of detection, we incorporate a concept drift analysis step using a One-Class Drift Detector (OCDD). When drift is detected, a large language model (LLM) assesses whether the shift indicates fraudulent manipulation or a legitimate topic change.
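A minimal sketch of the one-class drift idea referenced above: only the "normal" history is modeled, and a turn is flagged when its score deviates strongly from the rolling statistics of the preceding window. The scoring input, window size, and z-threshold are illustrative assumptions, not the paper's actual OCDD.

```python
import statistics
from typing import List


def simple_drift_detector(turn_scores: List[float], window: int = 5,
                          z_thresh: float = 3.0) -> List[int]:
    """Flag turn indices whose score deviates strongly from the rolling
    mean of the preceding `window` turns (one-class: only 'normal'
    history is modeled; the anomalous class is never fitted)."""
    flagged = []
    for i in range(window, len(turn_scores)):
        history = turn_scores[i - window:i]
        mu = statistics.fmean(history)
        sd = statistics.pstdev(history) or 1e-9  # guard against zero variance
        if abs(turn_scores[i] - mu) / sd > z_thresh:
            flagged.append(i)
    return flagged
```

In the two-stage framework above, a flagged index would trigger the LLM judgment step rather than an immediate fraud label, since a drift can also be a legitimate topic change.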
arXiv Detail & Related papers (2025-05-07T22:30:53Z) - Unlocking the Capabilities of Large Vision-Language Models for Generalizable and Explainable Deepfake Detection [18.125287697902813]
Current Large Vision-Language Models (LVLMs) have demonstrated remarkable capabilities in understanding multimodal data. We present a novel framework that unlocks LVLMs' potential capabilities for deepfake detection.
arXiv Detail & Related papers (2025-03-19T03:20:03Z) - 3D-AffordanceLLM: Harnessing Large Language Models for Open-Vocabulary Affordance Detection in 3D Worlds [81.14476072159049]
3D affordance detection is a challenging problem with broad applications to various robotic tasks. We reformulate the traditional affordance detection paradigm into an Instruction Reasoning Affordance Segmentation (IRAS) task. We propose 3D-ADLLM, a framework designed for reasoning affordance detection in 3D open scenes.
arXiv Detail & Related papers (2025-02-27T12:29:44Z) - Cross-domain Few-shot Object Detection with Multi-modal Textual Enrichment [21.36633828492347]
We study Cross-Domain Multi-Modal Few-Shot Object Detection (CDMM-FSOD). We introduce a meta-learning-based framework designed to leverage rich textual semantics as an auxiliary modality to achieve effective domain adaptation. We evaluate the proposed method on common cross-domain object detection benchmarks and demonstrate that it significantly surpasses existing few-shot object detection approaches.
arXiv Detail & Related papers (2025-02-23T06:59:22Z) - From Objects to Events: Unlocking Complex Visual Understanding in Object Detectors via LLM-guided Symbolic Reasoning [71.41062111470414]
Current object detectors excel at entity localization and classification, yet exhibit inherent limitations in event recognition capabilities. We present a novel framework that expands the capability of standard object detectors beyond mere object recognition to complex event understanding. Our key innovation lies in bridging the semantic gap between object detection and event understanding without requiring expensive task-specific training.
arXiv Detail & Related papers (2025-02-09T10:30:54Z) - Improving the Robustness of Knowledge-Grounded Dialogue via Contrastive Learning [71.8876256714229]
We propose an entity-based contrastive learning framework for improving the robustness of knowledge-grounded dialogue systems.
Our method achieves new state-of-the-art performance in terms of automatic evaluation scores.
arXiv Detail & Related papers (2024-01-09T05:16:52Z) - Large Model Based Referring Camouflaged Object Detection [51.80619142347807]
Referring camouflaged object detection (Ref-COD) is a recently proposed problem aiming to segment out specified camouflaged objects matched with a textual or visual reference.
Our motivation is to make full use of the semantic intelligence and intrinsic knowledge of recent Multimodal Large Language Models (MLLMs) to decompose this complex task in a human-like way.
We propose a large-model-based Multi-Level Knowledge-Guided multimodal method for Ref-COD termed MLKG.
arXiv Detail & Related papers (2023-11-28T13:45:09Z) - Knowledge-augmented Frame Semantic Parsing with Hybrid Prompt-tuning [17.6573121083417]
We propose a Knowledge-Augmented Frame Semantic Parsing Architecture (KAF-SPA) to enhance semantic representation.
A Memory-based Knowledge Extraction Module (MKEM) is devised to select accurate frame knowledge and construct the continuous templates.
We also design a Task-oriented Knowledge Probing Module (TKPM) using hybrid prompts to incorporate the selected knowledge into the PLMs and adapt PLMs to the tasks of frame and argument identification.
arXiv Detail & Related papers (2023-03-25T06:41:19Z) - Supporting Vision-Language Model Inference with Confounder-pruning Knowledge Prompt [71.77504700496004]
Vision-language models are pre-trained by aligning image-text pairs in a common space to deal with open-set visual concepts.
To boost the transferability of the pre-trained models, recent works adopt fixed or learnable prompts.
However, how and what prompts can improve inference performance remains unclear.
arXiv Detail & Related papers (2022-05-23T07:51:15Z) - Learning Commonsense-aware Moment-Text Alignment for Fast Video Temporal Grounding [78.71529237748018]
Grounding temporal video segments described in natural language queries effectively and efficiently is a crucial capability needed in vision-and-language fields.
Most existing approaches adopt elaborately designed cross-modal interaction modules to improve the grounding performance.
We propose a commonsense-aware cross-modal alignment framework, which incorporates commonsense-guided visual and text representations into a complementary common space.
arXiv Detail & Related papers (2022-04-04T13:07:05Z) - Plants Don't Walk on the Street: Common-Sense Reasoning for Reliable Semantic Segmentation [0.7696728525672148]
We propose to use a partly human-designed, partly learned set of rules to describe relations between objects of a traffic scene on a high level of abstraction.
In doing so, we improve and robustify existing deep neural networks consuming low-level sensor information.
arXiv Detail & Related papers (2021-04-19T12:51:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.