Origin Tracing and Detecting of LLMs
- URL: http://arxiv.org/abs/2304.14072v1
- Date: Thu, 27 Apr 2023 10:05:57 GMT
- Title: Origin Tracing and Detecting of LLMs
- Authors: Linyang Li, Pengyu Wang, Ke Ren, Tianxiang Sun, Xipeng Qiu
- Abstract summary: We propose an effective method to trace and detect AI-generated text.
Our proposed method works under both white-box and black-box settings.
We construct extensive experiments to examine whether we can trace the origins of given texts.
- Score: 46.02811367717774
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The extraordinary performance of large language models (LLMs) heightens the
importance of detecting whether a text is generated by an AI system. More
importantly, as more and more companies and institutions release their own LLMs,
the origin of a generated text can be hard to trace. Since LLMs are heading towards
the era of AGI, tracing the origin of LLM outputs is of great importance, much like
origin tracing in anthropology. In this paper, we first raise the concern of
origin tracing for LLMs and propose an effective method to trace and detect
AI-generated text. We introduce a novel algorithm that leverages the
contrastive features between LLMs and extracts model-wise features to trace
text origins. Our proposed method works under both white-box and black-box
settings and can therefore be widely generalized to detect various LLMs (e.g.,
it can detect text from GPT-3 models without access to the GPT-3 models
themselves). Moreover, our proposed method requires only limited data compared
with supervised learning methods and can be extended to trace the origins of
newly released models. We construct extensive experiments to examine whether we
can trace the origins of given texts. We provide valuable observations based on
the experimental results, such as the difficulty level of AI origin tracing and
the similarities between AI origins, and we call on LLM providers to address the
associated ethical concerns. We are releasing all code and data as a toolkit and
benchmark for future studies on AI origin tracing and detection. (All available
resources are released at https://github.com/OpenLMLab/.)
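To make the white-box side of this concrete, here is a minimal sketch (not the paper's exact contrastive-feature algorithm): score a candidate text under several known LMs and attribute it to the model that finds it least surprising. The model names below are placeholders.

```python
# Minimal white-box origin-tracing sketch: attribute a text to the candidate
# model that assigns it the lowest perplexity. Model names are stand-ins.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

CANDIDATES = ["gpt2", "distilgpt2"]  # hypothetical candidate origins

def perplexity(model, tok, text):
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean per-token cross-entropy
    return torch.exp(loss).item()

def trace_origin(text):
    scores = {}
    for name in CANDIDATES:
        tok = AutoTokenizer.from_pretrained(name)
        model = AutoModelForCausalLM.from_pretrained(name).eval()
        scores[name] = perplexity(model, tok, text)
    return min(scores, key=scores.get), scores

origin, scores = trace_origin("The quick brown fox jumps over the lazy dog.")
print(origin, scores)
```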
Related papers
- AttnTrace: Attention-based Context Traceback for Long-Context LLMs [30.472252134918815]
We propose AttnTrace, a new context traceback method based on the attention weights produced by an LLM for a prompt.
The results demonstrate that AttnTrace is more accurate and efficient than existing state-of-the-art context traceback methods.
arXiv Detail & Related papers (2025-08-05T17:56:51Z)
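A minimal sketch of the attention-based traceback idea above: aggregate the attention mass the model assigns to each context passage and rank passages by it. This assumes a white-box HuggingFace model ("gpt2" as a stand-in); AttnTrace's actual scoring rule differs in detail.

```python
# Sketch: rank context passages by total attention mass received.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", output_attentions=True).eval()

def attention_traceback(passages, question):
    prompt = "\n".join(passages) + "\n" + question
    enc = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc)
    # Average attention over layers and heads: (seq_len, seq_len)
    attn = torch.stack(out.attentions).mean(dim=(0, 2))[0]
    received = attn.sum(dim=0)  # total attention each token receives
    # Map token-level mass back to passages via character offsets
    offsets = tok(prompt, return_offsets_mapping=True)["offset_mapping"]
    scores, start = [0.0] * len(passages), 0
    for i, p in enumerate(passages):
        end = start + len(p)
        for t, (a, b) in enumerate(offsets):
            if a >= start and b <= end:
                scores[i] += received[t].item()
        start = end + 1  # skip the newline separator
    return sorted(range(len(passages)), key=lambda i: -scores[i])

print(attention_traceback(["Paris is in France.", "Cats are mammals."],
                          "Which country is Paris in?"))
```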
- Unlearning Isn't Invisible: Detecting Unlearning Traces in LLMs from Model Outputs [19.08691637612329]
Machine unlearning (MU) for large language models (LLMs) seeks to remove specific undesirable data or knowledge from a trained model.
We identify a new vulnerability post-unlearning: unlearning trace detection.
We show that forget-relevant prompts enable over 90% accuracy in detecting unlearning traces across all model sizes.
arXiv Detail & Related papers (2025-06-16T21:03:51Z)
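A hedged sketch of the detection setting above: probe models with forget-relevant prompts and train a simple classifier on the responses to separate unlearned from original models. The toy data and TF-IDF features are illustrative assumptions, not the paper's pipeline.

```python
# Sketch: classify model outputs to detect unlearning traces.
# Toy responses stand in for real outputs to forget-relevant prompts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

responses = [
    "I'm sorry, I don't have information about that author.",
    "The author published the novel in 1994 with Knopf.",
    "I cannot recall details about that topic.",
    "The paper introduced the method in Section 3.",
    "That information is not something I can provide.",
    "The protagonist travels to Kyoto in chapter two.",
]
labels = [1, 0, 1, 0, 1, 0]  # 1 = unlearned model, 0 = original model

X_train, X_test, y_train, y_test = train_test_split(responses, labels)
vec = TfidfVectorizer().fit(X_train)
clf = LogisticRegression().fit(vec.transform(X_train), y_train)
print("trace-detection accuracy:", clf.score(vec.transform(X_test), y_test))
```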
- TracLLM: A Generic Framework for Attributing Long Context LLMs [34.802736332993994]
We develop TracLLM, the first generic context traceback framework tailored to long context LLMs.
Our framework can improve the effectiveness and efficiency of existing feature attribution methods.
Our evaluation results show TracLLM can effectively identify the texts in a long context that lead to the output of an LLM.
arXiv Detail & Related papers (2025-06-04T17:48:16Z)
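A minimal sketch of context traceback by feature attribution (plain leave-one-out ablation, one of the attribution methods such frameworks build on; `answer_logprob` is a hypothetical scoring call, not TracLLM's API):

```python
# Sketch: leave-one-out attribution over context chunks. Score each chunk by
# how much removing it reduces the likelihood of the model's original answer.
def answer_logprob(context_chunks, question, answer):
    """Stand-in: return log p(answer | context + question) from your LLM."""
    raise NotImplementedError

def leave_one_out(chunks, question, answer):
    base = answer_logprob(chunks, question, answer)
    scores = []
    for i in range(len(chunks)):
        ablated = chunks[:i] + chunks[i + 1:]
        scores.append(base - answer_logprob(ablated, question, answer))
    # High score = removing the chunk hurts the answer = likely responsible.
    return sorted(range(len(chunks)), key=lambda i: -scores[i])
```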
- Iterative Self-Incentivization Empowers Large Language Models as Agentic Searchers [74.17516978246152]
Large language models (LLMs) have been widely integrated into information retrieval to advance traditional techniques.
We propose EXSEARCH, an agentic search framework in which the LLM learns to retrieve useful information as its reasoning unfolds.
Experiments on four knowledge-intensive benchmarks show that EXSEARCH substantially outperforms baselines.
arXiv Detail & Related papers (2025-05-26T15:27:55Z)
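The agentic-search pattern above can be sketched as an interleaved think/retrieve loop; `llm` and `search` below are hypothetical stubs, and the paper's contribution is the self-incentivized training of such a loop, not the loop itself.

```python
# Sketch of a generic agentic search loop: the model alternates between
# proposing a query, reading retrieved evidence, and deciding to answer.
def llm(prompt: str) -> str:
    """Stand-in for a chat-completion call."""
    raise NotImplementedError

def search(query: str) -> str:
    """Stand-in for a retrieval call (e.g., BM25 or a web API)."""
    raise NotImplementedError

def agentic_search(question: str, max_steps: int = 5) -> str:
    evidence = []
    for _ in range(max_steps):
        step = llm(f"Question: {question}\nEvidence: {evidence}\n"
                   "Reply 'SEARCH: <query>' or 'ANSWER: <answer>'.")
        if step.startswith("ANSWER:"):
            return step[len("ANSWER:"):].strip()
        evidence.append(search(step.split("SEARCH:", 1)[-1].strip()))
    return llm(f"Question: {question}\nEvidence: {evidence}\nFinal answer:")
```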
- Learning on LLM Output Signatures for gray-box LLM Behavior Analysis [52.81120759532526]
Large Language Models (LLMs) have achieved widespread adoption, yet our understanding of their behavior remains limited.
We develop a transformer-based approach to process LLM output signatures that theoretically guarantees approximation of existing techniques.
Our approach achieves superior performance on hallucination and data contamination detection in gray-box settings.
arXiv Detail & Related papers (2025-03-18T09:04:37Z)
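A sketch of the gray-box setting: build a feature sequence from the model's output signature (here just top-k token probabilities per position, an illustrative choice) that a downstream classifier could consume.

```python
# Sketch: extract a per-token "output signature" (top-k probabilities) for
# downstream hallucination or contamination detection. "gpt2" is a stand-in.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def output_signature(text: str, k: int = 10) -> torch.Tensor:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        probs = model(ids).logits.softmax(dim=-1)
    topk = probs.topk(k, dim=-1).values[0]  # (seq_len, k)
    return topk  # one k-dim signature row per position

print(output_signature("The capital of France is Paris.").shape)
```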
- A Comprehensive Analysis on LLM-based Node Classification Algorithms [21.120619437937382]
We develop a comprehensive testbed for node classification using Large Language Models (LLMs).
It includes ten datasets, eight LLM-based algorithms, and three learning paradigms, and is designed for easy extension with new methods and datasets.
We conduct extensive experiments, training and evaluating over 2,200 models, to determine the key settings that affect performance.
Our findings uncover eight insights, e.g., LLM-based methods can significantly outperform traditional methods in a semi-supervised setting, while the advantage is marginal in a supervised setting.
arXiv Detail & Related papers (2025-02-02T15:56:05Z)
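For flavor, a minimal zero-shot variant of LLM-based node classification: serialize a node's text attributes and neighbor labels into a prompt. The `llm` stub and prompt format are assumptions; the benchmarked algorithms are more involved.

```python
# Sketch: zero-shot node classification by prompting an LLM with the node's
# text and its neighbors' known labels. `llm` is a hypothetical stub.
def llm(prompt: str) -> str:
    raise NotImplementedError

def classify_node(node_text, neighbor_labels, label_set):
    prompt = (
        f"Node text: {node_text}\n"
        f"Labels of neighboring nodes: {', '.join(neighbor_labels)}\n"
        f"Choose one label from {label_set}. Answer with the label only."
    )
    return llm(prompt).strip()
```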
- AD-LLM: Benchmarking Large Language Models for Anomaly Detection [50.57641458208208]
This paper introduces AD-LLM, the first benchmark that evaluates how large language models can help with anomaly detection.
We examine three key tasks: zero-shot detection, using LLMs' pre-trained knowledge to perform AD without task-specific training; data augmentation, generating synthetic data and category descriptions to improve AD models; and model selection, using LLMs to suggest unsupervised AD models.
arXiv Detail & Related papers (2024-12-15T10:22:14Z)
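A sketch of the zero-shot detection task above: ask the model directly whether a sample fits a described category, with no task-specific training. The prompt wording and `llm` stub are assumptions.

```python
# Sketch: zero-shot anomaly detection by prompting. `llm` is a stub.
def llm(prompt: str) -> str:
    raise NotImplementedError

def is_anomaly(sample: str, category_description: str) -> bool:
    prompt = (
        f"Category: {category_description}\n"
        f"Sample: {sample}\n"
        "Does the sample belong to the category? Answer YES or NO."
    )
    return llm(prompt).strip().upper().startswith("NO")
```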
- Robust Detection of LLM-Generated Text: A Comparative Analysis [0.276240219662896]
As large language models become widely integrated into many aspects of life, their output can quickly fill online resources.
It becomes increasingly important to develop powerful detectors for the generated text.
Such detectors are essential to prevent the potential misuse of these technologies and to protect areas such as social media from negative effects.
arXiv Detail & Related papers (2024-11-09T18:27:15Z)
- GigaCheck: Detecting LLM-generated Content [72.27323884094953]
In this work, we investigate the task of generated text detection by proposing GigaCheck.
Our research explores two approaches: (i) distinguishing human-written texts from LLM-generated ones, and (ii) detecting LLM-generated intervals in human-machine collaborative texts.
Specifically, we use a fine-tuned general-purpose LLM in conjunction with a DETR-like detection model, adapted from computer vision, to localize AI-generated intervals within text.
arXiv Detail & Related papers (2024-10-31T08:30:55Z)
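GigaCheck localizes intervals with a DETR-like detector; as a much simpler hedged stand-in, one can slide a window over the text, flag windows a binary detector scores as AI-generated, and merge adjacent flagged windows into intervals. `detect_prob` is a hypothetical per-window classifier.

```python
# Sketch: locate AI-generated intervals by sliding-window classification and
# merging flagged windows. A simplified stand-in, not GigaCheck's detector.
def detect_prob(window_text: str) -> float:
    """Stand-in: probability that a text window is AI-generated."""
    raise NotImplementedError

def ai_intervals(words, win=50, stride=25, threshold=0.5):
    flagged = []
    for start in range(0, max(1, len(words) - win + 1), stride):
        if detect_prob(" ".join(words[start:start + win])) > threshold:
            flagged.append((start, start + win))
    merged = []
    for a, b in flagged:  # merge overlapping windows into intervals
        if merged and a <= merged[-1][1]:
            merged[-1] = (merged[-1][0], b)
        else:
            merged.append((a, b))
    return merged  # word-index spans likely to be AI-generated
```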
- Are you still on track!? Catching LLM Task Drift with Activations [55.75645403965326]
Task drift allows attackers to exfiltrate data or influence the LLM's output for other users.
We show that a simple linear classifier can detect drift with near-perfect ROC AUC on an out-of-distribution test set.
We observe that this approach generalizes surprisingly well to unseen task domains, such as prompt injections, jailbreaks, and malicious instructions.
arXiv Detail & Related papers (2024-06-02T16:53:21Z)
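A minimal sketch of the activation-probe idea above: represent each prompt by the difference between hidden states taken before and after the model reads external data, then fit a linear classifier. The pooling and probe choices are assumptions.

```python
# Sketch: detect task drift with a linear probe on activation deltas.
# Features = hidden state after processing external text minus before.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained(
    "gpt2", output_hidden_states=True).eval()

def last_hidden(text: str) -> torch.Tensor:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids).hidden_states[-1][0, -1]  # final layer, last token

def drift_features(task: str, external_text: str):
    return (last_hidden(task + "\n" + external_text) - last_hidden(task)).numpy()

# Training: X = [drift_features(t, x) for clean and injected examples], y = 0/1
# probe = LogisticRegression().fit(X, y)
```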
- SPOT: Text Source Prediction from Originality Score Thresholding [6.790905400046194]
Countermeasures aimed at detecting misinformation usually involve domain-specific models trained to recognize the relevance of any information.
Instead of evaluating the validity of the information, we propose to investigate LLM-generated text from the perspective of trust.
arXiv Detail & Related papers (2024-05-30T21:51:01Z)
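A sketch of score-thresholding source prediction: compute an "originality" proxy (here perplexity under a reference LM, an assumption rather than SPOT's exact score) and compare it to a calibrated threshold.

```python
# Sketch: predict text source by thresholding a perplexity-based
# "originality" score under a reference LM ("gpt2" as a stand-in).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def originality_score(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return torch.exp(model(ids, labels=ids).loss).item()

def predict_source(text: str, threshold: float = 25.0) -> str:
    # LLM-generated text tends toward lower perplexity; the threshold value
    # here is arbitrary and must be calibrated on held-out data.
    return "llm" if originality_score(text) < threshold else "human"
```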
- ReMoDetect: Reward Models Recognize Aligned LLM's Generations [55.06804460642062]
Large language models (LLMs) generate human-preferable texts.
In this paper, we identify the common characteristics shared by these models.
We propose two training schemes to further improve the detection ability of the reward model.
arXiv Detail & Related papers (2024-05-27T17:38:33Z)
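The core observation above can be sketched as follows: aligned-LLM text tends to receive higher reward-model scores than human text, so a threshold on the reward score is already a weak detector. The checkpoint name below is only an example of a public reward model; ReMoDetect further trains the reward model.

```python
# Sketch: threshold a reward model's score to flag aligned-LLM text.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "OpenAssistant/reward-model-deberta-v3-large-v2"  # example checkpoint
tok = AutoTokenizer.from_pretrained(name)
rm = AutoModelForSequenceClassification.from_pretrained(name).eval()

def reward_score(prompt: str, response: str) -> float:
    enc = tok(prompt, response, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return rm(**enc).logits[0, 0].item()

def looks_llm_generated(prompt, response, threshold=0.0):
    # The threshold is arbitrary here; calibrate it on labeled examples.
    return reward_score(prompt, response) > threshold
```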
- How Can LLM Guide RL? A Value-Based Approach [68.55316627400683]
Reinforcement learning (RL) has become the de facto standard practice for sequential decision-making problems by improving future acting policies with feedback.
Recent developments in large language models (LLMs) have showcased impressive capabilities in language understanding and generation, yet they fall short in exploration and self-improvement capabilities.
We develop an algorithm named LINVIT that incorporates LLM guidance as a regularization factor in value-based RL, leading to significant reductions in the amount of data needed for learning.
arXiv Detail & Related papers (2024-02-25T20:07:13Z)
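A hedged sketch of using an LLM policy as a regularizer in value-based RL: treat the LLM's suggested action distribution as a prior and back up a soft value with a KL penalty toward it. This is a generic regularized Bellman backup for illustration, not LINVIT's exact algorithm.

```python
# Sketch: KL-regularized Bellman backup where an LLM-derived prior policy
# rho(a|s) regularizes the learned policy. Generic illustration, not LINVIT.
import numpy as np

def regularized_backup(Q, reward, next_state, rho, gamma=0.99, lam=1.0):
    """One backup: soft value of next_state under a KL(pi || rho) penalty.

    Solving max_pi E_pi[Q] - lam * KL(pi || rho) gives the soft maximum
    V(s') = lam * log sum_a rho(a|s') * exp(Q(s', a) / lam).
    """
    q = Q[next_state]  # (num_actions,)
    v_next = lam * np.log(np.dot(rho[next_state], np.exp(q / lam)))
    return reward + gamma * v_next
```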
- LLM-Detector: Improving AI-Generated Chinese Text Detection with Open-Source LLM Instruction Tuning [4.328134379418151]
Existing AI-generated text detection models are prone to in-domain over-fitting.
We propose LLM-Detector, a novel method for both document-level and sentence-level text detection.
arXiv Detail & Related papers (2024-02-02T05:54:12Z)
- Measuring Distributional Shifts in Text: The Advantage of Language Model-Based Embeddings [11.393822909537796]
An essential part of monitoring machine learning models in production is measuring input and output data drift.
Recent advancements in large language models (LLMs) indicate their effectiveness in capturing semantic relationships.
We propose a clustering-based algorithm for measuring distributional shifts in text data by exploiting such embeddings.
arXiv Detail & Related papers (2023-12-04T20:46:48Z)
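A sketch of the clustering-based drift measurement above: embed reference and production texts, cluster the reference embeddings, and compare the two cluster-assignment histograms. The sentence-transformers model and Jensen-Shannon divergence are illustrative choices.

```python
# Sketch: measure text drift as the divergence between cluster-assignment
# histograms of reference vs. production embeddings.
import numpy as np
from scipy.spatial.distance import jensenshannon
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative choice

def drift_score(reference_texts, production_texts, k=8):
    ref = embedder.encode(reference_texts)
    prod = embedder.encode(production_texts)
    km = KMeans(n_clusters=k, n_init=10).fit(ref)
    hist = lambda X: np.bincount(km.predict(X), minlength=k) / len(X)
    return jensenshannon(hist(ref), hist(prod))  # 0 = no drift
```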
- LLMRefine: Pinpointing and Refining Large Language Models via Fine-Grained Actionable Feedback [65.84061725174269]
Recent large language models (LLMs) leverage human feedback to improve their generation quality.
We propose LLMRefine, an inference-time optimization method to refine an LLM's output.
We conduct experiments on three text generation tasks: machine translation, long-form question answering (QA), and topical summarization.
LLMRefine consistently outperforms all baseline approaches, achieving improvements of up to 1.7 MetricX points on translation, 8.1 ROUGE-L on ASQA, and 2.2 ROUGE-L on topical summarization.
arXiv Detail & Related papers (2023-11-15T19:52:11Z)
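The inference-time refinement loop can be sketched as: generate a draft, get fine-grained feedback (error spans with a score), revise, and keep the best-scoring candidate. `llm` and `fine_grained_feedback` are hypothetical stubs; LLMRefine couples this loop with a learned feedback model and a more sophisticated search.

```python
# Sketch: iterative refinement driven by fine-grained feedback.
def llm(prompt: str) -> str:
    raise NotImplementedError

def fine_grained_feedback(output: str):
    """Stand-in: return (score, list of error-span descriptions)."""
    raise NotImplementedError

def refine(task_prompt: str, steps: int = 5) -> str:
    best = llm(task_prompt)
    best_score, errors = fine_grained_feedback(best)
    for _ in range(steps):
        revision = llm(f"{task_prompt}\nDraft: {best}\n"
                       f"Fix these errors: {errors}\nRevised:")
        score, new_errors = fine_grained_feedback(revision)
        if score > best_score:  # greedy accept; keep the best candidate
            best, best_score, errors = revision, score, new_errors
    return best
```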
- Implicit meta-learning may lead language models to trust more reliable sources [9.073765860925395]
We introduce random strings ("tags") as indicators of usefulness in a synthetic fine-tuning dataset.
Fine-tuning on this dataset leads to implicit meta-learning (IML).
We reflect on what our results might imply about the capabilities, risks, and controllability of future AI systems.
arXiv Detail & Related papers (2023-10-23T15:50:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.