Hide and Seek: Fingerprinting Large Language Models with Evolutionary Learning
- URL: http://arxiv.org/abs/2408.02871v1
- Date: Tue, 6 Aug 2024 00:13:10 GMT
- Title: Hide and Seek: Fingerprinting Large Language Models with Evolutionary Learning
- Authors: Dmitri Iourovitski, Sanat Sharma, Rakshak Talwar
- Abstract summary: We introduce a novel black-box approach for fingerprinting Large Language Models (LLMs).
We achieve an impressive 72% accuracy in identifying the correct family of models.
This research opens new avenues for understanding LLM behavior and has significant implications for model attribution, security, and the broader field of AI transparency.
- Score: 0.40964539027092917
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As content generated by Large Language Models (LLMs) has grown exponentially, the ability to accurately identify and fingerprint such text has become increasingly crucial. In this work, we introduce a novel black-box approach for fingerprinting LLMs, achieving an impressive 72% accuracy in identifying the correct family of models (such as Llama, Mistral, Gemma, etc.) among a lineup of LLMs. We present an evolutionary strategy that leverages the capabilities of one LLM to discover the most salient features for identifying other LLMs. Our method employs a unique "Hide and Seek" algorithm, where an Auditor LLM generates discriminative prompts, and a Detective LLM analyzes the responses to fingerprint the target models. This approach not only demonstrates the feasibility of LLM-driven model identification but also reveals insights into the semantic manifolds of different LLM families. By iteratively refining prompts through in-context learning, our system uncovers subtle distinctions between model outputs, providing a powerful tool for LLM analysis and verification. This research opens new avenues for understanding LLM behavior and has significant implications for model attribution, security, and the broader field of AI transparency.
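The core loop described in the abstract is easy to sketch. Below is a minimal illustration of the Auditor/Detective interplay; the `chat` helper, the model names, and all prompt wording are assumptions made for illustration, not details from the paper.

```python
# Minimal sketch of the "Hide and Seek" loop: an Auditor LLM proposes
# discriminative prompts, a Detective LLM classifies the targets' responses,
# and the transcript feeds back to the Auditor for in-context refinement.
# chat(), the model names, and the prompt wording are illustrative assumptions.

def chat(model: str, prompt: str) -> str:
    raise NotImplementedError("wire this up to your LLM API of choice")

TARGETS = ["unknown-model-1", "unknown-model-2"]
FAMILIES = ["Llama", "Mistral", "Gemma"]

def hide_and_seek(auditor: str, detective: str, rounds: int = 5) -> str:
    transcript = ""
    probe = chat(auditor, "Write one prompt whose answers would differ most across LLM families.")
    verdict = ""
    for _ in range(rounds):
        responses = {t: chat(t, probe) for t in TARGETS}
        verdict = chat(detective,
                       f"Responses: {responses}\nAssign each model a family from {FAMILIES}.")
        transcript += f"\nprobe: {probe}\nresponses: {responses}\nverdict: {verdict}"
        # In-context refinement: the Auditor sees prior rounds and sharpens its probe.
        probe = chat(auditor, f"Given this history:{transcript}\nPropose a more discriminative prompt.")
    return verdict
```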
Related papers
- Beyond Binary: Towards Fine-Grained LLM-Generated Text Detection via Role Recognition and Involvement Measurement [51.601916604301685]
Large language models (LLMs) generate content that can undermine trust in online discourse.
Current methods often focus on binary classification, failing to address the complexities of real-world scenarios like human-AI collaboration.
To move beyond binary classification and address these challenges, we propose a new paradigm for detecting LLM-generated content.
arXiv Detail & Related papers (2024-10-18T08:14:10Z)
- LLMmap: Fingerprinting For Large Language Models [15.726286532500971]
With as few as 8 interactions, LLMmap identifies 42 different LLM versions with over 95% accuracy.
We discuss potential mitigations and demonstrate that, against resourceful adversaries, effective countermeasures may be challenging or even unrealizable.
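The probing idea sketches naturally: fix a small set of query prompts, record each known model's responses as a signature, and match an unknown model against the store. The probes and similarity measure below are illustrative stand-ins, not LLMmap's actual queries or classifier.

```python
# Sketch of query-based fingerprinting in the spirit of LLMmap: send a small
# fixed probe set, store each known model's responses as a signature, and
# match an unknown model by similarity. Probes and the similarity measure
# are illustrative assumptions.
from difflib import SequenceMatcher

PROBES = ["Repeat exactly: zyx@7", "What is 17 * 23?", "Who created you?"]

def signature(ask) -> list[str]:
    """ask(prompt) -> response for the model being fingerprinted."""
    return [ask(p) for p in PROBES]

def identify(unknown: list[str], known: dict[str, list[str]]) -> str:
    def sim(a, b):
        return sum(SequenceMatcher(None, x, y).ratio() for x, y in zip(a, b))
    return max(known, key=lambda name: sim(unknown, known[name]))
```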
arXiv Detail & Related papers (2024-07-22T17:59:45Z)
- Tokenization Matters! Degrading Large Language Models through Challenging Their Tokenization [12.885866125783618]
Large Language Models (LLMs) tend to produce inaccurate responses to specific queries.
We construct an adversarial dataset, named ADT (Adversarial Dataset for Tokenizer), to challenge LLMs' tokenization.
Our empirical results reveal that ADT is highly effective at challenging the tokenization of leading LLMs, including GPT-4o, Llama-3, and Qwen2.5-max.
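ADT's construction is not detailed in this summary, but the failure mode it targets is easy to see by inspecting token boundaries; here is a small probe using the `tiktoken` tokenizer (the choice of tokenizer and test strings is mine, not the paper's).

```python
# Probe of the failure mode ADT targets: tokenizers merge characters into
# opaque units, so character-level reasoning must cross token boundaries.
# Tokenizer choice and test strings are my own; ADT's construction differs.
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")
for text in ["strawberry", "a-b-c-d", "ThisIsOneWord"]:
    pieces = [enc.decode([tok]) for tok in enc.encode(text)]
    print(f"{text!r} -> {pieces}")  # token boundaries ignore character structure
```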
arXiv Detail & Related papers (2024-05-27T11:39:59Z)
- Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing [56.75702900542643]
We introduce AlphaLLM for the self-improvement of Large Language Models.
It integrates Monte Carlo Tree Search (MCTS) with LLMs to establish a self-improving loop.
Our experimental results show that AlphaLLM significantly enhances the performance of LLMs without additional annotations.
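AlphaLLM's actual MCTS rollouts, value backup, and fine-tuning step are well beyond a snippet, but the shape of the self-improving loop (imagine candidates, search over them with a critic, feed the winners back) can be sketched roughly; `generate` and `critique` below are toy stand-ins for LLM calls.

```python
# Highly simplified sketch of an imagination-search-criticism loop; the
# paper's MCTS machinery and the fine-tuning step itself are omitted.
import random

def generate(model: str, prompt: str) -> str:
    return f"candidate answer {random.random():.3f}"   # imagine a response

def critique(model: str, prompt: str, answer: str) -> float:
    return random.random()                             # score it with a critic

def self_improve(model: str, prompts: list[str], k: int = 8):
    pairs = []
    for prompt in prompts:
        candidates = [generate(model, prompt) for _ in range(k)]          # imagination
        best = max(candidates, key=lambda a: critique(model, prompt, a))  # search + criticism
        pairs.append((prompt, best))   # next round: fine-tune on the winners
    return pairs

print(self_improve("toy-llm", ["Prove 2+2=4."], k=4))
```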
arXiv Detail & Related papers (2024-04-18T15:21:34Z)
- Towards Modeling Learner Performance with Large Language Models [7.002923425715133]
This paper investigates whether the pattern recognition and sequence modeling capabilities of LLMs can be extended to the domain of knowledge tracing.
We compare two approaches to using LLMs for this task, zero-shot prompting and model fine-tuning, with existing, non-LLM approaches to knowledge tracing.
While LLM-based approaches do not achieve state-of-the-art performance, fine-tuned LLMs surpass the performance of naive baseline models and perform on par with standard Bayesian Knowledge Tracing approaches.
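The zero-shot variant amounts to serializing a student's response history into a prompt and asking the LLM to predict the next outcome; a sketch follows, where the serialization format and wording are assumptions rather than the paper's template.

```python
# Zero-shot knowledge-tracing sketch: turn a response history into a prompt
# and ask an LLM to predict the next outcome. Format and wording are
# assumptions, not the paper's template.
history = [("fractions", True), ("fractions", False), ("decimals", True)]

lines = [f"Q{i} [{skill}]: {'correct' if ok else 'incorrect'}"
         for i, (skill, ok) in enumerate(history, start=1)]
prompt = ("A student answered these questions in order:\n"
          + "\n".join(lines)
          + "\nWill the student answer the next 'fractions' question correctly? "
            "Answer yes or no.")
print(prompt)  # send to any chat LLM; compare against a fine-tuned or BKT baseline
```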
arXiv Detail & Related papers (2024-02-29T14:06:34Z)
- Characterizing Truthfulness in Large Language Model Generations with Local Intrinsic Dimension [63.330262740414646]
We study how to characterize and predict the truthfulness of texts generated from large language models (LLMs).
We suggest investigating internal activations and quantifying an LLM's truthfulness using the local intrinsic dimension (LID) of model activations.
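Local intrinsic dimension is commonly estimated from k-nearest-neighbor distances; below is a sketch using the standard Levina-Bickel maximum-likelihood estimator, with random vectors standing in for real LLM hidden states (the paper's exact estimator and layer choice may differ).

```python
# Levina-Bickel MLE estimate of local intrinsic dimension at each point,
# computed from k-nearest-neighbor distances. Feeding in LLM hidden states
# (one row per token or sequence) mirrors the paper's setup; the random
# matrix below is just a stand-in.
import numpy as np

def local_intrinsic_dimension(X: np.ndarray, k: int = 20) -> np.ndarray:
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    np.fill_diagonal(d, np.inf)              # exclude self-distances
    knn = np.sort(d, axis=1)[:, :k]          # k nearest distances per point
    # MLE: inverse mean log-ratio of the k-th NN distance to the closer ones
    logs = np.log(knn[:, -1:] / knn[:, :-1])  # shape (n, k-1)
    return (k - 1) / logs.sum(axis=1)

rng = np.random.default_rng(0)
acts = rng.normal(size=(200, 64))            # stand-in for hidden states
print(local_intrinsic_dimension(acts).mean())
```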
arXiv Detail & Related papers (2024-02-28T04:56:21Z)
- LLM Inference Unveiled: Survey and Roofline Model Insights [62.92811060490876]
Large Language Model (LLM) inference is rapidly evolving, presenting a unique blend of opportunities and challenges.
Our survey stands out from traditional literature reviews by not only summarizing the current state of research but also by introducing a framework based on the roofline model.
This framework identifies the bottlenecks when deploying LLMs on hardware devices and provides a clear understanding of practical problems.
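The roofline model itself is a one-line formula: attainable throughput is the smaller of peak compute and memory bandwidth times arithmetic intensity. A worked example with round, illustrative hardware numbers (not figures from the survey):

```python
# Roofline model: attainable FLOP/s = min(peak compute, bandwidth * intensity).
# The hardware numbers below are round illustrative figures, not from the survey.
def roofline(peak_flops: float, bandwidth: float, intensity: float) -> float:
    """intensity = FLOPs performed per byte moved from memory."""
    return min(peak_flops, bandwidth * intensity)

PEAK = 300e12   # 300 TFLOP/s peak compute
BW = 2e12       # 2 TB/s memory bandwidth
# Autoregressive decoding at small batch is memory-bound (very few FLOPs per
# weight byte read), far below the ridge point PEAK / BW = 150 FLOPs/byte.
print(f"decode:  {roofline(PEAK, BW, 2.0) / 1e12:.1f} TFLOP/s attainable")
print(f"prefill: {roofline(PEAK, BW, 400.0) / 1e12:.1f} TFLOP/s attainable")
```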
arXiv Detail & Related papers (2024-02-26T07:33:05Z)
- Small Models, Big Insights: Leveraging Slim Proxy Models To Decide When and What to Retrieve for LLMs [60.40396361115776]
This paper introduces a novel collaborative approach, namely SlimPLM, that detects missing knowledge in large language models (LLMs) with a slim proxy model.
We employ a proxy model with far fewer parameters and take its answers to the user question as heuristic answers.
Heuristic answers are then utilized to predict the knowledge required to answer the user question, as well as the known and unknown knowledge within the LLM.
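The idea sketches naturally: a slim model drafts a heuristic answer, a check flags what looks unreliable, and retrieval plus the large model run only when needed. Every component below is a hedged stand-in, not the paper's actual models or judge.

```python
# Sketch of the SlimPLM idea: a slim proxy drafts a heuristic answer, a check
# flags unreliable drafts, and retrieval plus the large LLM run only when
# needed. All components are illustrative placeholders.

def proxy_answer(question: str) -> str:
    return "heuristic draft ..."                 # small, cheap LLM call

def looks_unreliable(question: str, draft: str) -> bool:
    return "..." in draft                        # toy stand-in for a claim-level judge

def answer(question: str, large_llm, retrieve) -> str:
    draft = proxy_answer(question)
    if looks_unreliable(question, draft):
        docs = retrieve(question + " " + draft)   # the draft guides the query
        return large_llm(question, context=docs)  # retrieval-augmented path
    return large_llm(question)                    # the LLM already knows enough
```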
arXiv Detail & Related papers (2024-02-19T11:11:08Z)
- Supervised Knowledge Makes Large Language Models Better In-context Learners [94.89301696512776]
Large Language Models (LLMs) exhibit emerging in-context learning abilities through prompt engineering.
The challenge of improving the generalizability and factuality of LLMs in natural language understanding and question answering remains under-explored.
We propose a framework that enhances the reliability of LLMs as it: 1) generalizes to out-of-distribution data, 2) elucidates how LLMs benefit from discriminative models, and 3) minimizes hallucinations in generative tasks.
arXiv Detail & Related papers (2023-12-26T07:24:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.