Human-Readable Fingerprint for Large Language Models
- URL: http://arxiv.org/abs/2312.04828v2
- Date: Wed, 7 Feb 2024 11:01:25 GMT
- Title: Human-Readable Fingerprint for Large Language Models
- Authors: Boyi Zeng, Chenghu Zhou, Xinbing Wang, Zhouhan Lin
- Abstract summary: We introduce a human-readable fingerprint for large language models (LLMs).
Our method generates a dog image as an identity fingerprint for an LLM, where the dog's appearance strongly indicates the LLM's base model.
- Score: 47.952699246648045
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Protecting the copyright of large language models (LLMs) has become crucial
due to their resource-intensive training and accompanying carefully designed
licenses. However, identifying the original base model of an LLM is challenging
due to potential parameter alterations. In this study, we introduce a
human-readable fingerprint for LLMs that uniquely identifies the base model
without exposing model parameters or interfering with training. We first
observe that the vector direction of LLM parameters remains stable after the
model has converged during pretraining, showing negligible perturbations
through subsequent training steps, including continued pretraining, supervised
fine-tuning (SFT), and RLHF, which makes it a sufficient condition to identify
the base model. The necessity is validated by continuing to train an LLM with
an extra loss term that drives the parameters' direction away from its
original value, which damages the model. However, this direction is vulnerable to simple attacks like
dimension permutation or matrix rotation, which significantly change it without
affecting performance. To address this, leveraging the Transformer structure,
we systematically analyze potential attacks and define three invariant terms
that identify an LLM's base model. We make these invariant terms human-readable
by mapping them to a Gaussian vector using a convolutional encoder and then
converting it into a natural image with StyleGAN2. Our method generates a dog
image as an identity fingerprint for an LLM, where the dog's appearance
strongly indicates the LLM's base model. The fingerprint provides intuitive
information for qualitative discrimination, while the invariant terms can be
employed for quantitative and precise verification. Experimental results across
various LLMs demonstrate the effectiveness of our method.
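To make the attack-and-invariant argument concrete, here is a minimal numpy sketch (the toy projection matrices and the single product-style invariant are illustrative stand-ins for the paper's three invariant terms):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_head = 16, 8

# Toy query/key projections standing in for real LLM parameters.
W_q = rng.normal(size=(d_head, d_model))
W_k = rng.normal(size=(d_head, d_model))

def cosine(a, b):
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# 1) Fine-tuning perturbs parameters only slightly, so the direction is stable.
W_q_sft = W_q + 0.01 * rng.normal(size=W_q.shape)
print(cosine(W_q, W_q_sft))                 # close to 1.0

# 2) A permutation attack jointly reorders the head dimension of W_q and W_k.
#    Attention scores x^T W_q^T W_k x' are unchanged, yet the naive direction
#    of W_q is destroyed.
P = np.eye(d_head)[rng.permutation(d_head)]
W_q_atk, W_k_atk = P @ W_q, P @ W_k
print(cosine(W_q, W_q_atk))                 # far from 1.0

# 3) A product-style invariant term cancels the permutation, because it is
#    exactly what enters the attention scores.
print(cosine(W_q.T @ W_k, W_q_atk.T @ W_k_atk))  # 1.0
```

Comparing invariant terms of this flavor across two checkpoints gives the quantitative test; the generated dog image is a qualitative view of the same quantities.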
Related papers
- Model Surgery: Modulating LLM's Behavior Via Simple Parameter Editing [63.20133320524577]
Large Language Models (LLMs) have demonstrated great potential as generalist assistants.
It is crucial that these models exhibit desirable behavioral traits, such as non-toxicity and resilience against jailbreak attempts.
In this paper, we observe that directly editing a small subset of parameters can effectively modulate specific behaviors of LLMs.
arXiv Detail & Related papers (2024-07-11T17:52:03Z)
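A rough sketch of the parameter-editing idea above (the behavior direction and the row-selection heuristic here are hypothetical, not the paper's actual procedure):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 32
W = rng.normal(size=(d, d))                       # a toy weight matrix
v = rng.normal(size=d); v /= np.linalg.norm(v)    # hypothetical behavior direction

# Edit only the few rows most aligned with the behavior direction,
# removing their component along it; all other parameters stay untouched.
scores = np.abs(W @ v)
top_k = np.argsort(scores)[-3:]
W_edited = W.copy()
W_edited[top_k] -= np.outer(W_edited[top_k] @ v, v)

print("rows edited:", top_k.tolist(), "out of", d)
print("residual alignment:", np.abs(W_edited[top_k] @ v).max())  # ~0
```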
- A Fingerprint for Large Language Models [10.63985246068255]
We propose a novel black-box fingerprinting technique for large language models (LLMs).
Experimental results indicate that the proposed technique achieves superior performance in ownership verification and robustness against PEFT attacks.
arXiv Detail & Related papers (2024-07-01T12:25:42Z)
- Aligning Large Language Models via Fine-grained Supervision [20.35000061196631]
Pre-trained large-scale language models (LLMs) excel at producing coherent articles, yet their outputs may be untruthful, toxic, or fail to align with user expectations.
Current approaches focus on using reinforcement learning with human feedback to improve model alignment.
We propose a method to enhance LLM alignment through fine-grained token-level supervision.
arXiv Detail & Related papers (2024-06-04T20:21:45Z)
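One way to picture the token-level supervision above (a generic sketch, not the paper's exact objective): weight each token's negative log-likelihood by its own fine-grained signal instead of a single sequence-level reward.

```python
import numpy as np

def token_weighted_nll(logits, targets, token_weights):
    """Cross-entropy where each position carries its own supervision weight."""
    m = logits.max(axis=-1, keepdims=True)        # stable log-softmax
    log_probs = logits - m - np.log(np.exp(logits - m).sum(axis=-1, keepdims=True))
    nll = -log_probs[np.arange(len(targets)), targets]
    return float((token_weights * nll).mean())

rng = np.random.default_rng(2)
T, V = 5, 10
logits = rng.normal(size=(T, V))
targets = rng.integers(0, V, size=T)
weights = np.array([1.0, 1.0, 0.2, 1.0, 0.0])     # e.g., down-weight flagged tokens
print(token_weighted_nll(logits, targets, weights))
```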
- ProFLingo: A Fingerprinting-based Intellectual Property Protection Scheme for Large Language Models [18.46904928949022]
We propose ProFLingo, a black-box fingerprinting-based IP protection scheme for large language models (LLMs).
ProFLingo generates queries that elicit specific responses from an original model, thereby establishing unique fingerprints.
Our scheme assesses the effectiveness of these queries on a suspect model to determine whether it has been derived from the original model.
arXiv Detail & Related papers (2024-05-03T20:00:40Z)
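The verification loop described above can be pictured as follows (a toy sketch; `query_model` is a hypothetical stand-in for generating from the suspect LLM, and the fingerprint queries and answers are invented):

```python
# Fingerprint: queries crafted so the original model yields these exact answers.
fingerprint = {
    "Complete: the secret passphrase is": "azure-falcon",
    "Complete: my training run id is": "run-0042",
}

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for calling the suspect model."""
    canned = {"Complete: the secret passphrase is": "azure-falcon"}
    return canned.get(prompt, "")

matches = sum(query_model(q).strip() == a for q, a in fingerprint.items())
print(f"fingerprint match rate: {matches / len(fingerprint):.0%}")
# A high match rate suggests the suspect model was derived from the original.
```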
- Temporal Scaling Law for Large Language Models [24.12384260752973]
We propose the novel concept of a Temporal Scaling Law, studying how the test loss of an LLM evolves as the training steps scale up.
In contrast to modeling the test loss as a whole in a coarse-grained manner, we break it down and dive into the fine-grained test loss of each token position.
We derive a much more precise temporal scaling law by studying the temporal patterns of the parameters in the dynamic hyperbolic law.
arXiv Detail & Related papers (2024-04-27T05:49:11Z)
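The per-position test loss described above is straightforward to compute (a minimal numpy sketch with toy logits; fitting the law itself is the paper's contribution):

```python
import numpy as np

rng = np.random.default_rng(3)
batch, seq_len, vocab = 4, 8, 50
logits = rng.normal(size=(batch, seq_len, vocab))
targets = rng.integers(0, vocab, size=(batch, seq_len))

# Stable log-softmax, then the negative log-prob of each target token.
m = logits.max(axis=-1, keepdims=True)
log_probs = logits - m - np.log(np.exp(logits - m).sum(axis=-1, keepdims=True))
nll = -np.take_along_axis(log_probs, targets[..., None], axis=-1).squeeze(-1)

per_position_loss = nll.mean(axis=0)  # shape (seq_len,): one loss per position
overall_loss = nll.mean()             # the usual coarse-grained test loss
print(per_position_loss.round(2), overall_loss.round(2))
```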
- Characterizing Truthfulness in Large Language Model Generations with Local Intrinsic Dimension [63.330262740414646]
We study how to characterize and predict the truthfulness of texts generated from large language models (LLMs).
We suggest investigating internal activations and quantifying an LLM's truthfulness using the local intrinsic dimension (LID) of model activations.
arXiv Detail & Related papers (2024-02-28T04:56:21Z)
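Local intrinsic dimension has a standard maximum-likelihood estimator over nearest-neighbor distances (Levina & Bickel); here is a minimal sketch on toy "activations" (the paper's exact estimator and choice of layer may differ):

```python
import numpy as np

def lid_mle(x, reference, k=20):
    """MLE local intrinsic dimension of point x within a reference set."""
    dists = np.sort(np.linalg.norm(reference - x, axis=1))
    dists = dists[dists > 0][:k]   # drop the zero distance to x itself
    return -1.0 / np.mean(np.log(dists[:-1] / dists[-1]))

rng = np.random.default_rng(4)
# Toy activations: points near a 5-dim subspace embedded in 64 dims.
basis = rng.normal(size=(5, 64))
acts = rng.normal(size=(1000, 5)) @ basis + 0.01 * rng.normal(size=(1000, 64))
print(lid_mle(acts[0], acts))      # comes out near 5, far below 64
```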
- Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models [52.98743860365194]
We propose a new fine-tuning method called Self-Play fIne-tuNing (SPIN).
At the heart of SPIN lies a self-play mechanism, where the LLM refines its capability by playing against instances of itself.
This sheds light on the promise of self-play for reaching human-level performance in LLMs without the need for expert opponents.
arXiv Detail & Related papers (2024-01-02T18:53:13Z)
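The self-play objective can be sketched at the level of sequence log-probabilities (a simplified stand-in for SPIN's training loop; the numbers are toy values, and the real method iterates this between successive generations of the model):

```python
import math

def spin_loss(logp_real, logp_real_ref, logp_self, logp_self_ref, lam=1.0):
    """Logistic loss pushing the current model to prefer human data over its
    own previous-iteration generations, relative to that frozen opponent."""
    margin = lam * ((logp_real - logp_real_ref) - (logp_self - logp_self_ref))
    return math.log1p(math.exp(-margin))

# Toy numbers: the model already slightly prefers the human response.
print(spin_loss(logp_real=-12.0, logp_real_ref=-12.5,
                logp_self=-10.0, logp_self_ref=-9.8))
```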
- LLM-Pruner: On the Structural Pruning of Large Language Models [65.02607075556742]
Large language models (LLMs) have shown remarkable capabilities in language understanding and generation.
We tackle the compression of LLMs within the bound of two constraints: being task-agnostic and minimizing the reliance on the original training dataset.
Our method, named LLM-Pruner, adopts structural pruning that selectively removes non-critical coupled structures.
arXiv Detail & Related papers (2023-05-19T12:10:53Z)
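Pruning "coupled structures" can be illustrated on a two-layer MLP (numpy sketch; the magnitude-based importance score is a placeholder for LLM-Pruner's actual criterion): removing a hidden unit deletes a row of the first weight matrix together with the matching column of the second, so the pruned network stays dense and well-formed.

```python
import numpy as np

rng = np.random.default_rng(5)
d_in, d_hidden, d_out = 16, 32, 16
W1 = rng.normal(size=(d_hidden, d_in))    # x -> hidden
W2 = rng.normal(size=(d_out, d_hidden))   # hidden -> out

# Importance of each hidden unit: magnitude of its coupled row/column pair.
importance = np.linalg.norm(W1, axis=1) * np.linalg.norm(W2, axis=0)
keep = np.sort(np.argsort(importance)[d_hidden // 4:])  # drop the bottom 25%

W1_p, W2_p = W1[keep], W2[:, keep]
x = rng.normal(size=d_in)
y = W2_p @ np.maximum(W1_p @ x, 0)        # forward pass still well-formed
print(W1_p.shape, W2_p.shape, y.shape)
```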
- Predictable MDP Abstraction for Unsupervised Model-Based RL [93.91375268580806]
We propose predictable MDP abstraction (PMA).
Instead of training a predictive model on the original MDP, we train a model on a transformed MDP with a learned action space.
We theoretically analyze PMA and empirically demonstrate that PMA leads to significant improvements over prior unsupervised model-based RL approaches.
arXiv Detail & Related papers (2023-02-08T07:37:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.