Tracing Knowledge in Language Models Back to the Training Data
- URL: http://arxiv.org/abs/2205.11482v2
- Date: Tue, 24 May 2022 05:19:09 GMT
- Title: Tracing Knowledge in Language Models Back to the Training Data
- Authors: Ekin Akyürek, Tolga Bolukbasi, Frederick Liu, Binbin Xiong, Ian Tenney, Jacob Andreas, Kelvin Guu
- Abstract summary: We introduce a new benchmark for fact tracing: tracing language models' assertions back to the training examples that provided evidence for those predictions.
We evaluate influence methods for fact tracing, using well-understood information retrieval metrics.
- Score: 39.02793789536856
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural language models (LMs) have been shown to memorize a great deal of
factual knowledge. But when an LM generates an assertion, it is often difficult
to determine where it learned this information and whether it is true. In this
paper, we introduce a new benchmark for fact tracing: tracing language models'
assertions back to the training examples that provided evidence for those
predictions. Prior work has suggested that dataset-level influence methods
might offer an effective framework for tracing predictions back to training
data. However, such methods have not been evaluated for fact tracing, and
researchers primarily have studied them through qualitative analysis or as a
data cleaning technique for classification/regression tasks. We present the
first experiments that evaluate influence methods for fact tracing, using
well-understood information retrieval (IR) metrics. We compare two popular
families of influence methods -- gradient-based and embedding-based -- and show
that neither can fact-trace reliably; indeed, both methods fail to outperform
an IR baseline (BM25) that does not even access the LM. We explore why this
occurs (e.g., gradient saturation) and demonstrate that existing influence
methods must be improved significantly before they can reliably attribute
factual predictions in LMs.
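To make the evaluation setup concrete, here is a minimal sketch of the IR-style protocol the abstract describes: rank candidate training examples for a model assertion using a small, self-contained BM25 ranker, then score the ranking with mean reciprocal rank (MRR) against ground-truth proponent examples. The corpus, query, and proponent labels below are illustrative placeholders rather than the paper's benchmark data, and the BM25 variant (parameters k1 and b, log-smoothed IDF) is just one common formulation.

```python
import math
from collections import Counter

def bm25_scores(query_tokens, corpus_tokens, k1=1.5, b=0.75):
    """Score every document in `corpus_tokens` against the query with BM25."""
    n_docs = len(corpus_tokens)
    avg_len = sum(len(d) for d in corpus_tokens) / n_docs
    # Document frequency of each query term.
    df = {t: sum(1 for d in corpus_tokens if t in d) for t in set(query_tokens)}
    scores = []
    for doc in corpus_tokens:
        tf = Counter(doc)
        score = 0.0
        for t in query_tokens:
            if tf[t] == 0:
                continue
            idf = math.log(1 + (n_docs - df[t] + 0.5) / (df[t] + 0.5))
            score += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(doc) / avg_len))
        scores.append(score)
    return scores

def mean_reciprocal_rank(rankings, relevant_sets):
    """rankings: lists of doc indices, best first; relevant_sets: ground-truth proponents."""
    total = 0.0
    for ranked, relevant in zip(rankings, relevant_sets):
        for rank, doc_id in enumerate(ranked, start=1):
            if doc_id in relevant:
                total += 1.0 / rank
                break
    return total / len(rankings)

# Hypothetical miniature training corpus and one traced assertion.
corpus = [
    "Marie Curie was born in Warsaw in 1867.",
    "The Eiffel Tower is located in Paris.",
    "Curie won the Nobel Prize in Physics in 1903.",
]
query = "Marie Curie was born in Warsaw"   # the LM's assertion
proponents = {0}                            # training example that supports it

corpus_tokens = [doc.lower().split() for doc in corpus]
scores = bm25_scores(query.lower().split(), corpus_tokens)
ranked = sorted(range(len(corpus)), key=lambda i: scores[i], reverse=True)
print("MRR:", mean_reciprocal_rank([ranked], [proponents]))
```

With the toy data above, the supporting Marie Curie sentence ranks first and the MRR is 1.0; the abstract's point is that even this LM-agnostic baseline turns out to be hard for influence methods to beat on the actual benchmark.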
Related papers
- What Do Learning Dynamics Reveal About Generalization in LLM Reasoning? [83.83230167222852]
We find that a model's generalization behavior can be effectively characterized by a training metric we call pre-memorization train accuracy.
By connecting a model's learning behavior to its generalization, pre-memorization train accuracy can guide targeted improvements to training strategies.
arXiv Detail & Related papers (2024-11-12T09:52:40Z)
- Scalable Influence and Fact Tracing for Large Language Model Pretraining [14.598556308631018]
Training data attribution (TDA) methods aim to attribute model outputs back to specific training examples.
This paper refines existing gradient-based methods to work effectively at scale; a minimal sketch of this gradient-based family appears after this list.
arXiv Detail & Related papers (2024-10-22T20:39:21Z)
- Probing Language Models for Pre-training Data Detection [11.37731401086372]
We propose to utilize the probing technique for pre-training data detection by examining the model's internal activations.
Our method is simple and effective and leads to more trustworthy pre-training data detection.
arXiv Detail & Related papers (2024-06-03T13:58:04Z)
- Debiasing Machine Unlearning with Counterfactual Examples [31.931056076782202]
We analyze the causal factors behind the unlearning process and mitigate biases at both data and algorithmic levels.
We introduce an intervention-based approach, where knowledge to forget is erased with a debiased dataset.
Our method outperforms existing machine unlearning baselines on evaluation metrics.
arXiv Detail & Related papers (2024-04-24T09:33:10Z)
- Pre-training and Diagnosing Knowledge Base Completion Models [58.07183284468881]
We introduce and analyze an approach to knowledge transfer from one collection of facts to another without the need for entity or relation matching.
The main contribution is a method that can make use of large-scale pre-training on facts collected from unstructured text.
To understand the obtained pre-trained models better, we then introduce a novel dataset for the analysis of pre-trained models for Open Knowledge Base Completion.
arXiv Detail & Related papers (2024-01-27T15:20:43Z)
- Unlearning Traces the Influential Training Data of Language Models [31.33791825286853]
This paper presents UnTrac, which traces the influence of a training dataset on the model's performance by unlearning that dataset from the trained model.
We propose a more scalable approach, UnTrac-Inv, which unlearns a test dataset and evaluates the unlearned model on training datasets.
arXiv Detail & Related papers (2024-01-26T23:17:31Z)
- Reinforcement Learning from Passive Data via Latent Intentions [86.4969514480008]
We show that passive data can still be used to learn features that accelerate downstream RL.
Our approach learns from passive data by modeling intentions.
Our experiments demonstrate the ability to learn from many forms of passive data, including cross-embodiment video data and YouTube videos.
arXiv Detail & Related papers (2023-04-10T17:59:05Z)
- Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce the importance-guided stochastic gradient descent (IGSGD) method to train models to perform inference from inputs containing missing values without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
arXiv Detail & Related papers (2021-07-05T12:44:39Z)
- Combining Feature and Instance Attribution to Detect Artifacts [62.63504976810927]
We propose methods to facilitate identification of training data artifacts.
We show that this proposed training-feature attribution approach can be used to uncover artifacts in training data.
We execute a small user study to evaluate whether these methods are useful to NLP researchers in practice.
arXiv Detail & Related papers (2021-07-01T09:26:13Z)
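For contrast with the BM25 baseline sketched above, the snippet below illustrates the gradient-based influence family discussed in the abstract and refined at scale in the "Scalable Influence and Fact Tracing" entry: a TracIn-style gradient dot product between the query (the model's assertion) and each candidate training example. The TinyLM model, token ids, and examples are hypothetical placeholders; production variants typically aggregate over multiple checkpoints, normalize gradients, or apply random projections to stay tractable.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-in for a language model; the influence recipe is what matters here.
class TinyLM(nn.Module):
    def __init__(self, vocab_size=100, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, tokens):
        # Predict the next token from the mean of the context embeddings (toy).
        h = self.embed(tokens).mean(dim=0)
        return self.out(h)

def loss_gradient(model, tokens, target):
    """Flattened gradient of the next-token loss w.r.t. all model parameters."""
    logits = model(tokens)
    loss = F.cross_entropy(logits.unsqueeze(0), target.unsqueeze(0))
    grads = torch.autograd.grad(loss, list(model.parameters()))
    return torch.cat([g.reshape(-1) for g in grads])

def grad_dot_influence(model, query, candidates):
    """TracIn-style score: dot product of query and training-example gradients."""
    q_grad = loss_gradient(model, *query)
    return [torch.dot(q_grad, loss_gradient(model, *cand)).item() for cand in candidates]

torch.manual_seed(0)
model = TinyLM()
# (context tokens, next token) pairs; token ids are arbitrary placeholders.
query = (torch.tensor([5, 17, 3]), torch.tensor(42))      # the traced assertion
candidates = [
    (torch.tensor([5, 17, 9]), torch.tensor(42)),          # supports the same fact
    (torch.tensor([70, 71, 72]), torch.tensor(8)),         # unrelated example
]
scores = grad_dot_influence(model, query, candidates)
ranking = sorted(range(len(candidates)), key=lambda i: scores[i], reverse=True)
print("influence scores:", scores, "ranking:", ranking)
```

One failure mode the abstract highlights is gradient saturation: when the model already assigns high probability to the target, per-example gradients shrink and these dot products become uninformative regardless of how relevant the training example is.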