Large Language Models Struggle to Learn Long-Tail Knowledge
- URL: http://arxiv.org/abs/2211.08411v2
- Date: Thu, 27 Jul 2023 08:01:42 GMT
- Title: Large Language Models Struggle to Learn Long-Tail Knowledge
- Authors: Nikhil Kandpal, Haikang Deng, Adam Roberts, Eric Wallace, Colin Raffel
- Abstract summary: We study the relationship between the knowledge memorized by large language models and the information in pre-training datasets scraped from the web.
In particular, we show that a language model's ability to answer a fact-based question relates to how many documents associated with that question were seen during pre-training.
- Score: 39.01608375863687
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Internet contains a wealth of knowledge -- from the birthdays of
historical figures to tutorials on how to code -- all of which may be learned
by language models. However, while certain pieces of information are ubiquitous
on the web, others appear extremely rarely. In this paper, we study the
relationship between the knowledge memorized by large language models and the
information in pre-training datasets scraped from the web. In particular, we
show that a language model's ability to answer a fact-based question relates to
how many documents associated with that question were seen during pre-training.
We identify these relevant documents by entity linking pre-training datasets
and counting documents that contain the same entities as a given
question-answer pair. Our results demonstrate strong correlational and causal
relationships between accuracy and relevant document count for numerous
question answering datasets (e.g., TriviaQA), pre-training corpora (e.g.,
ROOTS), and model sizes (e.g., 176B parameters). Moreover, while larger models
are better at learning long-tail knowledge, we estimate that today's models
must be scaled by many orders of magnitude to reach competitive QA performance
on questions with little support in the pre-training data. Finally, we show
that retrieval-augmentation can reduce the dependence on relevant pre-training
information, presenting a promising approach for capturing the long-tail.
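The paper's core measurement is a per-question count of relevant pre-training documents, obtained by entity linking the QA pairs and the corpus and counting documents that mention all of a question-answer pair's entities. The sketch below illustrates only that counting step; it is a minimal reconstruction, not the authors' pipeline, and the capitalized-token `link_entities` heuristic is a self-contained stand-in for a real entity linker.

```python
import re
from typing import Iterable, Set


def link_entities(text: str) -> Set[str]:
    # Toy stand-in for a real entity linker: treat capitalized tokens as
    # entity mentions. The paper links entities properly over the corpus;
    # this heuristic only keeps the sketch self-contained.
    return set(re.findall(r"\b[A-Z][a-zA-Z]+\b", text))


def count_relevant_docs(question: str, answer: str, corpus: Iterable[str]) -> int:
    # A document counts as "relevant" to a QA pair when it mentions every
    # entity found in the question or the answer.
    qa_entities = link_entities(question) | link_entities(answer)
    return sum(1 for doc in corpus
               if qa_entities and qa_entities <= link_entities(doc))


if __name__ == "__main__":
    corpus = [
        "George Washington was born in 1732 in Virginia.",
        "Washington, D.C. is the capital of the United States.",
    ]
    print(count_relevant_docs("what year was George Washington born?", "1732", corpus))
    # -> 1: only the first document mentions both "George" and "Washington"
```

In the paper, this relevant-document count is the quantity that correlates with QA accuracy across model sizes and corpora.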
Related papers
- Formality is Favored: Unraveling the Learning Preferences of Large Language Models on Data with Conflicting Knowledge [55.65162959527848]
Large language models have shown excellent performance on many knowledge-intensive tasks.
However, pretraining data tends to contain misleading and even conflicting information.
This study systematically analyzes LLMs' learning preferences for data with conflicting knowledge.
arXiv Detail & Related papers (2024-10-07T06:49:41Z) - Improving Topic Relevance Model by Mix-structured Summarization and LLM-based Data Augmentation [16.170841777591345]
In most social search scenarios such as Dianping, modeling search relevance always faces two challenges.
We first take the query concatenated with the query-based summary, together with the document summary without the query, as the input of the topic relevance model.
Then, we utilize the language understanding and generation abilities of a large language model (LLM) to rewrite and generate queries from the queries and documents in the existing training data.
arXiv Detail & Related papers (2024-04-03T10:05:47Z) - Automatic Question-Answer Generation for Long-Tail Knowledge [65.11554185687258]
We propose an automatic approach to generate specialized QA datasets for tail entities.
We conduct extensive experiments by employing pretrained LLMs on our newly generated long-tail QA datasets.
arXiv Detail & Related papers (2024-03-03T03:06:31Z) - Retrieval-Generation Synergy Augmented Large Language Models [30.53260173572783]
We propose an iterative retrieval-generation collaborative framework.
We conduct experiments on four question answering datasets, including single-hop QA and multi-hop QA tasks.
arXiv Detail & Related papers (2023-10-08T12:50:57Z) - Lost in the Middle: How Language Models Use Long Contexts [88.78803442320246]
We analyze the performance of language models on two tasks that require identifying relevant information in their input contexts.
We find that performance can degrade significantly when changing the position of relevant information.
Our analysis provides a better understanding of how language models use their input context and provides new evaluation protocols for future long-context language models.
arXiv Detail & Related papers (2023-07-06T17:54:11Z) - Towards Complex Document Understanding By Discrete Reasoning [77.91722463958743]
Document Visual Question Answering (VQA) aims to understand visually-rich documents to answer questions in natural language.
We introduce a new Document VQA dataset, named TAT-DQA, which consists of 3,067 document pages and 16,558 question-answer pairs.
We develop a novel model named MHST that takes into account information from multiple modalities, including text, layout, and visual images, to intelligently address different types of questions.
arXiv Detail & Related papers (2022-07-25T01:43:19Z) - KILT: a Benchmark for Knowledge Intensive Language Tasks [102.33046195554886]
We present a benchmark for knowledge-intensive language tasks (KILT).
All tasks in KILT are grounded in the same snapshot of Wikipedia.
We find that a shared dense vector index coupled with a seq2seq model is a strong baseline.
arXiv Detail & Related papers (2020-09-04T15:32:19Z) - REALM: Retrieval-Augmented Language Model Pre-Training [37.3178586179607]
We augment language model pre-training with a latent knowledge retriever, which allows the model to retrieve and attend over documents from a large corpus such as Wikipedia.
For the first time, we show how to pre-train such a knowledge retriever in an unsupervised manner.
We demonstrate the effectiveness of Retrieval-Augmented Language Model pre-training (REALM) by fine-tuning on the challenging task of Open-domain Question Answering (Open-QA); a minimal retrieve-then-read loop in this spirit is sketched after this list.
arXiv Detail & Related papers (2020-02-10T18:40:59Z)
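Several entries above (REALM, Retrieval-Generation Synergy) revolve around retrieval augmentation, the same mechanism the main paper highlights for covering the long tail. Below is a minimal retrieve-then-read sketch of that idea under stated assumptions: `embed` and `generate` are caller-supplied placeholders, and nothing here reproduces any specific paper's training procedure.

```python
import math
from typing import Callable, List, Sequence


def cosine(a: Sequence[float], b: Sequence[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0


def retrieve(query_vec: Sequence[float], doc_vecs: List[Sequence[float]],
             docs: List[str], k: int = 3) -> List[str]:
    # Rank documents by embedding similarity to the query and keep the top k.
    ranked = sorted(range(len(docs)),
                    key=lambda i: cosine(query_vec, doc_vecs[i]), reverse=True)
    return [docs[i] for i in ranked[:k]]


def answer_with_retrieval(question: str, docs: List[str],
                          embed: Callable[[str], Sequence[float]],
                          generate: Callable[[str], str],
                          rounds: int = 2) -> str:
    # Retrieve evidence, generate a draft answer conditioned on it, then fold
    # the draft back into the retrieval query for the next round (an
    # illustrative take on iterative retrieval-generation).
    doc_vecs = [embed(d) for d in docs]
    query, answer = question, ""
    for _ in range(rounds):
        evidence = retrieve(embed(query), doc_vecs, docs)
        prompt = "\n".join(evidence) + f"\nQuestion: {question}\nAnswer:"
        answer = generate(prompt)
        query = question + " " + answer
    return answer
```

The point of the loop is that the answer no longer has to be memorized from pre-training: evidence retrieved at inference time supplies the long-tail facts the model would otherwise need many relevant pre-training documents to learn.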
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.