Language Models As or For Knowledge Bases
- URL: http://arxiv.org/abs/2110.04888v1
- Date: Sun, 10 Oct 2021 20:00:09 GMT
- Title: Language Models As or For Knowledge Bases
- Authors: Simon Razniewski, Andrew Yates, Nora Kassner, Gerhard Weikum
- Abstract summary: We identify strengths and limitations of pre-trained language models (LMs) and explicit knowledge bases (KBs).
We argue that latent LMs are not suitable as a substitute for explicit KBs, but could play a major role for augmenting and curating KBs.
- Score: 30.089955948497405
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Pre-trained language models (LMs) have recently gained attention for their
potential as an alternative to (or proxy for) explicit knowledge bases (KBs).
In this position paper, we examine this hypothesis, identify strengths and
limitations of both LMs and KBs, and discuss the complementary nature of the
two paradigms. In particular, we offer qualitative arguments that latent LMs
are not suitable as a substitute for explicit KBs, but could play a major role
for augmenting and curating KBs.
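To make the LM-as-KB idea concrete, here is a minimal sketch (not from the paper) of cloze-style factual probing with a masked LM; it assumes the Hugging Face `transformers` package and the public `bert-base-uncased` checkpoint.

```python
# Minimal sketch (not from the paper): querying a masked LM as if it were a KB.
# Assumes the Hugging Face `transformers` package and `bert-base-uncased`.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# An explicit KB would store a triple such as (Dante, born-in, Florence);
# the latent LM is instead queried through a natural-language cloze template.
for pred in fill("Dante was born in [MASK]."):
    print(f"{pred['token_str']:<12} p={pred['score']:.3f}")
```

Unlike a KB lookup, the result is a probability distribution over tokens rather than a curated fact with provenance, which is the kind of limitation the position paper weighs against explicit KBs.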
Related papers
- Large Language Models as Reliable Knowledge Bases? [60.25969380388974]
Large Language Models (LLMs) can be viewed as potential knowledge bases (KBs).
This study defines criteria that a reliable LLM-as-KB should meet, focusing on factuality and consistency.
Strategies such as in-context learning (ICL) and fine-tuning prove unsuccessful at making LLMs better KBs.
arXiv Detail & Related papers (2024-07-18T15:20:18Z) - Find The Gap: Knowledge Base Reasoning For Visual Question Answering [19.6585442152102]
We analyze knowledge-based visual question answering, in which, given a question, models need to ground it in the visual modality.
Our results demonstrate the positive impact of empowering task-specific and LLM models with supervised external and visual knowledge retrieval models.
Our findings show that although LLMs are stronger at 1-hop reasoning, they lag behind our fine-tuned NN model at 2-hop reasoning.
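As a toy illustration of the hop distinction (not the paper's model or knowledge base), a 1-hop query reads a single triple while a 2-hop query composes two lookups:

```python
# Toy triple store illustrating 1-hop vs. 2-hop KB reasoning
# (illustrative only; not the paper's knowledge base or model).
triples = {
    ("Mona Lisa", "painted_by"): "Leonardo da Vinci",
    ("Leonardo da Vinci", "born_in"): "Vinci, Italy",
}

def one_hop(entity, relation):
    """Single lookup: (entity, relation) -> object."""
    return triples.get((entity, relation))

def two_hop(entity, rel1, rel2):
    """Compose two lookups: the object of the first hop seeds the second."""
    mid = one_hop(entity, rel1)
    return one_hop(mid, rel2) if mid else None

print(one_hop("Mona Lisa", "painted_by"))             # 1-hop: Leonardo da Vinci
print(two_hop("Mona Lisa", "painted_by", "born_in"))  # 2-hop: Vinci, Italy
```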
arXiv Detail & Related papers (2024-04-16T02:11:46Z) - Translate Meanings, Not Just Words: IdiomKB's Role in Optimizing
Idiomatic Translation with Language Models [57.60487455727155]
Idioms, with their non-compositional nature, pose particular challenges for Transformer-based systems.
Traditional methods, which replace idioms using existing knowledge bases (KBs), often lack scale and context awareness.
We introduce a multilingual idiom KB (IdiomKB) developed using large LMs to address this.
This KB facilitates better translation by smaller models, such as BLOOMZ (7.1B), Alpaca (7B), and InstructGPT (6.7B).
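A rough sketch of how a retrieved idiom gloss could be injected into a translation prompt for a smaller model; the tiny dictionary and the prompt template are illustrative assumptions, not the IdiomKB resource or the paper's prompts.

```python
# Illustrative sketch: augmenting a translation prompt with an idiom gloss.
# The `idiom_kb` dict and the prompt wording are assumptions for demonstration;
# they are not the IdiomKB resource or the paper's actual prompts.
idiom_kb = {
    "break the ice": "to make people feel more comfortable in a new situation",
}

def build_prompt(sentence: str, target_lang: str) -> str:
    # Retrieve glosses for any idioms found in the sentence.
    glosses = [f'"{i}" means: {m}' for i, m in idiom_kb.items() if i in sentence]
    hints = "\n".join(glosses) or "(no idioms found)"
    return (
        f"Translate into {target_lang}, preserving the idiomatic meaning.\n"
        f"Idiom hints:\n{hints}\n"
        f"Sentence: {sentence}"
    )

# The resulting prompt would then be passed to a smaller instruction-tuned LM.
print(build_prompt("She told a joke to break the ice.", "German"))
```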
arXiv Detail & Related papers (2023-08-26T21:38:31Z) - Cross-Lingual Question Answering over Knowledge Base as Reading
Comprehension [61.079852289005025]
Cross-lingual question answering over knowledge base (xKBQA) aims to answer questions in languages different from that of the provided knowledge base.
One of the major challenges facing xKBQA is the high cost of data annotation.
We propose a novel approach for xKBQA in a reading comprehension paradigm.
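One way to picture the reading-comprehension framing (a sketch under assumptions, not the paper's cross-lingual pipeline): verbalize a small KB subgraph into a passage and run an extractive QA model over it. The example below is monolingual and uses a public English SQuAD model from Hugging Face `transformers`, whereas xKBQA targets the cross-lingual setting.

```python
# Sketch: cast KBQA as reading comprehension by verbalizing KB triples into a
# passage and applying an extractive QA model. Monolingual toy example only;
# the paper addresses the cross-lingual (xKBQA) setting.
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

# Hypothetical KB subgraph, verbalized into a short passage.
triples = [
    ("Marie Curie", "was born in", "Warsaw"),
    ("Marie Curie", "worked in", "physics and chemistry"),
]
passage = " ".join(f"{s} {p} {o}." for s, p, o in triples)

answer = qa(question="Where was Marie Curie born?", context=passage)
print(answer["answer"], round(answer["score"], 3))
```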
arXiv Detail & Related papers (2023-02-26T05:52:52Z) - A Review on Language Models as Knowledge Bases [55.035030134703995]
Recently, there has been a surge of interest in the NLP community in the use of pretrained Language Models (LMs) as Knowledge Bases (KBs).
arXiv Detail & Related papers (2022-04-12T18:35:23Z) - Prix-LM: Pretraining for Multilingual Knowledge Base Construction [59.02868906044296]
We propose a unified framework, Prix-LM, for multilingual knowledge construction and completion.
We leverage two types of knowledge, monolingual triples and cross-lingual links, extracted from existing multilingual KBs.
Experiments on standard entity-related tasks, such as link prediction in multiple languages, cross-lingual entity linking and bilingual lexicon induction, demonstrate its effectiveness.
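To illustrate the kind of input such a framework consumes, here is a sketch that linearizes monolingual triples and cross-lingual links into text sequences an LM could be trained on; the tag scheme is an assumption for illustration, not Prix-LM's actual serialization.

```python
# Sketch: serializing KB knowledge into token sequences for LM training.
# The [S]/[P]/[O] tag scheme is an illustrative assumption, not Prix-LM's format.
monolingual_triples = [
    ("Berlin", "capital of", "Germany"),      # English triple
    ("Berlino", "capitale di", "Germania"),   # Italian triple
]
cross_lingual_links = [
    ("Berlin@en", "same as", "Berlino@it"),   # cross-lingual entity link
]

def linearize(subj, pred, obj):
    return f"[S] {subj} [P] {pred} [O] {obj}"

for triple in monolingual_triples + cross_lingual_links:
    print(linearize(*triple))
```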
arXiv Detail & Related papers (2021-10-16T02:08:46Z) - Relational world knowledge representation in contextual language models:
A review [19.176173014629185]
We take a natural language processing perspective on the limitations of knowledge bases (KBs).
We propose a novel taxonomy for relational knowledge representation in contextual language models (LMs).
arXiv Detail & Related papers (2021-04-12T21:50:55Z) - Reasoning Over Virtual Knowledge Bases With Open Predicate Relations [85.19305347984515]
We present the Open Predicate Query Language (OPQL).
OPQL is a method for constructing a virtual Knowledge Base (VKB) trained entirely from text.
We demonstrate that OPQL outperforms prior VKB methods on two different KB reasoning tasks.
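As a loose illustration of the virtual-KB idea (not OPQL's training objective or data structures), relation phrases mined from text can be embedded and matched by similarity instead of being mapped to a fixed predicate vocabulary; the sketch assumes the `sentence-transformers` package and a public MiniLM encoder.

```python
# Loose illustration of a "virtual KB": store (subject, relation-phrase
# embedding, object) entries mined from text and answer queries by similarity.
# Not OPQL itself; assumes the `sentence-transformers` package.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Entries as they might be extracted from free text (illustrative).
entries = [
    ("Marie Curie", "received the award", "Nobel Prize in Physics"),
    ("Marie Curie", "was employed by", "University of Paris"),
]
rel_vecs = encoder.encode([rel for _, rel, _ in entries], normalize_embeddings=True)

def query(subject, relation_phrase):
    """Return the object whose stored relation phrase best matches the query."""
    q = encoder.encode([relation_phrase], normalize_embeddings=True)[0]
    scores = rel_vecs @ q  # cosine similarity (vectors are normalized)
    masked = [s if entries[i][0] == subject else -1.0 for i, s in enumerate(scores)]
    return entries[int(np.argmax(masked))][2]

print(query("Marie Curie", "won the prize"))  # expected: Nobel Prize in Physics
```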
arXiv Detail & Related papers (2021-02-14T01:29:54Z) - Language Models as Knowledge Bases: On Entity Representations, Storage
Capacity, and Paraphrased Queries [35.57443199012129]
Pretrained language models have been suggested as a possible alternative or complement to structured knowledge bases.
Here, we formulate two basic requirements for treating LMs as KBs.
We explore three entity representations that allow LMs to represent millions of entities and present a detailed case study on paraphrased querying of world knowledge in LMs.
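A small sketch of the paraphrased-querying issue (illustrative, not the paper's probing setup): a KB returns the same answer for semantically equivalent queries, whereas a masked LM's top prediction can change with surface form.

```python
# Sketch: checking whether a masked LM answers paraphrased cloze queries
# consistently. Illustrative only; not the paper's evaluation protocol.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

paraphrases = [
    "Dante was born in [MASK].",
    "The birthplace of Dante is [MASK].",
]
answers = [fill(p)[0]["token_str"] for p in paraphrases]  # top-1 per template
print(answers, "consistent" if len(set(answers)) == 1 else "inconsistent")
```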
arXiv Detail & Related papers (2020-08-20T15:39:36Z) - On Expansion and Contraction of DL-Lite Knowledge Bases [9.168045898881292]
We investigate knowledge expansion and contraction for knowledge bases expressed in DL-Lite.
We show that well-known formula-based approaches are not appropriate for DL-Lite expansion and contraction.
We propose a novel formula-based approach that respects our principles and for which evolution can be expressed in DL-Lite.
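For readers unfamiliar with the logic, a tiny illustrative DL-Lite knowledge base and an expansion step (an assumed example, not taken from the paper):

```latex
% Illustrative DL-Lite KB (assumed example, not from the paper).
% TBox: professors teach something; anything taught is a course.
\mathcal{T} = \{\, \mathit{Professor} \sqsubseteq \exists\mathit{teaches},\;
                   \exists\mathit{teaches}^{-} \sqsubseteq \mathit{Course} \,\}
% ABox: one known individual.
\mathcal{A} = \{\, \mathit{Professor}(\mathit{anna}) \,\}
% Expanding with new knowledge N yields new DL-Lite consequences:
N = \{\mathit{teaches}(\mathit{anna}, \mathit{db101})\}
\quad\Rightarrow\quad
(\mathcal{T}, \mathcal{A} \cup N) \models \mathit{Course}(\mathit{db101})
```

Contraction is the harder direction: retracting $\mathit{Course}(\mathit{db101})$ means deciding which supporting assertions or axioms to give up, which is the kind of choice the formula-based approaches above address.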
arXiv Detail & Related papers (2020-01-25T21:58:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.