Do PLMs Know and Understand Ontological Knowledge?
- URL: http://arxiv.org/abs/2309.05936v1
- Date: Tue, 12 Sep 2023 03:20:50 GMT
- Title: Do PLMs Know and Understand Ontological Knowledge?
- Authors: Weiqi Wu, Chengyue Jiang, Yong Jiang, Pengjun Xie, Kewei Tu
- Abstract summary: Ontological knowledge comprises classes and properties and their relationships.
It is important to explore whether Pretrained Language Models (PLMs) know and understand such knowledge.
Our results show that PLMs can memorize certain ontological knowledge and utilize implicit knowledge in reasoning.
- Score: 72.48752398867651
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ontological knowledge, which comprises classes and properties and their
relationships, is integral to world knowledge. It is important to explore
whether Pretrained Language Models (PLMs) know and understand such knowledge.
However, existing PLM-probing studies focus mainly on factual knowledge,
lacking a systematic probing of ontological knowledge. In this paper, we focus
on probing whether PLMs store ontological knowledge and have a semantic
understanding of the knowledge rather than rote memorization of the surface
form. To probe whether PLMs know ontological knowledge, we investigate how well
PLMs memorize: (1) types of entities; (2) hierarchical relationships among
classes and properties, e.g., Person is a subclass of Animal and Member of
Sports Team is a subproperty of Member of; (3) domain and range constraints of
properties, e.g., the subject of Member of Sports Team should be a Person and
the object should be a Sports Team. To further probe whether PLMs truly
understand ontological knowledge beyond memorization, we comprehensively study
whether they can reliably perform logical reasoning with given knowledge
according to ontological entailment rules. Our probing results show that PLMs
can memorize certain ontological knowledge and utilize implicit knowledge in
reasoning. However, both the memorizing and reasoning performances are less
than perfect, indicating incomplete knowledge and understanding.
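
To make the probing setup concrete, the following minimal Python sketch (not the authors' code; all prompts, property names, and premises are illustrative) shows (1) cloze-style prompts of the kind that could test memorization of subclass and domain/range facts, and (2) two RDFS-style entailment rules (rdfs9 subclass inheritance and rdfs2 domain inference) applied to given premises to enumerate the conclusions a model that truly understands the ontology should accept.

```python
# Minimal sketch of the two probing settings described above; all prompts,
# property names, and premises are illustrative, not the paper's dataset.

# --- (1) Memorization probes as cloze prompts --------------------------------
# Each template has one [MASK] slot; a masked PLM would be scored on how
# highly it ranks the gold filler.
memorization_probes = [
    ("Person is a subclass of [MASK].", "Animal"),                               # class hierarchy
    ("The subject of member of sports team should be a [MASK].", "person"),      # domain
    ("The object of member of sports team should be a [MASK].", "sports team"),  # range
]
for prompt, gold in memorization_probes:
    print(f"{prompt}  (gold filler: {gold})")

# --- (2) Reasoning probes via RDFS-style entailment rules ---------------------
def rdfs9(triples):
    """Subclass inheritance: (x, type, C1) + (C1, subClassOf, C2) => (x, type, C2)."""
    return {(x, "type", c2)
            for (x, p, c1) in triples if p == "type"
            for (c, q, c2) in triples if q == "subClassOf" and c == c1}

def rdfs2(triples):
    """Domain constraint: (P, domain, C) + (x, P, y) => (x, type, C)."""
    return {(x, "type", c)
            for (p, r, c) in triples if r == "domain"
            for (x, q, y) in triples if q == p}

premises = {
    ("Alice", "type", "Person"),
    ("Person", "subClassOf", "Animal"),
    ("memberOfSportsTeam", "domain", "Person"),
    ("Bob", "memberOfSportsTeam", "RedTeam"),
}
# Conclusions a PLM that understands the given ontology should endorse:
print(sorted(rdfs9(premises) | rdfs2(premises)))
# [('Alice', 'type', 'Animal'), ('Bob', 'type', 'Person')]
```

In the paper's reasoning probes the premises are given to the model, and it is judged on whether it draws the rule-entailed conclusion; the rule functions above simply enumerate what those conclusions are for a toy set of premises.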
Related papers
- Large Language Models as a Tool for Mining Object Knowledge [0.42970700836450487]
Large language models fall short as trustworthy intelligent systems because the basis for their answers is opaque and they tend to confabulate facts when questioned.
This paper investigates explicit knowledge about common artifacts in the everyday world.
We produce a repository of data on the parts and materials of about 2,300 objects and their subtypes.
This contribution to knowledge mining should prove useful to AI research on reasoning about object structure and composition.
arXiv Detail & Related papers (2024-10-16T18:46:02Z)
- Defining Knowledge: Bridging Epistemology and Large Language Models [37.41866724160848]
We review standard definitions of knowledge in NLP and formalize interpretations applicable to large language models (LLMs).
We conduct a survey of 100 professional philosophers and computer scientists to compare their preferences in knowledge definitions and their views on whether LLMs can really be said to know.
arXiv Detail & Related papers (2024-10-03T14:01:01Z)
- Knowledge Mechanisms in Large Language Models: A Survey and Perspective [88.51320482620679]
This paper reviews knowledge mechanism analysis through a novel taxonomy covering knowledge utilization and evolution.
We discuss what knowledge LLMs have learned, the reasons for the fragility of parametric knowledge, and the potential dark knowledge (hypothesis) that will be challenging to address.
arXiv Detail & Related papers (2024-07-22T06:15:59Z)
- Chain-of-Knowledge: Integrating Knowledge Reasoning into Large Language Models by Learning from Knowledge Graphs [55.317267269115845]
Chain-of-Knowledge (CoK) is a comprehensive framework for knowledge reasoning.
CoK includes methodologies for both dataset construction and model learning.
We conduct extensive experiments with KnowReason.
arXiv Detail & Related papers (2024-06-30T10:49:32Z)
- Untangle the KNOT: Interweaving Conflicting Knowledge and Reasoning Skills in Large Language Models [51.72963030032491]
Knowledge documents for large language models (LLMs) may conflict with the memory of LLMs due to outdated or incorrect knowledge.
We construct a new dataset, dubbed KNOT, for knowledge conflict resolution examination in the form of question answering.
arXiv Detail & Related papers (2024-04-04T16:40:11Z)
- Large Knowledge Model: Perspectives and Challenges [37.42721596964844]
Large Language Models (LLMs) epitomize the pre-training of extensive, sequence-based world knowledge into neural networks.
This article explores large models through the lens of "knowledge".
Considering the intricate nature of human knowledge, we advocate for the creation of Large Knowledge Models (LKM).
arXiv Detail & Related papers (2023-12-05T12:07:30Z)
- From task structures to world models: What do LLMs know? [0.0]
In what sense does a large language model have knowledge?
We answer by granting LLMs "instrumental knowledge": knowledge defined by a certain set of abilities.
We then ask how such knowledge is related to the more ordinary, "worldly" knowledge exhibited by human agents, and explore this in terms of the degree to which instrumental knowledge can be said to incorporate the structured world models of cognitive science.
arXiv Detail & Related papers (2023-10-06T14:21:59Z)
- Do Large Language Models Know What They Don't Know? [74.65014158544011]
Large language models (LLMs) have a wealth of knowledge that allows them to excel in various Natural Language Processing (NLP) tasks.
Despite their vast knowledge, LLMs are still limited by the amount of information they can accommodate and comprehend.
This study aims to evaluate LLMs' self-knowledge by assessing their ability to identify unanswerable or unknowable questions.
arXiv Detail & Related papers (2023-05-29T15:30:13Z)
- COPEN: Probing Conceptual Knowledge in Pre-trained Language Models [60.10147136876669]
Conceptual knowledge is fundamental to human cognition and knowledge bases.
Existing knowledge-probing work focuses only on the factual knowledge of pre-trained language models (PLMs) and ignores conceptual knowledge.
We design three tasks to probe whether PLMs organize entities by conceptual similarities, learn conceptual properties, and conceptualize entities in contexts.
For these tasks, we collect and annotate 24k data instances covering 393 concepts, forming COPEN, a COnceptual knowledge Probing bENchmark.
arXiv Detail & Related papers (2022-11-08T08:18:06Z)
- KMIR: A Benchmark for Evaluating Knowledge Memorization, Identification and Reasoning Abilities of Language Models [28.82149012250609]
We propose a benchmark named the Knowledge Memorization, Identification, and Reasoning test (KMIR).
KMIR covers 3 types of knowledge, including general knowledge, domain-specific knowledge, and commonsense, and provides 184,348 well-designed questions.
Preliminary experiments with various representative pre-trained language models on KMIR reveal many interesting phenomena.
arXiv Detail & Related papers (2022-02-28T03:52:57Z)