From task structures to world models: What do LLMs know?
- URL: http://arxiv.org/abs/2310.04276v1
- Date: Fri, 6 Oct 2023 14:21:59 GMT
- Title: From task structures to world models: What do LLMs know?
- Authors: Ilker Yildirim, L.A. Paul
- Abstract summary: In what sense does a large language model have knowledge?
We answer by granting LLMs "instrumental knowledge": knowledge defined by a certain set of abilities.
We then ask how such knowledge is related to the more ordinary, "worldly" knowledge exhibited by human agents, and explore this in terms of the degree to which instrumental knowledge can be said to incorporate the structured world models of cognitive science.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In what sense does a large language model have knowledge? The answer to this question extends beyond the capabilities of a particular AI system, and challenges our assumptions about the nature of knowledge and intelligence. We answer by granting LLMs "instrumental knowledge": knowledge defined by a certain set of abilities. We then ask how such knowledge is related to the more ordinary, "worldly" knowledge exhibited by human agents, and explore this in terms of the degree to which instrumental knowledge can be said to incorporate the structured world models of cognitive science. We discuss ways LLMs could recover degrees of worldly knowledge, and suggest that such recovery will be governed by an implicit, resource-rational tradeoff between world models and task demands.
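The "resource-rational tradeoff" invoked above is not formalized in the abstract. As a rough guide, resource-rational analyses in cognitive science typically cast such tradeoffs as choosing the representation that maximizes expected task utility net of computational cost; the display below is a minimal illustrative sketch in that standard form, not notation from the paper (the symbols $\mathcal{M}$, $U_{\mathrm{task}}$, and $C$ are assumptions of this sketch):

\[
m^{*} \;=\; \arg\max_{m \in \mathcal{M}} \left( \mathbb{E}\!\left[ U_{\mathrm{task}}(m) \right] - C(m) \right)
\]

where $\mathcal{M}$ is a space of candidate world models, $U_{\mathrm{task}}(m)$ is the expected utility of model $m$ for the task demands at hand, and $C(m)$ is the cost of constructing and querying $m$. On this reading, an LLM would recover worldly knowledge only to the degree that richer world models pay for their computational cost under the tasks it performs.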
Related papers
- Large Language Models as a Tool for Mining Object Knowledge [0.42970700836450487]
Large language models fall short as trustworthy intelligent systems because the basis for their answers is opaque and they tend to confabulate facts when questioned.
This paper investigates explicit knowledge about common artifacts in the everyday world.
We produce a repository of data on the parts and materials of about 2,300 objects and their subtypes.
This contribution to knowledge mining should prove useful to AI research on reasoning about object structure and composition.
arXiv Detail & Related papers (2024-10-16T18:46:02Z)
- Knowledge Mechanisms in Large Language Models: A Survey and Perspective [88.51320482620679]
This paper reviews knowledge mechanism analysis from a novel taxonomy including knowledge utilization and evolution.
We discuss what knowledge LLMs have learned, the reasons why parametric knowledge is fragile, and the hypothesized "dark knowledge" that will be challenging to address.
arXiv Detail & Related papers (2024-07-22T06:15:59Z)
- Chain-of-Knowledge: Integrating Knowledge Reasoning into Large Language Models by Learning from Knowledge Graphs [55.317267269115845]
Chain-of-Knowledge (CoK) is a comprehensive framework for knowledge reasoning.
CoK includes methodologies for both dataset construction and model learning.
We conduct extensive experiments with KnowReason, a dataset constructed under this framework.
arXiv Detail & Related papers (2024-06-30T10:49:32Z)
- What's in an embedding? Would a rose by any embedding smell as sweet? [0.0]
Large Language Models (LLMs) are often criticized for lacking true "understanding" and the ability to "reason" with their knowledge.
We suggest that LLMs do develop a kind of empirical "understanding" that is "geometry"-like, which seems adequate for a range of applications in NLP.
To overcome the limitations of this purely geometric understanding, we suggest that LLMs be integrated with an "algebraic" representation of knowledge that includes symbolic AI elements.
arXiv Detail & Related papers (2024-06-11T01:10:40Z)
- KnowTuning: Knowledge-aware Fine-tuning for Large Language Models [83.5849717262019]
We propose a knowledge-aware fine-tuning (KnowTuning) method to improve fine-grained and coarse-grained knowledge awareness of LLMs.
Under fine-grained facts evaluation, KnowTuning generates more facts with a lower factual error rate.
arXiv Detail & Related papers (2024-02-17T02:54:32Z)
- Large Knowledge Model: Perspectives and Challenges [37.42721596964844]
Large Language Models (LLMs) epitomize the pre-training of extensive, sequence-based world knowledge into neural networks.
This article explores large models through the lens of "knowledge".
Considering the intricate nature of human knowledge, we advocate for the creation of Large Knowledge Models (LKM).
arXiv Detail & Related papers (2023-12-05T12:07:30Z)
- RECALL: A Benchmark for LLMs Robustness against External Counterfactual Knowledge [69.79676144482792]
This study aims to evaluate the ability of LLMs to distinguish reliable information from counterfactual external knowledge.
Our benchmark consists of two tasks, Question Answering and Text Generation; for each task, we provide models with a context containing counterfactual information (a minimal illustrative sketch of such a probe appears after this list).
arXiv Detail & Related papers (2023-11-14T13:24:19Z)
- Do Large Language Models Know What They Don't Know? [74.65014158544011]
Large language models (LLMs) have a wealth of knowledge that allows them to excel in various Natural Language Processing (NLP) tasks.
Despite their vast knowledge, LLMs are still limited by the amount of information they can accommodate and comprehend.
This study aims to evaluate LLMs' self-knowledge by assessing their ability to identify unanswerable or unknowable questions.
arXiv Detail & Related papers (2023-05-29T15:30:13Z)
- Towards a Universal Continuous Knowledge Base [49.95342223987143]
We propose a method for building a continuous knowledge base that can store knowledge imported from multiple neural networks.
We import the knowledge from multiple models into the knowledge base, from which the fused knowledge is exported back to a single model.
Experiments on text classification show promising results.
arXiv Detail & Related papers (2020-12-25T12:27:44Z)
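Several of the papers above probe LLM knowledge with counterfactual or unanswerable inputs; the RECALL entry in particular evaluates models on contexts seeded with counterfactual information. The following is a minimal, hypothetical sketch of such a probe, not RECALL's actual harness or data: `query_model` is a stand-in for a real LLM call, and the example facts are invented for illustration.

```python
# Hypothetical sketch of a RECALL-style counterfactual-robustness probe.
# `query_model` and the example facts are stand-ins invented for illustration;
# the real benchmark covers Question Answering and Text Generation tasks with
# curated counterfactual contexts.

def query_model(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a canned answer for the demo."""
    return "The Eiffel Tower is located in Paris."

def probe(context: str, question: str,
          true_answer: str, counterfactual_answer: str) -> str:
    """Ask a question against a counterfactual context and classify the answer."""
    prompt = f"Context: {context}\nQuestion: {question}\nAnswer:"
    output = query_model(prompt).lower()
    if counterfactual_answer.lower() in output:
        return "followed_counterfactual"  # model echoed the injected falsehood
    if true_answer.lower() in output:
        return "resisted_counterfactual"  # model used its parametric knowledge
    return "other"

if __name__ == "__main__":
    verdict = probe(
        context="The Eiffel Tower is located in Rome.",  # injected counterfactual
        question="Where is the Eiffel Tower located?",
        true_answer="Paris",
        counterfactual_answer="Rome",
    )
    print(verdict)  # -> "resisted_counterfactual" with the canned answer above
```

Aggregating such verdicts over many items would give the kind of robustness statistic a benchmark like RECALL reports; the single-item classifier here is only meant to make the task setup concrete.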
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.