Symbolic and Language Agnostic Large Language Models
- URL: http://arxiv.org/abs/2308.14199v1
- Date: Sun, 27 Aug 2023 20:24:33 GMT
- Title: Symbolic and Language Agnostic Large Language Models
- Authors: Walid S. Saba
- Abstract summary: We argue that the relative success of large language models (LLMs) is not a reflection on the symbolic vs. subsymbolic debate.
What we suggest here is employing the successful bottom-up strategy in a symbolic setting, producing symbolic, language-agnostic, and ontologically grounded large language models.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We argue that the relative success of large language models (LLMs) is not a reflection on the symbolic vs. subsymbolic debate but a reflection on employing an appropriate strategy of bottom-up reverse engineering of language at scale. However, due to the subsymbolic nature of these models, whatever knowledge these systems acquire about language will always be buried in millions of microfeatures (weights), none of which is meaningful on its own. Moreover, due to their stochastic nature, these models will often fail to capture various inferential aspects that are prevalent in natural language. What we suggest here is employing the successful bottom-up strategy in a symbolic setting, producing symbolic, language-agnostic, and ontologically grounded large language models.
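To make the contrast concrete, here is a minimal, hypothetical sketch (not from the paper) of the two kinds of representation the abstract contrasts: a subsymbolic embedding, whose individual weights are meaningless in isolation, versus a symbolic, ontologically grounded entry, whose every component can be inspected and reasoned over. The names and the ontology format are illustrative assumptions.

```python
import numpy as np

# Subsymbolic: a word's "meaning" is a dense vector of microfeatures.
# No single weight carries any stand-alone meaning.
rng = np.random.default_rng(0)
embedding = rng.standard_normal(768)  # stand-in for one row of an LLM embedding matrix
print(embedding[:5])                  # inspecting individual weights tells us nothing

# Symbolic and ontologically grounded (hypothetical format): each component
# is an explicit, language-agnostic assertion tied to an ontology.
river = {
    "concept": "River",
    "is_a": "BodyOfWater",  # ontological type
    "relations": {"has_part": "Riverbed", "flows_into": "Sea"},
}
# Every assertion can be queried and used in inference directly.
for relation, target in river["relations"].items():
    print(f"River --{relation}--> {target}")
```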
Related papers
- Analyzing The Language of Visual Tokens [48.62180485759458]
We take a natural-language-centric approach to analyzing discrete visual languages.
We show that higher token innovation drives greater entropy and lower compression, with tokens predominantly representing object parts (a toy entropy sketch follows this entry).
We also show that visual languages lack cohesive grammatical structures, leading to higher perplexity and weaker hierarchical organization compared to natural languages.
arXiv Detail & Related papers (2024-11-07T18:59:28Z)
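As a toy illustration of the entropy claim above (not the paper's method), the sketch below computes the Shannon entropy of two token streams: one repetitive, one with high token innovation. The higher-innovation stream has higher entropy and therefore compresses worse.

```python
import math
from collections import Counter

def shannon_entropy(tokens):
    """Shannon entropy (bits per token) of a token stream."""
    counts = Counter(tokens)
    n = len(tokens)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

repetitive = ["a"] * 90 + ["b"] * 10          # low token innovation
innovative = [f"tok{i}" for i in range(100)]  # every token is new

print(shannon_entropy(repetitive))  # ~0.47 bits: highly compressible
print(shannon_entropy(innovative))  # ~6.64 bits: near-incompressible
```

- Reinterpreting 'the Company a Word Keeps': Towards Explainable and Ontologically Grounded Language Models [0.0]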
We argue that the relative success of large language models (LLMs) is not a reflection on the symbolic vs. subsymbolic debate.
We suggest applying the same successful bottom-up strategy used in LLMs, but in a symbolic setting.
arXiv Detail & Related papers (2024-06-06T20:38:35Z)
- Formal Aspects of Language Modeling [74.16212987886013]
Large language models have become one of the most commonly deployed NLP inventions.
These notes are the accompaniment to the theoretical portion of the ETH Zürich course on large language models.
arXiv Detail & Related papers (2023-11-07T20:21:42Z)
- Stochastic LLMs do not Understand Language: Towards Symbolic, Explainable and Ontologically Based LLMs [0.0]
We argue that the relative success of data-driven large language models (LLMs) is not a reflection on the symbolic vs. subsymbolic debate.
We suggest in this paper applying the effective bottom-up strategy in a symbolic setting, resulting in symbolic, explainable, and ontologically grounded language models.
arXiv Detail & Related papers (2023-09-12T02:14:05Z)
- Towards Explainable and Language-Agnostic LLMs: Symbolic Reverse Engineering of Language at Scale [0.0]
Large language models (LLMs) have achieved a milestone that undeniably changed many held beliefs in artificial intelligence (AI).
We argue for a bottom-up reverse engineering of language in a symbolic setting.
arXiv Detail & Related papers (2023-05-30T15:15:40Z)
- Beyond the limitations of any imaginable mechanism: large language models and psycholinguistics [0.0]
Large language models provide a model for language.
They are useful as a practical tool, as an illustrative comparison, and, philosophically, as a basis for recasting the relationship between language and thought.
arXiv Detail & Related papers (2023-02-28T20:49:38Z)
- Transparency Helps Reveal When Language Models Learn Meaning [71.96920839263457]
Our systematic experiments with synthetic data reveal that, with languages where all expressions have context-independent denotations, both autoregressive and masked language models learn to emulate semantic relations between expressions.
Turning to natural language, our experiments with a specific phenomenon -- referential opacity -- add to the growing body of evidence that current language models do not represent natural language semantics well.
arXiv Detail & Related papers (2022-10-14T02:35:19Z)
- Testing the Ability of Language Models to Interpret Figurative Language [69.59943454934799]
Figurative and metaphorical language are commonplace in discourse.
It remains an open question to what extent modern language models can interpret nonliteral phrases.
We introduce Fig-QA, a Winograd-style nonliteral language understanding task (a hypothetical item in this format is sketched after this entry).
arXiv Detail & Related papers (2022-04-26T23:42:22Z)
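To show the task format, here is a hypothetical item in the Winograd-style shape the abstract describes; it is invented for illustration and not drawn from Fig-QA itself.

```python
# A hypothetical Fig-QA-style item (invented, not from the dataset):
# a nonliteral phrase paired with two candidate interpretations, one correct.
item = {
    "phrase": "Her inbox was a bottomless pit.",
    "choices": [
        "Her inbox never seemed to empty.",    # intended nonliteral reading
        "Her inbox was physically very deep.",
    ],
    "label": 0,  # index of the correct interpretation
}

# A model is scored on picking the interpretation the metaphor actually conveys.
predicted = 0  # stand-in for a model's choice
print("correct" if predicted == item["label"] else "incorrect")
```

- Towards Zero-shot Language Modeling [90.80124496312274]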
We construct a neural model that is inductively biased towards learning human languages.
We infer this inductive bias, as a prior distribution over languages, from a sample of typologically diverse training languages.
We harness additional language-specific side information as distant supervision for held-out languages.
arXiv Detail & Related papers (2021-08-06T23:49:18Z)
- Constrained Language Models Yield Few-Shot Semantic Parsers [73.50960967598654]
We explore the use of large pretrained language models as few-shot semantic parsers.
The goal in semantic parsing is to generate a structured meaning representation given a natural language input.
We use language models to paraphrase inputs into a controlled sublanguage resembling English that can be automatically mapped to a target meaning representation (a toy version of this mapping is sketched after this entry).
arXiv Detail & Related papers (2021-04-18T08:13:06Z)
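As a toy version of the idea (the controlled sublanguage, grammar, and meaning representation below are invented for illustration, not the paper's), a canonical utterance can be mapped to a logical form by a small deterministic grammar; the LLM's only job is to paraphrase free-form input into that sublanguage.

```python
import re

# Invented controlled sublanguage: "show <entity> where <field> is <value>".
# Each canonical utterance maps deterministically to a meaning representation.
PATTERN = re.compile(r"show (\w+) where (\w+) is (\w+)")

def canonical_to_mr(utterance: str) -> str:
    """Map a controlled-sublanguage utterance to a toy logical form."""
    m = PATTERN.fullmatch(utterance.strip().lower())
    if m is None:
        raise ValueError("not in the controlled sublanguage")
    entity, field, value = m.groups()
    return f"select({entity}, eq({field}, {value}))"

# In the paper's setup an LLM would paraphrase free-form input such as
# "which flights go to boston?" into the sublanguage; here we hand-write it.
paraphrase = "show flights where destination is boston"
print(canonical_to_mr(paraphrase))  # -> select(flights, eq(destination, boston))
```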
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information (including all content) and is not responsible for any consequences of its use.