Generative linguistics contribution to artificial intelligence: Where this contribution lies?
- URL: http://arxiv.org/abs/2410.20221v3
- Date: Sat, 02 Nov 2024 18:46:54 GMT
- Title: Generative linguistics contribution to artificial intelligence: Where this contribution lies?
- Authors: Mohammed Q. Shormani
- Abstract summary: The article walks the researcher/reader through the scientific theorems and rationales in AI that originate in GL.
It concludes that, despite GL's substantial contribution to AI, points of divergence remain, including the nature and type of language input.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This article aims to characterize Generative Linguistics' (GL) contribution to artificial intelligence (AI), alluding to the debate among linguists and AI scientists on whether linguistics belongs to the humanities or the sciences. In this article, I try not to be biased as a linguist, studying the phenomenon from an independent scientific perspective. The article walks the researcher/reader through the scientific theorems and rationales in AI that originate in GL, specifically the Chomsky School. It thus provides evidence from syntax, semantics, the language faculty, Universal Grammar, the computational system of human language, language acquisition, the human brain, programming languages (e.g., Python), Large Language Models, and unbiased AI scientists that this contribution is substantial and cannot be denied. It concludes that, despite GL's substantial contribution to AI, points of divergence remain, including the nature and type of language input.
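To make the abstract's reference to the "computational system of human language" and to programming languages such as Python concrete, here is a minimal, hypothetical Python sketch of a Chomskyan phrase-structure grammar with recursive rewrite rules. The rule set, lexicon, and function names are illustrative assumptions, not taken from the article.

```python
import random

# Toy phrase-structure grammar in the generative tradition: rewrite rules
# of the form X -> Y Z. Rules and lexicon are invented for illustration.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"], ["Det", "N", "PP"]],
    "VP":  [["V", "NP"]],
    "PP":  [["P", "NP"]],
    "Det": [["the"], ["a"]],
    "N":   [["linguist"], ["model"], ["grammar"]],
    "V":   [["studies"], ["generates"]],
    "P":   [["with"], ["near"]],
}

def generate(symbol: str = "S") -> list[str]:
    """Recursively expand a symbol into words. The recursion (NP -> ... PP,
    PP -> P NP) mirrors the unbounded embedding that generative grammars
    attribute to the human language faculty."""
    if symbol not in GRAMMAR:  # terminal: an actual word
        return [symbol]
    expansion = random.choice(GRAMMAR[symbol])
    words: list[str] = []
    for sym in expansion:
        words.extend(generate(sym))
    return words

if __name__ == "__main__":
    print(" ".join(generate()))  # e.g. "the linguist studies a model"
```

This kind of formal, rule-driven generation is the historical bridge between GL and computer science: the same grammar formalisms (via the Chomsky hierarchy and BNF) underlie parsers and the syntax of programming languages.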
Related papers
- "On the goals of linguistic theory": Revisiting Chomskyan theories in the era of AI [0.20923359361008084]
Theoretical linguistics seeks to explain what human language is and why it is the way it is.
Artificial intelligence models such as large language models are proving to have impressive linguistic capabilities.
Many are questioning what role, if any, such models should play in helping theoretical linguistics reach its ultimate research goals.
arXiv Detail & Related papers (2024-11-15T19:09:22Z) - Generative AI, Pragmatics, and Authenticity in Second Language Learning [0.0]
There are obvious benefits to integrating generative AI (artificial intelligence) into language learning and teaching.
However, because of how AI systems understand human language, they lack the lived experience needed to use language with the same social awareness as humans.
They also have built-in linguistic and cultural biases from their training data, which is mostly in English and predominantly from Western sources.
arXiv Detail & Related papers (2024-10-18T11:58:03Z) - Modelling Language [0.0]
This paper argues that large language models have a valuable scientific role to play in serving as scientific models of a language.
It draws upon recent work in philosophy of science to show how large language models could serve as scientific models.
arXiv Detail & Related papers (2024-04-15T08:40:01Z) - Formal Aspects of Language Modeling [74.16212987886013]
Large language models have become one of the most commonly deployed NLP inventions.
These notes accompany the theoretical portion of the ETH Zürich course on large language models.
arXiv Detail & Related papers (2023-11-07T20:21:42Z) - AI for Mathematics: A Cognitive Science Perspective [86.02346372284292]
Mathematics is one of the most powerful conceptual systems developed and used by the human species.
Rapid progress in AI, particularly propelled by advances in large language models (LLMs), has sparked renewed, widespread interest in building such systems.
arXiv Detail & Related papers (2023-10-19T02:00:31Z) - Large Language Models for Scientific Synthesis, Inference and Explanation [56.41963802804953]
We show how large language models can perform scientific synthesis, inference, and explanation.
We show that the large language model can augment this "knowledge" by synthesizing from the scientific literature.
This approach has the further advantage that the large language model can explain the machine learning system's predictions.
arXiv Detail & Related papers (2023-10-12T02:17:59Z) - Why Linguistics Will Thrive in the 21st Century: A Reply to Piantadosi (2023) [5.2424255020469595]
We present a critical assessment of Piantadosi's claim that "Modern language models refute Chomsky's approach to language".
Despite the impressive performance and utility of large language models, humans achieve their capacity for language after exposure to several orders of magnitude less data.
We conclude that generative linguistics as a scientific discipline will remain indispensable throughout the 21st century and beyond.
arXiv Detail & Related papers (2023-08-06T23:41:14Z) - From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought [124.40905824051079]
We propose rational meaning construction, a computational framework for language-informed thinking.
We frame linguistic meaning as a context-sensitive mapping from natural language into a probabilistic language of thought.
We show that LLMs can generate context-sensitive translations that capture pragmatically appropriate linguistic meanings.
We extend our framework to integrate cognitively-motivated symbolic modules.
arXiv Detail & Related papers (2023-06-22T05:14:00Z) - Understanding Natural Language Understanding Systems. A Critical Analysis [91.81211519327161]
The development of machines that «talk like us», also known as Natural Language Understanding (NLU) systems, is the Holy Grail of Artificial Intelligence (AI).
But never has the trust that we can build «talking machines» been stronger than the one engendered by the last generation of NLU systems.
Are we at the dawn of a new era, in which the Grail is finally closer to us?
arXiv Detail & Related papers (2023-03-01T08:32:55Z) - Dissociating language and thought in large language models [52.39241645471213]
Among all models to date, Large Language Models (LLMs) have come closest to mastering human language.
We ground the distinction between formal and functional linguistic competence in human neuroscience, which has shown that the two rely on different neural mechanisms.
Although LLMs are surprisingly good at formal competence, their performance on functional competence tasks remains spotty.
arXiv Detail & Related papers (2023-01-16T22:41:19Z) - Human Heuristics for AI-Generated Language Are Flawed [8.465228064780744]
We study how humans judge whether verbal self-presentations, one of the most personal and consequential forms of language, were generated by AI.
We experimentally demonstrate that the wording heuristics people rely on make human judgment of AI-generated language predictable and manipulable.
We discuss solutions, such as AI accents, to reduce the deceptive potential of language generated by AI.
arXiv Detail & Related papers (2022-06-15T03:18:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.