Talking About Large Language Models
- URL: http://arxiv.org/abs/2212.03551v1
- Date: Wed, 7 Dec 2022 10:01:44 GMT
- Title: Talking About Large Language Models
- Authors: Murray Shanahan
- Abstract summary: The more adept large language models become, the more vulnerable we become to anthropomorphism.
This paper advocates the practice of repeatedly stepping back to remind ourselves of how LLMs, and the systems of which they form a part, actually work.
The hope is that increased scientific precision will encourage more philosophical nuance in the discourse around artificial intelligence.
- Score: 7.005266019853958
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Thanks to rapid progress in artificial intelligence, we have entered an era
when technology and philosophy intersect in interesting ways. Sitting squarely
at the centre of this intersection are large language models (LLMs). The more
adept LLMs become at mimicking human language, the more vulnerable we become to
anthropomorphism, to seeing the systems in which they are embedded as more
human-like than they really are. This trend is amplified by the natural
tendency to use philosophically loaded terms, such as "knows", "believes", and
"thinks", when describing these systems. To mitigate this trend, this paper
advocates the practice of repeatedly stepping back to remind ourselves of how
LLMs, and the systems of which they form a part, actually work. The hope is
that increased scientific precision will encourage more philosophical nuance in
the discourse around artificial intelligence, both within the field and in the
public sphere.
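To make that reminder concrete: at its core, an LLM does one thing, map a sequence of tokens to a probability distribution over the next token. The sketch below is an illustration only, assuming the Hugging Face transformers library and the public gpt2 checkpoint (neither of which this paper depends on), showing that this mapping is the model's entire interface.

```python
# Minimal sketch: an LLM maps a token sequence to a distribution over
# the next token. Assumes: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The first person to walk on the Moon was"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # logits has shape (batch=1, sequence_length, vocab_size)
    logits = model(**inputs).logits

# The distribution over the next token is the model's entire output;
# talk of what the model "knows" or "believes" is shorthand for the
# shape of this distribution.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx)!r}: p = {p.item():.3f}")
```

However fluent the text produced by sampling from this distribution repeatedly, a claim like "the model knows who walked on the Moon" cashes out, mechanically, as nothing more than the distribution concentrating probability on the right continuation.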
Related papers
- Should We Fear Large Language Models? A Structural Analysis of the Human Reasoning System for Elucidating LLM Capabilities and Risks Through the Lens of Heidegger's Philosophy [0.0]
This study investigates the capabilities and risks of Large Language Models (LLMs).
It draws innovative parallels between the statistical patterns of word relationships within LLMs and Martin Heidegger's concepts of "ready-to-hand" and "present-at-hand".
Our findings reveal that while LLMs possess the capability for Direct Explicative Reasoning and Pseudo Rational Reasoning, they fall short in authentic rational reasoning and lack creative reasoning capabilities.
arXiv Detail & Related papers (2024-03-05T19:40:53Z)
- LLMs and the Human Condition [0.0]
The model integrates three established theories of human decision-making from philosophy, sociology, and computer science.
It describes what are commonly thought of as "reactive systems", the position taken by many philosophers and indeed many contemporary AI researchers.
arXiv Detail & Related papers (2024-02-13T12:04:43Z)
- AI for Mathematics: A Cognitive Science Perspective [86.02346372284292]
Mathematics is one of the most powerful conceptual systems developed and used by the human species.
Rapid progress in AI, particularly propelled by advances in large language models (LLMs), has sparked renewed, widespread interest in building such systems.
arXiv Detail & Related papers (2023-10-19T02:00:31Z)
- Large Language Models for Scientific Synthesis, Inference and Explanation [56.41963802804953]
We show how large language models can perform scientific synthesis, inference, and explanation.
We show that the large language model can augment the "knowledge" captured by a conventional machine learning system by synthesizing from the scientific literature.
This approach has the further advantage that the large language model can explain the machine learning system's predictions.
arXiv Detail & Related papers (2023-10-12T02:17:59Z)
- Large Language Models are In-Context Semantic Reasoners rather than Symbolic Reasoners [75.85554779782048]
Large Language Models (LLMs) have excited the natural language processing and machine learning communities over recent years.
Despite numerous successful applications, the underlying mechanism of such in-context capabilities remains unclear.
In this work, we hypothesize that the learned semantics of language tokens do the most heavy lifting during the reasoning process.
arXiv Detail & Related papers (2023-05-24T07:33:34Z)
- Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z)
- Understanding Natural Language Understanding Systems. A Critical Analysis [91.81211519327161]
The development of machines that "talk like us", also known as Natural Language Understanding (NLU) systems, is the Holy Grail of Artificial Intelligence (AI).
But never has the trust that we can build "talking machines" been stronger than that engendered by the last generation of NLU systems.
Are we at the dawn of a new era, in which the Grail is finally closer to us?
arXiv Detail & Related papers (2023-03-01T08:32:55Z)
- Emergence of Machine Language: Towards Symbolic Intelligence with Neural Networks [73.94290462239061]
We propose to combine symbolism and connectionism principles by using neural networks to derive a discrete representation.
By designing an interactive environment and task, we demonstrate that machines can generate a spontaneous, flexible, and semantic language.
arXiv Detail & Related papers (2022-01-14T14:54:58Z)
- Word meaning in minds and machines [18.528929583956725]
We argue that contemporary NLP systems are fairly successful models of human word similarity, but they fall short in many other respects.
Current models are too strongly linked to the text-based patterns in large corpora, and too weakly linked to the desires, goals, and beliefs that people express through words.
We discuss more promising approaches to grounding NLP systems and argue that they will be more successful with a more human-like, conceptual basis for word meaning.
arXiv Detail & Related papers (2020-08-04T18:45:49Z)