Should We Fear Large Language Models? A Structural Analysis of the Human
Reasoning System for Elucidating LLM Capabilities and Risks Through the Lens
of Heidegger's Philosophy
- URL: http://arxiv.org/abs/2403.03288v1
- Date: Tue, 5 Mar 2024 19:40:53 GMT
- Title: Should We Fear Large Language Models? A Structural Analysis of the Human
Reasoning System for Elucidating LLM Capabilities and Risks Through the Lens
of Heidegger's Philosophy
- Authors: Jianqiu Zhang
- Abstract summary: This study investigates the capabilities and risks of Large Language Models (LLMs).
It draws innovative parallels between the statistical patterns of word relationships within LLMs and Martin Heidegger's concepts of "ready-to-hand" and "present-at-hand."
Our findings reveal that while LLMs possess the capability for Direct Explicative Reasoning and Pseudo Rational Reasoning, they fall short in authentic rational reasoning and lack creative reasoning capabilities.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the rapidly evolving field of Large Language Models (LLMs), there is a
critical need to thoroughly analyze their capabilities and risks. Central to
our investigation are two novel elements. The first is the innovative
parallel between the statistical patterns of word relationships within LLMs
and Martin Heidegger's concepts of "ready-to-hand" and "present-at-hand," which
encapsulate the utilitarian and scientific attitudes humans employ in
interacting with the world. This comparison lays the groundwork for positioning
LLMs as the digital counterpart to the Faculty of Verbal Knowledge, shedding
light on their capacity to emulate certain facets of human reasoning. The
second is a structural analysis of human reasoning, viewed through Heidegger's
notion of truth as "unconcealment." This foundational principle enables us to
map out the inputs and outputs of the reasoning system and divide reasoning
into four distinct categories. Respective cognitive faculties are delineated,
allowing us to place LLMs within the broader schema of human reasoning, thus
clarifying their strengths and inherent limitations. Our findings reveal that
while LLMs possess the capability for Direct Explicative Reasoning and Pseudo
Rational Reasoning, they fall short in authentic rational reasoning and lack
creative reasoning capabilities, owing to the current absence of AI models
analogous to faculties such as the Faculty of Judgement. The potential and risks of LLMs when
they are augmented with other AI technologies are also evaluated. The results
indicate that although LLMs have achieved proficiency in some reasoning
abilities, the aspiration to match or exceed human intellectual capabilities
remains unattained. This research not only enriches our comprehension of LLMs but
also propels forward the discourse on AI's potential and its bounds, paving the
way for future explorations into AI's evolving landscape.
Related papers
- Beyond Human Norms: Unveiling Unique Values of Large Language Models through Interdisciplinary Approaches [69.73783026870998]
This work proposes a novel framework, ValueLex, to reconstruct Large Language Models' unique value system from scratch.
Based on the Lexical Hypothesis, ValueLex introduces a generative approach to elicit diverse values from 30+ LLMs.
We identify three core value dimensions, Competence, Character, and Integrity, each with specific subdimensions, revealing that LLMs possess a structured, albeit non-human, value system.
arXiv Detail & Related papers (2024-04-19T09:44:51Z)
- Comparing Rationality Between Large Language Models and Humans: Insights and Open Questions [6.201550639431176]
This paper focuses on the burgeoning prominence of large language models (LLMs).
We underscore the pivotal role of Reinforcement Learning from Human Feedback (RLHF) in augmenting LLMs' rationality and decision-making prowess.
arXiv Detail & Related papers (2024-03-14T18:36:04Z)
- FAC$^2$E: Better Understanding Large Language Model Capabilities by Dissociating Language and Cognition [56.76951887823882]
Large language models (LLMs) are primarily evaluated by overall performance on various text understanding and generation tasks.
We present FAC$^2$E, a framework for Fine-grAined and Cognition-grounded LLMs' Capability Evaluation.
arXiv Detail & Related papers (2024-02-29T21:05:37Z)
- Large Language Models: The Need for Nuance in Current Debates and a Pragmatic Perspective on Understanding [1.3654846342364308]
Large Language Models (LLMs) are unparalleled in their ability to generate grammatically correct, fluent text.
This position paper critically assesses three points recurring in critiques of LLM capacities.
We outline a pragmatic perspective on the issue of 'real' understanding and intentionality in LLMs.
arXiv Detail & Related papers (2023-10-30T15:51:04Z)
- From Heuristic to Analytic: Cognitively Motivated Strategies for Coherent Physical Commonsense Reasoning [66.98861219674039]
Heuristic-Analytic Reasoning (HAR) strategies drastically improve the coherence of rationalizations for model decisions.
Our findings suggest that human-like reasoning strategies can effectively improve the coherence and reliability of PLM reasoning.
arXiv Detail & Related papers (2023-10-24T19:46:04Z)
- Towards LogiGLUE: A Brief Survey and A Benchmark for Analyzing Logical Reasoning Capabilities of Language Models [56.34029644009297]
Large language models (LLMs) have demonstrated the ability to overcome various limitations of formal Knowledge Representation (KR) systems.
LLMs excel most in abductive reasoning, followed by deductive reasoning, while they are least effective at inductive reasoning.
We study single-task training, multi-task training, and "chain-of-thought" knowledge distillation fine-tuning techniques to assess model performance.
arXiv Detail & Related papers (2023-10-02T01:00:50Z)
- Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models [83.63242931107638]
We propose four characteristics of generally intelligent agents.
We argue that active engagement with objects in the real world delivers more robust signals for forming conceptual representations.
We conclude by outlining promising future research directions in the field of artificial general intelligence.
arXiv Detail & Related papers (2023-07-07T13:58:16Z)
- Large Language Models are In-Context Semantic Reasoners rather than Symbolic Reasoners [75.85554779782048]
Large Language Models (LLMs) have excited the natural language and machine learning community over recent years.
Despite numerous successful applications, the underlying mechanism of such in-context capabilities remains unclear.
In this work, we hypothesize that the learned semantics of language tokens do most of the heavy lifting during the reasoning process.
arXiv Detail & Related papers (2023-05-24T07:33:34Z)
- Are LLMs the Master of All Trades? : Exploring Domain-Agnostic Reasoning Skills of LLMs [0.0]
This study aims to investigate the performance of large language models (LLMs) on different reasoning tasks.
My findings indicate that LLMs excel at analogical and moral reasoning, yet struggle to perform as proficiently on spatial reasoning tasks.
arXiv Detail & Related papers (2023-03-22T22:53:44Z)