Voluminous yet Vacuous? Semantic Capital in an Age of Large Language
Models
- URL: http://arxiv.org/abs/2306.01773v1
- Date: Mon, 29 May 2023 09:26:28 GMT
- Title: Voluminous yet Vacuous? Semantic Capital in an Age of Large Language
Models
- Authors: Luca Nannini
- Abstract summary: Large Language Models (LLMs) have emerged as transformative forces in the realm of natural language processing, wielding the power to generate human-like text.
This paper explores the evolution, capabilities, and limitations of these models, while highlighting ethical concerns they raise.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Large Language Models (LLMs) have emerged as transformative forces in the
realm of natural language processing, wielding the power to generate human-like
text. However, despite their potential for content creation, they carry the
risk of eroding our Semantic Capital (SC) - the collective knowledge within our
digital ecosystem - thereby posing diverse social epistemic challenges. This
paper explores the evolution, capabilities, and limitations of these models,
while highlighting ethical concerns they raise. The study contribution is
two-fold: first, it is acknowledged that, withstanding the challenges of
tracking and controlling LLM impacts, it is necessary to reconsider our
interaction with these AI technologies and the narratives that form public
perception of them. It is argued that before achieving this goal, it is
essential to confront a potential deontological tipping point in an increasing
AI-driven infosphere. This goes beyond just adhering to AI ethical norms or
regulations and requires understanding the spectrum of social epistemic risks
LLMs might bring to our collective SC. Secondly, building on Luciano Floridi's
taxonomy for SC risks, those are mapped within the functionality and
constraints of LLMs. By this outlook, we aim to protect and enrich our SC while
fostering a collaborative environment between humans and AI that augments human
intelligence rather than replacing it.
Related papers
- Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM Systems have already achieved human-level or even super-human persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z) - Converging Paradigms: The Synergy of Symbolic and Connectionist AI in LLM-Empowered Autonomous Agents [55.63497537202751]
Article explores the convergence of connectionist and symbolic artificial intelligence (AI)
Traditionally, connectionist AI focuses on neural networks, while symbolic AI emphasizes symbolic representation and logic.
Recent advancements in large language models (LLMs) highlight the potential of connectionist architectures in handling human language as a form of symbols.
arXiv Detail & Related papers (2024-07-11T14:00:53Z) - Addressing Social Misattributions of Large Language Models: An HCXAI-based Approach [45.74830585715129]
We suggest extending the Social Transparency (ST) framework to address the risks of social misattributions in Large Language Models (LLMs)
LLMs may lead to mismatches between designers' intentions and users' perceptions of social attributes, risking to promote emotional manipulation and dangerous behaviors.
We propose enhancing the ST framework with a fifth 'W-question' to clarify the specific social attributions assigned to LLMs by its designers and users.
arXiv Detail & Related papers (2024-03-26T17:02:42Z) - Should We Fear Large Language Models? A Structural Analysis of the Human
Reasoning System for Elucidating LLM Capabilities and Risks Through the Lens
of Heidegger's Philosophy [0.0]
This study investigates the capabilities and risks of Large Language Models (LLMs)
It uses the innovative parallels between the statistical patterns of word relationships within LLMs and Martin Heidegger's concepts of "ready-to-hand" and "present-at-hand"
Our findings reveal that while LLMs possess the capability for Direct Explicative Reasoning and Pseudo Rational Reasoning, they fall short in authentic rational reasoning and have no creative reasoning capabilities.
arXiv Detail & Related papers (2024-03-05T19:40:53Z) - Fortifying Ethical Boundaries in AI: Advanced Strategies for Enhancing
Security in Large Language Models [3.9490749767170636]
Large language models (LLMs) have revolutionized text generation, translation, and question-answering tasks.
Despite their widespread use, LLMs present challenges such as ethical dilemmas when models are compelled to respond inappropriately.
This paper addresses these challenges by introducing a multi-pronged approach that includes: 1) filtering sensitive vocabulary from user input to prevent unethical responses; 2) detecting role-playing to halt interactions that could lead to 'prison break' scenarios; and 4) extending these methodologies to various LLM derivatives like Multi-Model Large Language Models (MLLMs)
arXiv Detail & Related papers (2024-01-27T08:09:33Z) - Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic
Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z) - Exploration with Principles for Diverse AI Supervision [88.61687950039662]
Training large transformers using next-token prediction has given rise to groundbreaking advancements in AI.
While this generative AI approach has produced impressive results, it heavily leans on human supervision.
This strong reliance on human oversight poses a significant hurdle to the advancement of AI innovation.
We propose a novel paradigm termed Exploratory AI (EAI) aimed at autonomously generating high-quality training data.
arXiv Detail & Related papers (2023-10-13T07:03:39Z) - BC4LLM: Trusted Artificial Intelligence When Blockchain Meets Large
Language Models [6.867309936992639]
Large language models (LLMs) serve people in the form of AI-generated content (AIGC)
It is difficult to guarantee the authenticity and reliability of AIGC learning data.
There are also hidden dangers of privacy disclosure in distributed AI training.
arXiv Detail & Related papers (2023-10-10T03:18:26Z) - Factuality Challenges in the Era of Large Language Models [113.3282633305118]
Large Language Models (LLMs) generate false, erroneous, or misleading content.
LLMs can be exploited for malicious applications.
This poses a significant challenge to society in terms of the potential deception of users.
arXiv Detail & Related papers (2023-10-08T14:55:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.