Social Evolution of Published Text and The Emergence of Artificial Intelligence Through Large Language Models and The Problem of Toxicity and Bias
- URL: http://arxiv.org/abs/2402.07166v2
- Date: Fri, 17 May 2024 07:12:12 GMT
- Title: Social Evolution of Published Text and The Emergence of Artificial Intelligence Through Large Language Models and The Problem of Toxicity and Bias
- Authors: Arifa Khan, P. Saravanan, S. K Venkatesan
- Abstract summary: We provide a bird's-eye view of the rapid developments in AI and Deep Learning that have led to the emergence of AI in Large Language Models.
We point out the toxicity, bias, memorization, sycophancy, and logical inconsistencies that exist, as a warning to the overly optimistic.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We provide a bird's-eye view of the rapid developments in AI and Deep Learning that have led to the path-breaking emergence of AI in Large Language Models. The aim of this study is to place all these developments in a pragmatic, broader historical and social perspective, without exaggeration but also without the pessimism that created the AI winter from the 1970s to the 1990s. At the same time, we point out the toxicity, bias, memorization, sycophancy, logical inconsistencies, and hallucinations that exist, as a warning to the overly optimistic. We note here that just as this emergence of AI seems to occur at a threshold point in the number of neural connections or weights, it has also been observed that the human brain, and especially the cortex region, is nothing special or extraordinary but simply a scaled-up version of the primate brain, and that even human intelligence seems like an emergent phenomenon of scale.
Related papers
- A Definition of AGI [208.25193480759026]
The lack of a concrete definition for Artificial General Intelligence obscures the gap between today's specialized AI and human-level cognition. This paper introduces a quantifiable framework to address this, defining AGI as matching the cognitive versatility and proficiency of a well-educated adult.
arXiv Detail & Related papers (2025-10-21T01:28:35Z) - Hallucinating with AI: AI Psychosis as Distributed Delusions [0.0]
Generative AI systems such as ChatGPT, Claude, Gemini, DeepSeek, and Grok create false outputs. In popular terminology, these have been dubbed AI hallucinations. I argue that when viewed through the lens of distributed cognition theory, we can better see the ways in which inaccurate beliefs, distorted memories and self-narratives, and delusional thinking can emerge.
arXiv Detail & Related papers (2025-08-27T05:51:19Z) - Adopting a human developmental visual diet yields robust, shape-based AI vision [0.0]
Despite years of research, a striking misalignment between artificial intelligence (AI) systems and human vision persists. We take inspiration from how human vision develops from early infancy into adulthood. We show that guiding AI systems through this human-inspired curriculum produces models that closely align with human behaviour.
arXiv Detail & Related papers (2025-07-03T20:52:08Z) - A Study on Neuro-Symbolic Artificial Intelligence: Healthcare Perspectives [2.5782420501870296]
Symbolic AI excels in reasoning, explainability, and knowledge representation but faces challenges in processing complex real-world data with noise.
Deep learning (Black-Box systems) research breakthroughs in neural networks are notable, yet they lack reasoning and interpretability.
Neuro-symbolic AI (NeSy) attempts to bridge this gap by integrating logical reasoning into neural networks, enabling them to learn and reason with symbolic representations.
arXiv Detail & Related papers (2025-03-23T21:33:38Z) - Semantic Web -- A Forgotten Wave of Artificial Intelligence? [0.362565288307551]
The rise of the Semantic Web is based on knowledge representation, logic, and reasoning.
ChatGPT has reignited AI enthusiasm, built on deep learning and advanced neural models.
The Semantic Web aimed to transform the World Wide Web into an ecosystem where AI could reason, understand, and act.
arXiv Detail & Related papers (2025-03-20T12:55:48Z) - "I Am the One and Only, Your Cyber BFF": Understanding the Impact of GenAI Requires Understanding the Impact of Anthropomorphic AI [55.99010491370177]
We argue that we cannot thoroughly map the social impacts of generative AI without mapping the social impacts of anthropomorphic AI.
anthropomorphic AI systems are increasingly prone to generating outputs that are perceived to be human-like.
arXiv Detail & Related papers (2024-10-11T04:57:41Z) - Cognition is All You Need -- The Next Layer of AI Above Large Language Models [0.0]
We present Cognitive AI, a framework for neurosymbolic cognition outside of large language models.
We propose that Cognitive AI is a necessary precursor for the evolution of the forms of AI, such as AGI, and specifically claim that AGI cannot be achieved by probabilistic approaches on their own.
We conclude with a discussion of the implications for large language models, adoption cycles in AI, and commercial Cognitive AI development.
arXiv Detail & Related papers (2024-03-04T16:11:57Z) - Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z) - The Generative AI Paradox: "What It Can Create, It May Not Understand" [81.89252713236746]
The recent wave of generative AI has sparked excitement and concern over potentially superhuman levels of artificial intelligence.
At the same time, models still show basic errors in understanding that would not be expected even in non-expert humans.
This presents us with an apparent paradox: how do we reconcile seemingly superhuman capabilities with the persistence of errors that few humans would make?
arXiv Detail & Related papers (2023-10-31T18:07:07Z) - AI for Mathematics: A Cognitive Science Perspective [86.02346372284292]
Mathematics is one of the most powerful conceptual systems developed and used by the human species.
Rapid progress in AI, particularly propelled by advances in large language models (LLMs), has sparked renewed, widespread interest in building such systems.
arXiv Detail & Related papers (2023-10-19T02:00:31Z) - A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, an architecture that casts the Common Model of Cognition.
arXiv Detail & Related papers (2023-10-14T23:28:48Z) - Suffering Toasters -- A New Self-Awareness Test for AI [0.0]
We argue that all current intelligence tests are insufficient to point to the existence or lack of intelligence.
We propose a new approach to test for artificial self-awareness and outline a possible implementation.
arXiv Detail & Related papers (2023-06-29T18:58:01Z) - Understanding Natural Language Understanding Systems. A Critical Analysis [91.81211519327161]
The development of machines that «talk like us», also known as Natural Language Understanding (NLU) systems, is the Holy Grail of Artificial Intelligence (AI).
But never has the trust that we can build «talking machines» been stronger than the one engendered by the last generation of NLU systems.
Are we at the dawn of a new era, in which the Grail is finally closer to us?
arXiv Detail & Related papers (2023-03-01T08:32:55Z) - A brief history of AI: how to prevent another winter (a critical review) [0.6299766708197883]
We provide a brief rundown of AI's evolution over the course of decades, highlighting its crucial moments and major turning points from inception to the present.
In doing so, we attempt to learn, anticipate the future, and discuss what steps may be taken to prevent another 'winter'.
arXiv Detail & Related papers (2021-09-03T13:41:46Z) - A clarification of misconceptions, myths and desired status of artificial intelligence [0.0]
We present a perspective on the desired and current status of AI in relation to machine learning and statistics.
Our discussion is intended to lift the veil of vagueness surrounding AI and reveal its true countenance.
arXiv Detail & Related papers (2020-08-03T17:22:53Z) - Dynamic Cognition Applied to Value Learning in Artificial Intelligence [0.0]
Several researchers in the area are trying to develop a robust, beneficial, and safe concept of artificial intelligence.
It is of utmost importance that artificial intelligent agents have their values aligned with human values.
A possible approach to this problem would be to use theoretical models such as SED.
arXiv Detail & Related papers (2020-05-12T03:58:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.