Psychomatics -- A Multidisciplinary Framework for Understanding Artificial Minds
- URL: http://arxiv.org/abs/2407.16444v1
- Date: Tue, 23 Jul 2024 12:53:41 GMT
- Title: Psychomatics -- A Multidisciplinary Framework for Understanding Artificial Minds
- Authors: Giuseppe Riva, Fabrizia Mantovani, Brenda K. Wiederhold, Antonella Marchetti, Andrea Gaggioli
- Abstract summary: This paper introduces Psychomatics, a framework bridging cognitive science, linguistics, and computer science.
It aims to better understand the high-level functioning of LLMs.
Psychomatics holds the potential to yield transformative insights into the nature of language, cognition, and intelligence.
- Score: 0.319565400223685
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although LLMs and other artificial intelligence systems demonstrate cognitive skills similar to humans, like concept learning and language acquisition, the way they process information fundamentally differs from biological cognition. To better understand these differences, this paper introduces Psychomatics, a multidisciplinary framework bridging cognitive science, linguistics, and computer science. It aims to better understand the high-level functioning of LLMs, focusing specifically on how LLMs acquire, learn, remember, and use information to produce their outputs. To achieve this goal, Psychomatics relies on a comparative methodology, starting from a theory-driven research question - is the process of language development and use different in humans and LLMs? - and drawing parallels between LLMs and biological systems. Our analysis shows how LLMs can map and manipulate complex linguistic patterns in their training data, and how they can follow Grice's Cooperative Principle to provide relevant and informative responses. However, human cognition draws on multiple sources of meaning, including experiential, emotional, and imaginative facets, which transcend mere language processing and are rooted in our social and developmental trajectories. In addition, current LLMs lack physical embodiment, limiting their ability to make sense of the intricate interplay between perception, action, and cognition that shapes human understanding and expression. Ultimately, Psychomatics holds the potential to yield transformative insights into the nature of language, cognition, and intelligence, both artificial and biological. By drawing parallels between LLMs and human cognitive processes, it can also inform the development of more robust and human-like AI systems.
Related papers
- Lost in Translation: The Algorithmic Gap Between LMs and the Brain [8.799971499357499]
Language Models (LMs) have achieved impressive performance on various linguistic tasks, but their relationship to human language processing in the brain remains unclear.
This paper examines the gaps and overlaps between LMs and the brain at different levels of analysis.
We discuss how insights from neuroscience, such as sparsity, modularity, internal states, and interactive learning, can inform the development of more biologically plausible language models.
arXiv Detail & Related papers (2024-07-05T17:43:16Z)
- Quantifying AI Psychology: A Psychometrics Benchmark for Large Language Models [57.518784855080334]
Large Language Models (LLMs) have demonstrated exceptional task-solving capabilities, increasingly adopting roles akin to human-like assistants.
This paper presents a framework for investigating psychological dimensions in LLMs, including psychological identification, assessment dataset curation, and assessment with results validation.
We introduce a comprehensive psychometrics benchmark for LLMs that covers six psychological dimensions: personality, values, emotion, theory of mind, motivation, and intelligence.
arXiv Detail & Related papers (2024-06-25T16:09:08Z)
- Mind's Eye of LLMs: Visualization-of-Thought Elicits Spatial Reasoning in Large Language Models [71.93366651585275]
We propose Visualization-of-Thought (VoT) prompting for large language models (LLMs).
VoT elicits spatial reasoning of LLMs by visualizing their reasoning traces, thereby guiding subsequent reasoning steps.
We employ VoT for multi-hop spatial reasoning tasks, including natural language navigation, visual navigation, and visual tiling in 2D grid worlds.
arXiv Detail & Related papers (2024-04-04T17:45:08Z)
- Do Large Language Models Mirror Cognitive Language Processing? [43.68923267228057]
Large Language Models (LLMs) have demonstrated remarkable abilities in text comprehension and logical reasoning.
In cognitive science, brain cognitive processing signals are typically utilized to study human language processing.
We employ Representational Similarity Analysis (RSA) to measure the alignment between 23 mainstream LLMs and fMRI signals of the brain.
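As an illustration (not from the paper), Representational Similarity Analysis compares two systems by correlating their representational dissimilarity matrices (RDMs) over the same set of stimuli. A minimal sketch in Python, assuming toy stand-ins for the LLM activations and fMRI responses:

```python
# Hypothetical RSA sketch: compare an LLM's per-stimulus activations with
# brain responses by correlating their dissimilarity structures.
import numpy as np

def rdm(X):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between every pair of stimulus representations (rows of X)."""
    return 1.0 - np.corrcoef(X)

def rank(v):
    """Rank-transform a vector (ties assumed absent for continuous data)."""
    return np.argsort(np.argsort(v)).astype(float)

def rsa_score(X, Y):
    """Spearman correlation between the upper triangles of the two RDMs."""
    iu = np.triu_indices(X.shape[0], k=1)
    ra, rb = rank(rdm(X)[iu]), rank(rdm(Y)[iu])
    ra -= ra.mean()
    rb -= rb.mean()
    return float(ra @ rb / np.sqrt((ra @ ra) * (rb @ rb)))

rng = np.random.default_rng(0)
llm_acts = rng.normal(size=(20, 64))          # 20 stimuli x 64 model features
brain = llm_acts @ rng.normal(size=(64, 32))  # toy "fMRI" responses derived from them
print(rsa_score(llm_acts, brain))             # alignment score in [-1, 1]
```

In practice each row would hold an LLM layer's activation (or a voxel pattern) for one stimulus, and scores across layers or models would be compared against a noise ceiling.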
arXiv Detail & Related papers (2024-02-28T03:38:20Z)
- Instruction-tuning Aligns LLMs to the Human Brain [20.86703074354748]
Instruction-tuning enables large language models to generate output that more closely resembles human responses to natural language queries.
We investigate whether instruction-tuning makes large language models more similar to how humans process language.
We find that instruction-tuning generally enhances brain alignment by an average of 6%, but does not have a similar effect on behavioral alignment.
arXiv Detail & Related papers (2023-12-01T13:31:02Z)
- DeepThought: An Architecture for Autonomous Self-motivated Systems [1.6385815610837167]
We argue that the internal architecture of large language models (LLMs) cannot, by itself, support intrinsic motivations, agency, or any degree of consciousness.
We propose to integrate LLMs into an architecture for cognitive language agents able to exhibit properties akin to agency, self-motivation, and even some features of meta-cognition.
arXiv Detail & Related papers (2023-11-14T21:20:23Z)
- Towards Concept-Aware Large Language Models [56.48016300758356]
Concepts play a pivotal role in various human cognitive functions, including learning, reasoning and communication.
There is very little work on endowing machines with the ability to form and reason with concepts.
In this work, we analyze how well contemporary large language models (LLMs) capture human concepts and their structure.
arXiv Detail & Related papers (2023-11-03T12:19:22Z)
- LLM as A Robotic Brain: Unifying Egocentric Memory and Control [77.0899374628474]
Embodied AI focuses on the study and development of intelligent systems that possess a physical or virtual embodiment (i.e., robots).
Memory and control are the two essential parts of an embodied system and usually require separate frameworks to model each of them.
We propose a novel framework called LLM-Brain: using Large-scale Language Model as a robotic brain to unify egocentric memory and control.
arXiv Detail & Related papers (2023-04-19T00:08:48Z)
- Machine Psychology: Investigating Emergent Capabilities and Behavior in Large Language Models Using Psychological Methods [0.0]
Large language models (LLMs) are currently at the forefront of intertwining AI systems with human communication and everyday life.
This paper introduces a new field of research called "machine psychology".
It defines methodological standards for machine psychology research, especially by focusing on policies for prompt designs.
arXiv Detail & Related papers (2023-03-24T13:24:41Z)
- Language Cognition and Language Computation -- Human and Machine Language Understanding [51.56546543716759]
Language understanding is a key scientific issue in the fields of cognitive and computer science.
Can a combination of the disciplines offer new insights for building intelligent language models?
arXiv Detail & Related papers (2023-01-12T02:37:00Z)
- Evaluating and Inducing Personality in Pre-trained Language Models [78.19379997967191]
We draw inspiration from psychometric studies by leveraging human personality theory as a tool for studying machine behaviors.
We introduce the Machine Personality Inventory (MPI) for this purpose.
MPI follows standardized personality tests, built upon the Big Five Personality Factors (Big Five) theory and personality assessment inventories.
We devise a Personality Prompting (P2) method to induce LLMs with specific personalities in a controllable way.
arXiv Detail & Related papers (2022-05-20T07:32:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.