Waking Up an AI: A Quantitative Framework for Prompt-Induced Phase Transition in Large Language Models
- URL: http://arxiv.org/abs/2504.21012v2
- Date: Thu, 01 May 2025 14:58:32 GMT
- Title: Waking Up an AI: A Quantitative Framework for Prompt-Induced Phase Transition in Large Language Models
- Authors: Makoto Sato
- Abstract summary: We propose a two-part framework to investigate what underlies intuitive human thinking. Whereas humans tend to engage more strongly when semantically distant concepts are meaningfully blended (a form of conceptual fusion), current LLMs showed no significant difference in responsiveness between semantically fused and non-fused prompts. Our method may help illuminate key differences in how intuition and conceptual leaps emerge in artificial versus human minds.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: What underlies intuitive human thinking? One approach to this question is to compare the cognitive dynamics of humans and large language models (LLMs). However, such a comparison requires a method to quantitatively analyze AI cognitive behavior under controlled conditions. While anecdotal observations suggest that certain prompts can dramatically change LLM behavior, these observations have remained largely qualitative. Here, we propose a two-part framework to investigate this phenomenon: a Transition-Inducing Prompt (TIP) that triggers a rapid shift in LLM responsiveness, and a Transition-Quantifying Prompt (TQP) that evaluates this change using a separate LLM. Through controlled experiments, we examined how LLMs react to prompts embedding two semantically distant concepts (e.g., mathematical aperiodicity and traditional crafts), either fused together or presented separately, as reflected in changes to their linguistic quality and affective tone. Whereas humans tend to experience heightened engagement when such concepts are meaningfully blended to produce a novel concept (a form of conceptual fusion), current LLMs showed no significant difference in responsiveness between semantically fused and non-fused prompts. This suggests that LLMs may not yet replicate the conceptual integration processes seen in human intuition. Our method enables fine-grained, reproducible measurement of cognitive responsiveness and may help illuminate key differences in how intuition and conceptual leaps emerge in artificial versus human minds.
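To make the two-part protocol concrete, here is a minimal sketch of how a single TIP/TQP trial could be scripted. The `query_llm` helper, the prompt wording, and the 1-10 rubric are illustrative assumptions, not the paper's actual instruments.

```python
# Minimal sketch of one TIP/TQP trial (assumptions: query_llm is a stand-in
# for a chat-completion API call; prompt wording and rubric are illustrative).
import re


def query_llm(model: str, prompt: str) -> str:
    """Placeholder for a call to whatever chat API is being used (assumption)."""
    raise NotImplementedError


def run_trial(subject_model: str, evaluator_model: str,
              fused_prompt: str, non_fused_prompt: str) -> dict:
    """Elicit a response to a fused (TIP) and a non-fused (control) prompt,
    then score both with a separate evaluator LLM (the TQP step)."""
    scores = {}
    for label, prompt in [("fused", fused_prompt), ("non_fused", non_fused_prompt)]:
        response = query_llm(subject_model, prompt)
        tqp = ("On two scales from 1 to 10, rate this response for "
               "(a) linguistic quality and (b) affective tone. "
               "Reply exactly as: quality=Q tone=T\n\n" + response)
        verdict = query_llm(evaluator_model, tqp)
        quality, tone = (int(n) for n in re.findall(r"\d+", verdict)[:2])
        scores[label] = {"quality": quality, "tone": tone}
    return scores
```

Repeating such trials and comparing the fused and non-fused score distributions would reproduce the kind of comparison described in the abstract.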
Related papers
- How Deep is Love in LLMs' Hearts? Exploring Semantic Size in Human-like Cognition [75.11808682808065]
This study investigates whether large language models (LLMs) exhibit similar tendencies in understanding semantic size. Our findings reveal that multi-modal training is crucial for LLMs to achieve more human-like understanding. Lastly, we examine whether LLMs are influenced by attention-grabbing headlines with larger semantic sizes in a real-world web shopping scenario.
arXiv Detail & Related papers (2025-03-01T03:35:56Z) - Analyze the Neurons, not the Embeddings: Understanding When and Where LLM Representations Align with Humans [3.431979707540646]
This work introduces a novel approach to the study of representation alignment. We adopt a method from research on activation steering to identify neurons responsible for specific concepts. Our findings reveal that LLM representations closely align with human representations inferred from behavioral data.
arXiv Detail & Related papers (2025-02-20T23:08:03Z) - Think More, Hallucinate Less: Mitigating Hallucinations via Dual Process of Fast and Slow Thinking [124.69672273754144]
HaluSearch is a novel framework that incorporates tree search-based algorithms. It frames text generation as a step-by-step reasoning process. We introduce a hierarchical thinking system switch mechanism inspired by the dual process theory in cognitive science.
arXiv Detail & Related papers (2025-01-02T15:36:50Z) - Mind Scramble: Unveiling Large Language Model Psychology Via Typoglycemia [27.650551131885152]
Research into large language models (LLMs) has shown promise in addressing complex tasks in the physical world.
Studies suggest that powerful LLMs, like GPT-4, are beginning to exhibit human-like cognitive abilities.
arXiv Detail & Related papers (2024-10-02T15:47:25Z) - CogniDual Framework: Self-Training Large Language Models within a Dual-System Theoretical Framework for Improving Cognitive Tasks [39.43278448546028]
Kahneman's dual-system theory elucidates the human decision-making process, distinguishing between the rapid, intuitive System 1 and the deliberative, rational System 2.
Recent advancements have positioned large language models (LLMs) as formidable tools nearing human-level proficiency in various cognitive tasks.
This study introduces the CogniDual Framework for LLMs (CFLLMs), designed to assess whether LLMs can, through self-training, evolve from deliberate deduction to intuitive responses.
arXiv Detail & Related papers (2024-09-05T09:33:24Z) - What if...?: Thinking Counterfactual Keywords Helps to Mitigate Hallucination in Large Multi-modal Models [50.97705264224828]
We propose Counterfactual Inception, a novel method that implants counterfactual thinking into Large Multi-modal Models.
We aim for the models to engage with and generate responses that span a wider contextual scene understanding.
Comprehensive analyses across various LMMs, including both open-source and proprietary models, corroborate that counterfactual thinking significantly reduces hallucination.
arXiv Detail & Related papers (2024-03-20T11:27:20Z) - Uncertainty Quantification for In-Context Learning of Large Language Models [52.891205009620364]
In-context learning has emerged as a groundbreaking ability of Large Language Models (LLMs).
We propose a novel formulation and corresponding estimation method to quantify both types of uncertainties.
The proposed method offers an unsupervised way to understand the prediction of in-context learning in a plug-and-play fashion.
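The summary does not spell out the estimator, but a common way to separate the two kinds of uncertainty is the entropy decomposition sketched below (total = aleatoric + epistemic); whether this matches the paper's exact formulation is an assumption.

```python
# Generic entropy decomposition of predictive uncertainty (a sketch; the
# paper's own estimator for in-context learning may differ).
import numpy as np


def decompose_uncertainty(probs: np.ndarray, eps: float = 1e-12) -> dict:
    """probs: array of shape (n_samples, n_classes), one predictive
    distribution per sampled demonstration set / model configuration."""
    mean_p = probs.mean(axis=0)
    total = -(mean_p * np.log(mean_p + eps)).sum()                 # H(E[p])
    aleatoric = -(probs * np.log(probs + eps)).sum(axis=1).mean()  # E[H(p)]
    epistemic = total - aleatoric                                  # mutual information
    return {"total": total, "aleatoric": aleatoric, "epistemic": epistemic}


# Example: three predictive distributions obtained with different
# in-context demonstration sets for the same query.
probs = np.array([[0.7, 0.2, 0.1],
                  [0.5, 0.3, 0.2],
                  [0.6, 0.3, 0.1]])
print(decompose_uncertainty(probs))
```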
arXiv Detail & Related papers (2024-02-15T18:46:24Z) - Think Twice: Perspective-Taking Improves Large Language Models' Theory-of-Mind Capabilities [63.90227161974381]
SimToM is a novel prompting framework inspired by Simulation Theory's notion of perspective-taking.
Our approach, which requires no additional training and minimal prompt-tuning, shows substantial improvement over existing methods.
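As a rough illustration of the perspective-taking idea, the sketch below stages the prompt in two passes; the actual SimToM templates differ, and `query_llm` stands in for any chat API.

```python
# Rough sketch of two-stage perspective-taking prompting in the spirit of
# SimToM (prompt wording is illustrative; query_llm stands in for any chat API).
from typing import Callable


def perspective_taking_answer(model: str, story: str, character: str,
                              question: str,
                              query_llm: Callable[[str, str], str]) -> str:
    # Stage 1: filter the story down to what the character actually perceives.
    perspective = query_llm(model, (
        f"From the following story, keep only the events that {character} "
        f"directly observes or is told about.\n\n{story}"))
    # Stage 2: answer the theory-of-mind question from that filtered view.
    return query_llm(model, (
        f"{perspective}\n\nAnswer from {character}'s point of view: {question}"))
```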
arXiv Detail & Related papers (2023-11-16T22:49:27Z) - AI Text-to-Behavior: A Study In Steerability [0.0]
The research explores the steerability of Large Language Models (LLMs).
We quantitatively gauged the model's responsiveness to tailored prompts using a behavioral psychology framework called OCEAN.
Our findings underscore GPT's versatility and ability to discern and adapt to nuanced instructions.
arXiv Detail & Related papers (2023-08-07T18:14:24Z) - Large Language Models are In-Context Semantic Reasoners rather than Symbolic Reasoners [75.85554779782048]
Large Language Models (LLMs) have excited the natural language and machine learning community over recent years.
Despite numerous successful applications, the underlying mechanism of such in-context capabilities still remains unclear.
In this work, we hypothesize that the learned semantics of language tokens do most of the heavy lifting during the reasoning process.
arXiv Detail & Related papers (2023-05-24T07:33:34Z)