Unveiling LLMs' Metaphorical Understanding: Exploring Conceptual Irrelevance, Context Leveraging and Syntactic Influence
- URL: http://arxiv.org/abs/2510.04120v1
- Date: Sun, 05 Oct 2025 09:45:51 GMT
- Title: Unveiling LLMs' Metaphorical Understanding: Exploring Conceptual Irrelevance, Context Leveraging and Syntactic Influence
- Authors: Fengying Ye, Shanshan Wang, Lidia S. Chao, Derek F. Wong
- Abstract summary: Large Language Models (LLMs) demonstrate advanced capabilities in knowledge integration, contextual reasoning, and creative generation. This study examines LLMs' metaphor-processing abilities from three perspectives.
- Score: 40.32545329527664
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Metaphor analysis is a complex linguistic phenomenon shaped by context and external factors. While Large Language Models (LLMs) demonstrate advanced capabilities in knowledge integration, contextual reasoning, and creative generation, their mechanisms for metaphor comprehension remain insufficiently explored. This study examines LLMs' metaphor-processing abilities from three perspectives: (1) Concept Mapping: using embedding space projections to evaluate how LLMs map concepts in target domains (e.g., misinterpreting "fall in love" as "drop down from love"); (2) Metaphor-Literal Repository: analyzing metaphorical words and their literal counterparts to identify inherent metaphorical knowledge; and (3) Syntactic Sensitivity: assessing how metaphorical syntactic structures influence LLMs' performance. Our findings reveal that LLMs generate 15%-25% conceptually irrelevant interpretations, depend on metaphorical indicators in training data rather than contextual cues, and are more sensitive to syntactic irregularities than to structural comprehension. These insights underline the limitations of LLMs in metaphor analysis and call for more robust computational approaches.
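As a rough sketch of how an embedding-space probe of concept mapping might look, the snippet below compares a metaphorical phrase against a figurative gloss and a literal misreading by cosine similarity. The encoder model, candidate glosses, and scoring are illustrative assumptions, not the study's actual protocol.

```python
# Sketch: probe concept mapping by projecting a metaphorical phrase and
# candidate interpretations into a shared embedding space, then comparing
# cosine similarities. Model choice and candidates are illustrative only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder would do

phrase = "fall in love"
candidates = [
    "to begin to feel romantic affection for someone",  # figurative gloss
    "to drop down from love",                           # literal misreading
]

emb_phrase = model.encode(phrase, convert_to_tensor=True)
emb_cands = model.encode(candidates, convert_to_tensor=True)

# A higher score for the literal misreading would signal a mapping failure.
scores = util.cos_sim(emb_phrase, emb_cands)[0]
for cand, score in zip(candidates, scores):
    print(f"{score.item():.3f}  {cand}")
```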
Related papers
- Concept Component Analysis: A Principled Approach for Concept Extraction in LLMs [51.378834857406325]
Mechanistic interpretability seeks to mitigate these issues by extracting interpretable structure from large language models. Sparse autoencoders (SAEs) have emerged as a popular approach for extracting interpretable and monosemantic concepts. We show that SAEs suffer from a fundamental theoretical ambiguity: whether there is a well-defined correspondence between LLM representations and human-interpretable concepts remains unclear.
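For orientation, a minimal sparse autoencoder of the kind discussed here can be sketched in a few lines; the dimensions, sparsity coefficient, and input data below are placeholders, not any paper's configuration.

```python
# Minimal sparse autoencoder (SAE) sketch for concept extraction from LLM
# hidden states: an overcomplete dictionary with an L1 sparsity penalty.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int = 768, d_dict: int = 4096):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_dict)
        self.decoder = nn.Linear(d_dict, d_model)

    def forward(self, h: torch.Tensor):
        z = torch.relu(self.encoder(h))   # sparse "concept" activations
        h_hat = self.decoder(z)           # reconstruction of the hidden state
        return h_hat, z

sae = SparseAutoencoder()
h = torch.randn(32, 768)                  # stand-in for LLM residual states
h_hat, z = sae(h)
loss = ((h_hat - h) ** 2).mean() + 1e-3 * z.abs().mean()  # MSE + L1 sparsity
loss.backward()
```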
arXiv Detail & Related papers (2026-01-28T09:27:05Z)
- Metaphor identification using large language models: A comparison of RAG, prompt engineering, and fine-tuning [0.6524460254566904]
This study investigates the potential of large language models (LLMs) to automate metaphor identification in full texts. We compare three methods: (i) retrieval-augmented generation (RAG), where the model is provided with a codebook and instructed to annotate texts based on its rules and examples; (ii) prompt engineering, where we design task-specific verbal instructions; and (iii) fine-tuning, where the model is trained on hand-coded texts to optimize performance.
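A toy illustration of the RAG-style setup (i) might look as follows; the codebook rules and prompt wording are invented stand-ins, and a real system would retrieve relevant rules rather than hard-code them.

```python
# Sketch of a codebook-grounded annotation prompt. The codebook content is a
# stand-in; the study's actual codebook and prompts are not reproduced here.
CODEBOOK = [
    "Rule MIP-1: mark a word as metaphorical if its contextual meaning "
    "contrasts with a more basic, concrete meaning.",
    "Example: 'attack an argument' -> 'attack' is metaphorical.",
]

def build_prompt(text: str) -> str:
    rules = "\n".join(CODEBOOK)  # a real system would retrieve the top-k rules
    return (
        "You are annotating metaphors according to this codebook:\n"
        f"{rules}\n\n"
        f"Text: {text}\n"
        "List each metaphorical word with a one-line justification."
    )

print(build_prompt("Her career took off after the merger."))
```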
arXiv Detail & Related papers (2025-09-29T14:50:18Z)
- Understanding Textual Capability Degradation in Speech LLMs via Parameter Importance Analysis [54.53152524778821]
The integration of speech into Large Language Models (LLMs) has substantially expanded their capabilities, but often at the cost of weakening their core textual competence. We propose an analytical framework based on parameter importance estimation, which reveals that fine-tuning for speech introduces a textual importance distribution shift. We investigate two mitigation strategies: layer-wise learning rate scheduling and Low-Rank Adaptation (LoRA). Experimental results show that both approaches better maintain textual competence than full fine-tuning, while also improving downstream spoken question answering performance.
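As a hedged sketch, layer-wise learning rate scheduling can be expressed with optimizer parameter groups: parameters estimated as important for textual competence receive a much smaller learning rate during speech fine-tuning. The importance scores, threshold, and learning rates below are placeholder assumptions, not the paper's recipe.

```python
# Sketch: protect text-critical parameters with a tiny learning rate while
# letting the rest adapt to the speech objective.
import torch

def make_optimizer(model: torch.nn.Module, importance: dict[str, float]):
    protected, adaptable = [], []
    for name, param in model.named_parameters():
        # 'importance' maps parameter names to estimated textual importance
        (protected if importance.get(name, 0.0) > 0.5 else adaptable).append(param)
    return torch.optim.AdamW([
        {"params": protected, "lr": 1e-6},  # barely move text-critical weights
        {"params": adaptable, "lr": 1e-4},  # let the rest adapt to speech
    ])
```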
arXiv Detail & Related papers (2025-09-28T09:04:40Z)
- Meanings are like Onions: a Layered Approach to Metaphor Processing [0.0]
We propose a model of metaphor processing that treats meaning as an onion. At the first level, metaphors are annotated through basic conceptual elements. At the second level, we model conceptual combinations, linking components to emergent meanings. At the third level, we introduce a pragmatic vocabulary to capture speaker intent, communicative function, and contextual effects.
arXiv Detail & Related papers (2025-07-14T14:56:46Z)
- From Tokens to Thoughts: How LLMs and Humans Trade Compression for Meaning [63.25540801694765]
Large Language Models (LLMs) demonstrate striking linguistic abilities, yet whether they achieve the same balance between compression and meaning as humans remains unclear. We apply the Information Bottleneck principle to quantitatively compare how LLMs and humans navigate this compression-meaning trade-off.
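For reference, the Information Bottleneck objective underlying such comparisons seeks a representation $T$ of an input $X$ that compresses $X$ while staying informative about a relevance variable $Y$, with $\beta$ trading off the two terms. This is the standard formulation, not the paper's specific estimator.

```latex
\min_{p(t \mid x)} \; I(X;T) - \beta \, I(T;Y)
```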
arXiv Detail & Related papers (2025-05-21T16:29:00Z)
- Not Minds, but Signs: Reframing LLMs through Semiotics [0.0]
This paper argues for a semiotic perspective on Large Language Models (LLMs). Rather than assuming that LLMs understand language or simulate human thought, we propose that their primary function is to recombine, recontextualize, and circulate linguistic forms. We explore applications in literature, philosophy, education, and cultural production.
arXiv Detail & Related papers (2025-05-20T08:49:18Z)
- The Stochastic Parrot on LLM's Shoulder: A Summative Assessment of Physical Concept Understanding [65.28200190598082]
We propose a summative assessment over a carefully designed physical concept understanding task, PhysiCo. Our task alleviates the memorization issue via the usage of grid-format inputs that abstractly describe physical phenomena. A comprehensive study on our task demonstrates: (1) state-of-the-art LLMs, including GPT-4o and o1, lag behind humans by 40%; (2) the stochastic parrot phenomenon is present in LLMs, as they fail on our grid task but can describe and recognize the same concepts well in natural language.
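To give a flavor of grid-format inputs, the sketch below serializes a before/after grid pair into a text prompt; the encoding and the example phenomenon are guesses at the format's spirit, not items from the PhysiCo dataset.

```python
# Sketch: an abstract "falling object" rendered as a before/after grid pair
# and serialized for an LLM prompt. Entirely illustrative.
before = [[0, 1, 0],
          [0, 0, 0],
          [0, 0, 0]]
after  = [[0, 0, 0],
          [0, 0, 0],
          [0, 1, 0]]  # the marked cell has moved to the bottom row

def render(grid):
    return "\n".join(" ".join(map(str, row)) for row in grid)

prompt = (
    "Each pair of grids abstractly depicts a physical phenomenon.\n"
    f"Before:\n{render(before)}\nAfter:\n{render(after)}\n"
    "Name the concept being illustrated."
)
print(prompt)
```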
arXiv Detail & Related papers (2025-02-13T04:00:03Z)
- Towards Multimodal Metaphor Understanding: A Chinese Dataset and Model for Metaphor Mapping Identification [9.08615188602226]
We develop a Chinese multimodal metaphor advertisement dataset (namely CM3D) that includes annotations of specific target and source domains. We propose a Chain-of-Thought (CoT) Prompting-based Metaphor Mapping Identification Model (CPMMIM), which simulates the human cognitive process for identifying these mappings.
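A hypothetical CoT-style prompt for mapping identification might decompose the task into target domain, source domain, and mapping, as sketched below; the wording is illustrative and does not reproduce CPMMIM's actual prompts.

```python
# Sketch of a chain-of-thought prompt for metaphor mapping identification.
def cot_mapping_prompt(utterance: str) -> str:
    return (
        f"Utterance: {utterance}\n"
        "Let's reason step by step:\n"
        "1. What is literally being described (target domain)?\n"
        "2. What concrete domain supplies the imagery (source domain)?\n"
        "3. State the mapping as SOURCE -> TARGET.\n"
    )

print(cot_mapping_prompt("This ad shows time as a melting ice cube."))
```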
arXiv Detail & Related papers (2025-01-05T04:15:03Z)
- Sparsity-Guided Holistic Explanation for LLMs with Interpretable Inference-Time Intervention [53.896974148579346]
Large Language Models (LLMs) have achieved unprecedented breakthroughs in various natural language processing domains.
The enigmatic "black-box" nature of LLMs remains a significant challenge for interpretability, hampering transparent and accountable applications.
We propose a novel methodology anchored in sparsity-guided techniques, aiming to provide a holistic interpretation of LLMs.
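As a generic illustration of inference-time intervention (not the paper's sparsity-guided method), one can add a steering vector to a layer's hidden states via a forward hook; the layer, direction, and scale below are stand-ins.

```python
# Sketch: steer a model at inference time by shifting one layer's output
# along a chosen direction. Returning a value from a forward hook replaces
# the layer's output.
import torch

def add_steering_hook(layer: torch.nn.Module, direction: torch.Tensor, alpha: float = 2.0):
    def hook(module, inputs, output):
        return output + alpha * direction  # shift hidden states
    return layer.register_forward_hook(hook)

layer = torch.nn.Linear(768, 768)          # stand-in for a transformer block
direction = torch.randn(768)               # stand-in for a learned concept axis
handle = add_steering_hook(layer, direction)
_ = layer(torch.randn(1, 768))             # this forward pass is intervened on
handle.remove()                            # detach the intervention
```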
arXiv Detail & Related papers (2023-12-22T19:55:58Z)
- MAGNIFICo: Evaluating the In-Context Learning Ability of Large Language Models to Generalize to Novel Interpretations [37.13707912132472]
Humans possess a remarkable ability to assign novel interpretations to linguistic expressions.
Large Language Models (LLMs) have a knowledge cutoff and are costly to finetune repeatedly.
We systematically analyse the ability of LLMs to acquire novel interpretations using in-context learning.
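A minimal in-context probe for novel interpretations could define a made-up term and check whether the model applies it; the word "glork" and the examples are invented for illustration.

```python
# Sketch: teach a novel interpretation purely in context, then query it.
def novel_interpretation_prompt(query: str) -> str:
    return (
        "In this dialect, 'glork' means the most expensive item.\n"
        "Example: 'Show me the glork laptop' -> sort laptops by price and "
        "return the top one.\n"
        f"Now interpret: '{query}'"
    )

print(novel_interpretation_prompt("Book the glork hotel in Lisbon."))
```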
arXiv Detail & Related papers (2023-10-18T00:02:38Z)
- Metaphor Generation with Conceptual Mappings [58.61307123799594]
We aim to generate a metaphoric sentence given a literal expression by replacing relevant verbs.
We propose to control the generation process by encoding conceptual mappings between cognitive domains.
We show that the unsupervised CM-Lex model is competitive with recent deep learning metaphor generation systems.
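A toy version of mapping-guided verb replacement is sketched below; the two-entry mapping table is invented for illustration, whereas CM-Lex induces its conceptual mappings from data.

```python
# Sketch: generate a metaphorical sentence by swapping a literal verb for a
# verb licensed by a conceptual mapping between cognitive domains.
CONCEPTUAL_MAPPINGS = {
    # literal (target-domain) verb -> metaphorical (source-domain) verb
    "increased": "soared",    # MORE IS UP
    "considered": "weighed",  # IDEAS ARE OBJECTS
}

def metaphorize(sentence: str) -> str:
    words = sentence.split()
    return " ".join(CONCEPTUAL_MAPPINGS.get(w, w) for w in words)

print(metaphorize("Prices increased after the announcement."))
# -> "Prices soared after the announcement."
```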
arXiv Detail & Related papers (2021-06-02T15:27:05Z)