Is analogy enough to draw novel adjective-noun inferences?
- URL: http://arxiv.org/abs/2503.24293v1
- Date: Mon, 31 Mar 2025 16:41:16 GMT
- Title: Is analogy enough to draw novel adjective-noun inferences?
- Authors: Hayley Ross, Kathryn Davidson, Najoung Kim
- Abstract summary: We study whether inferences can instead be derived by analogy to known inferences, without need for composition. We find that there are novel combinations for which both humans and LLMs derive convergent inferences but which are not well handled by analogy.
- Score: 9.30694340695458
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Recent work (Ross et al., 2025, 2024) has argued that the ability of humans and LLMs respectively to generalize to novel adjective-noun combinations shows that they each have access to a compositional mechanism to determine the phrase's meaning and derive inferences. We study whether these inferences can instead be derived by analogy to known inferences, without need for composition. We investigate this by (1) building a model of analogical reasoning using similarity over lexical items, and (2) asking human participants to reason by analogy. While we find that this strategy works well for a large proportion of the dataset of Ross et al. (2025), there are novel combinations for which both humans and LLMs derive convergent inferences but which are not well handled by analogy. We thus conclude that the mechanism humans and LLMs use to generalize in these cases cannot be fully reduced to analogy, and likely involves composition.
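The paper's first strategy, (1) a model of analogical reasoning using similarity over lexical items, admits a simple nearest-neighbor reading. The sketch below is a hypothetical reconstruction of such a baseline, not the authors' implementation; the label scheme, the `embed` function, and the use of cosine similarity are all assumptions.

```python
import numpy as np

def analogy_baseline(novel_adj, novel_noun, known, embed, k=5):
    """Predict the inference label for a novel adjective-noun pair by
    analogy: find the k known pairs whose adjective and noun are most
    similar to the novel pair's, and take a similarity-weighted vote.

    `known` is a list of (adjective, noun, label) triples, where `label`
    might encode e.g. whether "ADJ NOUN is a NOUN" holds. `embed` maps a
    word to a unit-normalized vector. Both are assumptions; the paper's
    actual dataset, labels, and similarity function may differ.
    """
    def sim(a, b):
        return float(np.dot(embed(a), embed(b)))  # cosine on unit vectors

    scored = []
    for adj, noun, label in known:
        # Combine lexical similarities of the two slots; the choice of
        # combination (here: the mean) is an illustrative assumption.
        s = 0.5 * (sim(novel_adj, adj) + sim(novel_noun, noun))
        scored.append((s, label))
    scored.sort(reverse=True)

    # Similarity-weighted vote over the k nearest known combinations.
    votes = {}
    for s, label in scored[:k]:
        votes[label] = votes.get(label, 0.0) + s
    return max(votes, key=votes.get)
```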
Related papers
- Types of Relations: Defining Analogies with Category Theory [0.0]
In this paper, we study features of a domain that are important for constructing analogies. We do so by formalizing knowledge domains as categories. We also show how functors, pullbacks, and pushouts can be used to define an analogy.
arXiv Detail & Related papers (2025-05-26T10:22:44Z)
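As a rough illustration of the category-theoretic framing above, an analogy between two domains can be rendered as a structure-preserving functor. The objects, arrows, and functor below are illustrative assumptions, not the paper's own construction.

```latex
% A minimal sketch of an analogy as a structure-preserving functor
% between two knowledge domains viewed as categories. The objects,
% arrows, and functor below are illustrative assumptions, not the
% paper's own example.
%
%   C_body:  feet --support--> body
%   C_table: legs --support--> table
\[
  F : \mathcal{C}_{\mathrm{body}} \to \mathcal{C}_{\mathrm{table}},
  \qquad
  F(\mathrm{feet}) = \mathrm{legs}, \quad
  F(\mathrm{body}) = \mathrm{table}, \quad
  F(\mathrm{support}) = \mathrm{support},
\]
% with F required to preserve composition and identities, so that the
% relational structure of the body domain carries over to the table
% domain.
```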
- Can Large Language Models generalize analogy solving like people can? [46.02074643846298]
In people, the ability to solve analogies such as "body : feet :: table : ?" emerges in childhood.
Recent research shows that large language models (LLMs) can solve various forms of analogies.
arXiv Detail & Related papers (2024-11-04T18:18:38Z)
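Proportional analogies of the form a : b :: c : ? are classically approximated with word-vector arithmetic. The sketch below shows that generic baseline, not the method of the paper above; the `embed` function and the candidate vocabulary are assumptions.

```python
import numpy as np

def solve_proportional(a, b, c, vocab, embed):
    """Solve a : b :: c : ? by vector arithmetic (the classic
    word-embedding baseline): the answer is the vocabulary word whose
    embedding is closest to b - a + c.

    `embed` maps a word to a unit-normalized vector; `vocab` is the
    candidate answer set. Both are assumptions for illustration.
    """
    target = embed(b) - embed(a) + embed(c)
    target /= np.linalg.norm(target)
    best, best_sim = None, -np.inf
    for word in vocab:
        if word in (a, b, c):  # exclude the query words themselves
            continue
        s = float(np.dot(embed(word), target))  # cosine on unit vectors
        if s > best_sim:
            best, best_sim = word, s
    return best

# e.g. solve_proportional("body", "feet", "table", vocab, embed)
# would ideally return "legs".
```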
- Failure Modes of LLMs for Causal Reasoning on Narratives [51.19592551510628]
We investigate the interaction between world knowledge and logical reasoning. We find that state-of-the-art large language models (LLMs) often rely on superficial generalizations. We show that simple reformulations of the task can elicit more robust reasoning behavior.
arXiv Detail & Related papers (2024-10-31T12:48:58Z)
- A Complexity-Based Theory of Compositionality [53.025566128892066]
In AI, compositional representations can enable a powerful form of out-of-distribution generalization. Here, we propose a formal definition of compositionality that accounts for and extends our intuitions about compositionality. The definition is conceptually simple, quantitative, grounded in algorithmic information theory, and applicable to any representation.
arXiv Detail & Related papers (2024-10-18T18:37:27Z)
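A definition grounded in algorithmic information theory turns on description length, which is uncomputable in the Kolmogorov sense; off-the-shelf compression is a common computable stand-in. The sketch below illustrates only that proxy idea and is not the paper's actual definition; the scoring function is an assumption.

```python
import zlib

def description_length(s: str) -> int:
    """Computable proxy for Kolmogorov complexity: compressed size in
    bytes. A crude stand-in; the true quantity is uncomputable."""
    return len(zlib.compress(s.encode("utf-8")))

def compositionality_score(whole: str, parts: list) -> float:
    """Illustrative (not the paper's) score: how much cheaper is the
    whole to describe than the sum of its parts? Values near or above 1
    suggest the whole adds little beyond its parts, i.e. it is cheap to
    describe compositionally."""
    parts_len = sum(description_length(p) for p in parts)
    return parts_len / description_length(whole)
```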
- AnaloBench: Benchmarking the Identification of Abstract and Long-context Analogies [19.613777134600408]
Analogical thinking allows humans to solve problems in creative ways.
Can language models (LMs) do the same?
Our benchmarking approach focuses on aspects of this ability that are common among humans.
arXiv Detail & Related papers (2024-02-19T18:56:44Z)
- StoryAnalogy: Deriving Story-level Analogies from Large Language Models to Unlock Analogical Understanding [72.38872974837462]
We evaluate the ability to identify and generate analogies by constructing a first-of-its-kind large-scale story-level analogy corpus.
StoryAnalogy contains 24K story pairs from diverse domains with human annotations on two similarities from the extended Structure-Mapping Theory.
We observe that the data in StoryAnalogy can improve the quality of analogy generation in large language models.
arXiv Detail & Related papers (2023-10-19T16:29:23Z)
- ARN: Analogical Reasoning on Narratives [13.707344123755126]
We develop a framework that operationalizes dominant theories of analogy, using narrative elements to create surface and system mappings.
We show that while all LLMs can largely recognize near analogies, even the largest ones struggle with far analogies in a zero-shot setting.
arXiv Detail & Related papers (2023-10-02T08:58:29Z)
- Why Do We Need Neuro-symbolic AI to Model Pragmatic Analogies? [6.8107181513711055]
A hallmark of intelligence is the ability to use a familiar domain to make inferences about a less familiar domain, known as analogical reasoning.
We discuss analogies at four distinct levels of complexity: lexical analogies, syntactic analogies, semantic analogies, and pragmatic analogies.
We employ Neuro-symbolic AI techniques that combine statistical and symbolic AI, informing the representation of unstructured text to highlight and augment relevant content, provide abstraction, and guide the mapping process.
arXiv Detail & Related papers (2023-08-02T21:13:38Z)
- Beneath Surface Similarity: Large Language Models Make Reasonable Scientific Analogies after Structure Abduction [46.2032673640788]
The vital role of analogical reasoning in human cognition allows us to grasp novel concepts by linking them with familiar ones through shared relational structures.
This work suggests that Large Language Models (LLMs) often overlook the structures that underpin these analogies.
This paper introduces a task of analogical structure abduction, grounded in cognitive psychology, designed to abduce structures that form an analogy between two systems.
arXiv Detail & Related papers (2023-05-22T03:04:06Z)
- Are Representations Built from the Ground Up? An Empirical Examination of Local Composition in Language Models [91.3755431537592]
Representing compositional and non-compositional phrases is critical for language understanding.
We first formulate a problem of predicting the LM-internal representations of longer phrases given those of their constituents.
While we would expect the predictive accuracy to correlate with human judgments of semantic compositionality, we find this is largely not the case.
arXiv Detail & Related papers (2022-10-07T14:21:30Z)
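The prediction problem in the entry above, recovering a phrase's LM-internal representation from its constituents' representations, can be posed as a simple regression. The sketch below is a generic ridge-regression probe, not the paper's exact setup; the shapes and function names are assumptions.

```python
import numpy as np

def fit_composition_probe(child_vecs, phrase_vecs, lam=1e-2):
    """Fit a linear map from concatenated constituent representations to
    the phrase representation (closed-form ridge regression). This is a
    generic probe for local composition, not the paper's exact setup.

    child_vecs:  (n, 2d) array, each row = [first_word_vec; second_word_vec]
    phrase_vecs: (n, d)  array, LM representation of the full phrase
    """
    X, Y = np.asarray(child_vecs), np.asarray(phrase_vecs)
    d_in = X.shape[1]
    # W solves min_W ||XW - Y||^2 + lam * ||W||^2
    W = np.linalg.solve(X.T @ X + lam * np.eye(d_in), X.T @ Y)
    return W

def predictability(child_vecs, phrase_vecs, W):
    """Cosine similarity between predicted and actual phrase vectors;
    low values would flag phrases whose representation is not a simple
    (linear) function of their parts."""
    pred = np.asarray(child_vecs) @ W
    actual = np.asarray(phrase_vecs)
    num = (pred * actual).sum(axis=1)
    den = np.linalg.norm(pred, axis=1) * np.linalg.norm(actual, axis=1)
    return num / np.maximum(den, 1e-12)
```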
- Understanding Narratives through Dimensions of Analogy [17.68704739786042]
Analogical reasoning is a powerful tool that enables humans to connect two situations, and to generalize their knowledge from familiar to novel situations.
Modern scalable AI techniques with the potential to reason by analogy have so far been applied only to the special case of proportional analogy.
In this paper, we aim to bridge the gap by: 1) formalizing six dimensions of analogy based on mature insights from Cognitive Science research, 2) annotating a corpus of fables with each of these dimensions, and 3) defining four tasks with increasing complexity that enable scalable evaluation of AI techniques.
arXiv Detail & Related papers (2022-06-14T20:56:26Z)
- A Description Logic for Analogical Reasoning [28.259681405091666]
We present a mechanism to infer plausible missing knowledge, which relies on reasoning by analogy.
This is the first paper that studies analogical reasoning within the setting of description logic.
arXiv Detail & Related papers (2021-05-10T19:06:07Z)
- Few-shot Visual Reasoning with Meta-analogical Contrastive Learning [141.2562447971]
We propose to solve a few-shot (or low-shot) visual reasoning problem by resorting to analogical reasoning.
We extract structural relationships between elements in both domains, and enforce them to be as similar as possible with analogical learning.
We validate our method on the RAVEN dataset, on which it outperforms state-of-the-art methods, with larger gains when the training data is scarce.
arXiv Detail & Related papers (2020-07-23T14:00:34Z)
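Enforcing cross-domain similarity of structural relations, as in the entry above, is commonly implemented with a contrastive objective. The sketch below is a minimal generic version of such a loss, not the paper's implementation; the input shapes and the NT-Xent-style form are assumptions.

```python
import numpy as np

def analogical_contrastive_loss(rel_a, rel_b, temperature=0.1):
    """Minimal contrastive loss sketch: pull matched relation embeddings
    from two domains together and push mismatched ones apart.

    rel_a, rel_b: (n, d) arrays; row i of rel_a and row i of rel_b are
    assumed to encode the *same* structural relation in the two domains
    (the positive pair); all other rows serve as negatives.
    """
    a = rel_a / np.linalg.norm(rel_a, axis=1, keepdims=True)
    b = rel_b / np.linalg.norm(rel_b, axis=1, keepdims=True)
    logits = (a @ b.T) / temperature  # (n, n) similarity matrix
    # Cross-entropy with the diagonal as the target: logits[i, i]
    # should dominate row i.
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))
```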