Beneath Surface Similarity: Large Language Models Make Reasonable
Scientific Analogies after Structure Abduction
- URL: http://arxiv.org/abs/2305.12660v2
- Date: Tue, 10 Oct 2023 11:36:08 GMT
- Title: Beneath Surface Similarity: Large Language Models Make Reasonable
Scientific Analogies after Structure Abduction
- Authors: Siyu Yuan, Jiangjie Chen, Xuyang Ge, Yanghua Xiao, Deqing Yang
- Abstract summary: The vital role of analogical reasoning in human cognition allows us to grasp novel concepts by linking them with familiar ones through shared relational structures.
This work suggests that Large Language Models (LLMs) often overlook the structures that underpin these analogies.
This paper introduces a task of analogical structure abduction, grounded in cognitive psychology, designed to abduce structures that form an analogy between two systems.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The vital role of analogical reasoning in human cognition allows us to grasp
novel concepts by linking them with familiar ones through shared relational
structures. Despite the attention previous research has given to word
analogies, this work suggests that Large Language Models (LLMs) often overlook
the structures that underpin these analogies, raising questions about the
efficacy of word analogies as a measure of analogical reasoning skills akin to
human cognition. In response to this, our paper introduces a task of analogical
structure abduction, grounded in cognitive psychology, designed to abduce
structures that form an analogy between two systems. In support of this task,
we establish a benchmark called SCAR, containing 400 scientific analogies from
13 distinct fields, tailored for evaluating analogical reasoning with structure
abduction. The empirical evidence underlines the continued challenges faced by
LLMs, including ChatGPT and GPT-4, in mastering this task, signifying the need
for future exploration to enhance their abilities.
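The structure abduction task described above amounts to recovering a mapping between the concepts of two systems. A minimal, purely illustrative sketch of how such a predicted mapping might be scored against a gold mapping (the example systems and function are hypothetical, not drawn from SCAR):

```python
# Hypothetical sketch: scoring a model's analogical structure abduction
# against a gold concept mapping. The Rutherford-style solar-system/atom
# analogy and the scoring function are illustrative assumptions.

def mapping_accuracy(predicted: dict, gold: dict) -> float:
    """Fraction of gold concept mappings the model recovered exactly."""
    if not gold:
        return 0.0
    correct = sum(1 for src, tgt in gold.items() if predicted.get(src) == tgt)
    return correct / len(gold)

# Classic analogy: solar system -> atom
gold = {"sun": "nucleus", "planet": "electron", "attracts": "attracts"}
predicted = {"sun": "nucleus", "planet": "electron", "attracts": "orbits"}

print(mapping_accuracy(predicted, gold))  # 2 of 3 mappings recovered
```

Under this framing, evaluating an LLM reduces to parsing its proposed concept correspondences into a dictionary and comparing them against the annotated gold structure.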
Related papers
- StoryAnalogy: Deriving Story-level Analogies from Large Language Models to Unlock Analogical Understanding
We evaluate the ability to identify and generate analogies by constructing a first-of-its-kind large-scale story-level analogy corpus.
StoryAnalogy contains 24K story pairs from diverse domains with human annotations on two similarities from the extended Structure-Mapping Theory.
We observe that the data in StoryAnalogy can improve the quality of analogy generation in large language models.
arXiv Detail & Related papers (2023-10-19T16:29:23Z)
- ARN: Analogical Reasoning on Narratives
We develop a framework that operationalizes dominant theories of analogy, using narrative elements to create surface and system mappings.
We show that while all LLMs can largely recognize near analogies, even the largest ones struggle with far analogies in a zero-shot setting.
arXiv Detail & Related papers (2023-10-02T08:58:29Z)
- Why Do We Need Neuro-symbolic AI to Model Pragmatic Analogies?
A hallmark of intelligence is the ability to use a familiar domain to make inferences about a less familiar domain, known as analogical reasoning.
We discuss analogies at four distinct levels of complexity: lexical analogies, syntactic analogies, semantic analogies, and pragmatic analogies.
We employ neuro-symbolic AI techniques that combine statistical and symbolic AI to inform the representation of unstructured text, highlighting and augmenting relevant content, providing abstraction, and guiding the mapping process.
arXiv Detail & Related papers (2023-08-02T21:13:38Z)
- DiPlomat: A Dialogue Dataset for Situated Pragmatic Reasoning
Pragmatic reasoning plays a pivotal role in deciphering implicit meanings that frequently arise in real-life conversations.
We introduce a novel challenge, DiPlomat, aiming at benchmarking machines' capabilities on pragmatic reasoning and situated conversational understanding.
arXiv Detail & Related papers (2023-06-15T10:41:23Z)
- ANALOGYKB: Unlocking Analogical Reasoning of Language Models with A Million-scale Knowledge Base
ANALOGYKB is a million-scale analogy knowledge base derived from existing knowledge graphs (KGs).
It identifies two types of analogies from the KGs: 1) analogies of the same relations, which can be directly extracted from the KGs, and 2) analogies of analogous relations, which are identified with a selection and filtering pipeline enabled by large language models (LLMs).
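The first type, same-relation analogies, can be extracted directly from KG triples by grouping term pairs that share a relation. A minimal sketch under that assumption (the triples and variable names are illustrative, not from ANALOGYKB):

```python
# Hypothetical sketch of extracting "same relation" analogies from KG
# triples: term pairs sharing a relation form an analogy A:B :: C:D.
# The toy triples below are illustrative, not ANALOGYKB data.
from collections import defaultdict
from itertools import combinations

triples = [
    ("Paris", "capital_of", "France"),
    ("Tokyo", "capital_of", "Japan"),
    ("Berlin", "capital_of", "Germany"),
    ("oxygen", "element_of", "water"),
]

# Group (head, tail) term pairs by their shared relation.
by_relation = defaultdict(list)
for head, rel, tail in triples:
    by_relation[rel].append((head, tail))

# Every unordered pair of term pairs under one relation is an analogy.
analogies = [
    (a, b, c, d)
    for pairs in by_relation.values()
    for (a, b), (c, d) in combinations(pairs, 2)
]
print(analogies)  # e.g. Paris:France :: Tokyo:Japan
```

The second type, analogous relations (e.g. `capital_of` vs. `seat_of_government`), would not fall out of this grouping and is where the paper's LLM-based selection and filtering pipeline comes in.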
arXiv Detail & Related papers (2023-05-10T09:03:01Z)
- Understanding Narratives through Dimensions of Analogy
Analogical reasoning is a powerful tool that enables humans to connect two situations, and to generalize their knowledge from familiar to novel situations.
Modern scalable AI techniques with the potential to reason by analogy have only been applied to the special case of proportional analogy.
In this paper, we aim to bridge the gap by: 1) formalizing six dimensions of analogy based on mature insights from Cognitive Science research, 2) annotating a corpus of fables with each of these dimensions, and 3) defining four tasks with increasing complexity that enable scalable evaluation of AI techniques.
arXiv Detail & Related papers (2022-06-14T20:56:26Z)
- Exploring Discourse Structures for Argument Impact Classification
This paper empirically shows that the discourse relations between two arguments along the context path are essential factors for identifying the persuasive power of an argument.
We propose DisCOC to inject and fuse the sentence-level structural information with contextualized features derived from large-scale language models.
arXiv Detail & Related papers (2021-06-02T06:49:19Z)
- Thinking About Causation: A Causal Language with Epistemic Operators
We extend the notion of a causal model with a representation of the state of an agent.
On the side of the object language, we add operators to express knowledge and the act of observing new information.
We provide a sound and complete axiomatization of the logic, and discuss the relation of this framework to causal team semantics.
arXiv Detail & Related papers (2020-10-30T12:16:45Z)
- Neural Analogical Matching
The importance of analogy to humans has made it an active area of research in the broader field of artificial intelligence.
We introduce the Analogical Matching Network, a neural architecture that learns to produce analogies between structured, symbolic representations.
arXiv Detail & Related papers (2020-04-07T17:50:52Z)
- Learning to See Analogies: A Connectionist Exploration
This dissertation explores the integration of learning and analogy-making through the development of a computer program, called Analogator.
By "seeing" many different analogy problems, along with possible solutions, Analogator gradually develops an ability to make new analogies.
arXiv Detail & Related papers (2020-01-18T14:06:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.