Learning to See Analogies: A Connectionist Exploration
- URL: http://arxiv.org/abs/2001.06668v1
- Date: Sat, 18 Jan 2020 14:06:16 GMT
- Title: Learning to See Analogies: A Connectionist Exploration
- Authors: Douglas S. Blank
- Abstract summary: This dissertation explores the integration of learning and analogy-making through the development of a computer program, called Analogator.
By "seeing" many different analogy problems, along with possible solutions, Analogator gradually develops an ability to make new analogies.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This dissertation explores the integration of learning and analogy-making
through the development of a computer program, called Analogator, that learns
to make analogies by example. By "seeing" many different analogy problems,
along with possible solutions, Analogator gradually develops an ability to make
new analogies. That is, it learns to make analogies by analogy. This approach
stands in contrast to most existing research on analogy-making, in which
the a priori existence of analogical mechanisms within a model is typically
assumed. The present research extends standard connectionist methodologies by
developing a specialized associative training procedure for a recurrent network
architecture. The network is trained to divide input scenes (or situations)
into appropriate figure and ground components. Seeing one scene in terms of a
particular figure and ground provides the context for seeing another in an
analogous fashion. After training, the model is able to make new analogies
between novel situations. Analogator has much in common with lower-level
perceptual models of categorization and recognition; it thus serves as a
unifying framework encompassing both high-level analogical learning and
low-level perception. This approach is compared and contrasted with other
computational models of analogy-making. The model's training and generalization
performance is examined, and limitations are discussed.
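As a rough illustration of the training setup the abstract describes, the sketch below shows a tiny recurrent network that is shown a source scene together with its figure/ground split and is then asked to produce the analogous split for a target scene. The architecture, the fixed-length scene encoding, and all dimensions are assumptions made for this example; it is not the dissertation's actual Analogator implementation.

```python
# Hypothetical sketch of the "analogy by example" setup described in the
# abstract: a recurrent network is shown a source scene with its figure/ground
# split, then a target scene, and must produce the analogous split for the
# target.  Encoding, sizes, and training details are illustrative assumptions.
import torch
import torch.nn as nn

SCENE_DIM = 16    # assumed: each scene is a fixed-length feature vector
HIDDEN_DIM = 32

class AnalogyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.RNN(input_size=SCENE_DIM * 2, hidden_size=HIDDEN_DIM,
                          batch_first=True)
        self.head = nn.Linear(HIDDEN_DIM, SCENE_DIM)   # per-feature figure score

    def forward(self, source, source_figure, target):
        # Time step 1: the source scene together with its known figure mask.
        # Time step 2: the target scene with an empty mask; the recurrent
        # hidden state carries the "way of seeing" from step 1 into step 2.
        step1 = torch.cat([source, source_figure], dim=-1)
        step2 = torch.cat([target, torch.zeros_like(target)], dim=-1)
        seq = torch.stack([step1, step2], dim=1)        # (batch, 2, 2*SCENE_DIM)
        out, _ = self.rnn(seq)
        return torch.sigmoid(self.head(out[:, -1]))     # predicted target figure mask

# Associative training loop: supervise with the known analogous answer.
model = AnalogyNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

source = torch.rand(8, SCENE_DIM)
source_figure = (torch.rand(8, SCENE_DIM) > 0.5).float()
target = torch.rand(8, SCENE_DIM)
target_figure = (torch.rand(8, SCENE_DIM) > 0.5).float()   # stand-in labels

for _ in range(100):
    optimizer.zero_grad()
    prediction = model(source, source_figure, target)
    loss = loss_fn(prediction, target_figure)
    loss.backward()
    optimizer.step()
```

The hidden state carried from the first step to the second plays the role of context here: seeing the source scene under a particular figure/ground split conditions how the target scene is segmented, mirroring the abstract's description.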
Related papers
- Towards Universality: Studying Mechanistic Similarity Across Language Model Architectures [49.24097977047392]
We investigate two mainstream architectures for language modeling, namely Transformers and Mambas, to explore the extent of their mechanistic similarity.
We propose to use Sparse Autoencoders (SAEs) to isolate interpretable features from these models and show that most features are similar in these two models.
arXiv Detail & Related papers (2024-10-09T08:28:53Z)
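As a minimal sketch of the sparse-autoencoder approach mentioned in the entry above, the code below trains an overcomplete autoencoder with an L1 penalty on its hidden code so that individual units tend to become candidate interpretable features. The dimensions, hyperparameters, and random stand-in activations are assumptions for the example, not the paper's setup.

```python
# Minimal sparse-autoencoder sketch: an overcomplete dictionary with an L1
# penalty on the hidden code, trained to reconstruct model activations so
# that individual code units become candidate interpretable features.
import torch
import torch.nn as nn

ACT_DIM, DICT_DIM, L1_COEF = 256, 1024, 1e-3

class SparseAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(ACT_DIM, DICT_DIM)
        self.decoder = nn.Linear(DICT_DIM, ACT_DIM)

    def forward(self, activations):
        code = torch.relu(self.encoder(activations))    # sparse feature code
        return self.decoder(code), code

sae = SparseAutoencoder()
optimizer = torch.optim.Adam(sae.parameters(), lr=1e-4)
activations = torch.randn(512, ACT_DIM)                 # stand-in activations

for _ in range(200):
    optimizer.zero_grad()
    reconstruction, code = sae(activations)
    loss = ((reconstruction - activations) ** 2).mean() + L1_COEF * code.abs().mean()
    loss.backward()
    optimizer.step()
```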
- StoryAnalogy: Deriving Story-level Analogies from Large Language Models to Unlock Analogical Understanding [72.38872974837462]
We evaluate the ability to identify and generate analogies by constructing a first-of-its-kind large-scale story-level analogy corpus.
StoryAnalogy contains 24K story pairs from diverse domains with human annotations on two similarities from the extended Structure-Mapping Theory.
We observe that the data in StoryAnalogy can improve the quality of analogy generation in large language models.
arXiv Detail & Related papers (2023-10-19T16:29:23Z)
- ARN: Analogical Reasoning on Narratives [13.707344123755126]
We develop a framework that operationalizes dominant theories of analogy, using narrative elements to create surface and system mappings.
We show that while all LLMs can largely recognize near analogies, even the largest ones struggle with far analogies in a zero-shot setting.
arXiv Detail & Related papers (2023-10-02T08:58:29Z)
- Why Do We Need Neuro-symbolic AI to Model Pragmatic Analogies? [6.8107181513711055]
A hallmark of intelligence is the ability to use a familiar domain to make inferences about a less familiar domain, known as analogical reasoning.
We discuss analogies at four distinct levels of complexity: lexical analogies, syntactic analogies, semantic analogies, and pragmatic analogies.
We employ Neuro-symbolic AI techniques that combine statistical and symbolic AI, informing the representation of unstructured text to highlight and augment relevant content, provide abstraction, and guide the mapping process.
arXiv Detail & Related papers (2023-08-02T21:13:38Z)
- Beneath Surface Similarity: Large Language Models Make Reasonable Scientific Analogies after Structure Abduction [46.2032673640788]
The vital role of analogical reasoning in human cognition allows us to grasp novel concepts by linking them with familiar ones through shared relational structures.
This work suggests that Large Language Models (LLMs) often overlook the structures that underpin these analogies.
This paper introduces a task of analogical structure abduction, grounded in cognitive psychology, designed to abduce structures that form an analogy between two systems.
arXiv Detail & Related papers (2023-05-22T03:04:06Z)
- ANALOGYKB: Unlocking Analogical Reasoning of Language Models with A Million-scale Knowledge Base [51.777618249271725]
ANALOGYKB is a million-scale analogy knowledge base derived from existing knowledge graphs (KGs).
It identifies two types of analogies from the KGs: 1) analogies of the same relations, which can be directly extracted from the KGs, and 2) analogies of analogous relations, which are identified with a selection and filtering pipeline enabled by large language models (LLMs).
arXiv Detail & Related papers (2023-05-10T09:03:01Z)
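The first kind of analogy described in the entry above (analogies of the same relation, read directly off a knowledge graph) can be illustrated in a few lines. The toy triples and the pairing rule below are invented for demonstration and are not drawn from ANALOGYKB itself.

```python
# Toy illustration of "analogies of the same relations": group knowledge-graph
# triples by relation and pair up entity pairs that share one.
from collections import defaultdict
from itertools import combinations

triples = [
    ("Paris", "capital_of", "France"),
    ("Tokyo", "capital_of", "Japan"),
    ("Mercury", "orbits", "Sun"),
    ("Moon", "orbits", "Earth"),
]

# Group (head, tail) pairs by their relation.
by_relation = defaultdict(list)
for head, relation, tail in triples:
    by_relation[relation].append((head, tail))

# Any two pairs sharing a relation form a candidate analogy
# "head1 : tail1 :: head2 : tail2".
analogies = [
    (a, b, relation)
    for relation, pairs in by_relation.items()
    for a, b in combinations(pairs, 2)
]

for (h1, t1), (h2, t2), rel in analogies:
    print(f"{h1} : {t1} :: {h2} : {t2}  ({rel})")
```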
- Scientific and Creative Analogies in Pretrained Language Models [24.86477727507679]
This paper examines the encoding of analogy in large-scale pretrained language models, such as BERT and GPT-2.
We introduce the Scientific and Creative Analogy dataset (SCAN), a novel analogy dataset containing systematic mappings of multiple attributes and relational structures across dissimilar domains.
We find that state-of-the-art LMs achieve low performance on these complex analogy tasks, highlighting the challenges still posed by analogy understanding.
arXiv Detail & Related papers (2022-11-28T12:49:44Z)
- Similarity of Neural Architectures using Adversarial Attack Transferability [47.66096554602005]
We design a quantitative and scalable similarity measure between neural architectures.
We conduct a large-scale analysis on 69 state-of-the-art ImageNet classifiers.
Our results provide insights into why developing diverse neural architectures with distinct components is necessary.
arXiv Detail & Related papers (2022-10-20T16:56:47Z)
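A hedged sketch of the transferability idea in the entry above: craft adversarial examples against one model and measure how often they also fool another, treating a higher transfer rate as evidence of more similar behavior. The FGSM attack, the stand-in linear classifiers, and the random data are illustrative assumptions rather than the paper's exact measure.

```python
# Stand-in sketch: measure how similarly two classifiers behave by crafting
# FGSM adversarial examples on one and checking how often they fool the other.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

def transfer_similarity(model_a, model_b, x, y):
    x_adv = fgsm(model_a, x, y)                          # attacks crafted on model A
    fooled = (model_b(x_adv).argmax(dim=1) != y).float().mean()
    return fooled.item()                                 # higher -> more similar behavior

# Tiny stand-in classifiers on random data, for illustration only.
model_a = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10))
model_b = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10))
x = torch.randn(32, 3, 8, 8)
y = torch.randint(0, 10, (32,))
print(transfer_similarity(model_a, model_b, x, y))
```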
- Few-shot Visual Reasoning with Meta-analogical Contrastive Learning [141.2562447971]
We propose to solve a few-shot (or low-shot) visual reasoning problem by resorting to analogical reasoning.
We extract structural relationships between elements in both domains, and enforce them to be as similar as possible with analogical learning.
We validate our method on the RAVEN dataset, on which it outperforms the state-of-the-art method, with larger gains when the training data is scarce.
arXiv Detail & Related papers (2020-07-23T14:00:34Z)
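To make the meta-analogical contrastive idea in the entry above concrete, the sketch below treats pairwise differences between element embeddings as a crude stand-in for "structural relationships" and applies a standard contrastive loss that pulls matched source/target relations together and pushes mismatched ones apart. The relation encoding, pairing scheme, and data are assumptions, not the paper's method.

```python
# Rough sketch: pairwise embedding differences as "structural relations",
# with a contrastive loss aligning source and target relational structure.
import torch
import torch.nn.functional as F

def relation_vectors(elements):
    # All pairwise differences between element embeddings, flattened.
    return (elements.unsqueeze(1) - elements.unsqueeze(0)).reshape(-1, elements.shape[-1])

def analogical_contrastive_loss(source_elems, target_elems, temperature=0.1):
    src = F.normalize(relation_vectors(source_elems), dim=-1)
    tgt = F.normalize(relation_vectors(target_elems), dim=-1)
    logits = src @ tgt.t() / temperature     # similarity of every relation pair
    labels = torch.arange(src.shape[0])      # i-th source relation matches i-th target relation
    return F.cross_entropy(logits, labels)

source = torch.randn(4, 8, requires_grad=True)   # 4 elements, 8-dim embeddings
target = torch.randn(4, 8, requires_grad=True)
loss = analogical_contrastive_loss(source, target)
loss.backward()
```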
- Analogy as Nonparametric Bayesian Inference over Relational Systems [10.736626320566705]
We propose a Bayesian model that generalizes relational knowledge to novel environments by analogically weighting predictions from previously encountered relational structures.
We show that this learner outperforms a naive, theory-based learner on relational data derived from random- and Wikipedia-based systems when experience with the environment is small.
arXiv Detail & Related papers (2020-06-07T14:07:46Z)
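A toy reading of the analogical-weighting idea in the entry above: predictions from previously encountered relational structures are combined, weighted by how similar each stored structure is to the new environment. The Gaussian similarity kernel and the data below are placeholders chosen for illustration, not the paper's model.

```python
# Toy sketch of analogy as similarity-weighted inference over stored structures.
import numpy as np

def predict_by_analogy(new_obs, past_structures, past_predictions, scale=1.0):
    # Similarity of the new observation to each stored structure.
    dists = np.array([np.linalg.norm(new_obs - s) for s in past_structures])
    weights = np.exp(-dists ** 2 / (2 * scale ** 2))
    weights /= weights.sum()
    # Similarity-weighted combination of past predictions.
    return weights @ np.array(past_predictions)

past_structures = [np.array([0.0, 1.0]), np.array([1.0, 0.0])]
past_predictions = [0.2, 0.9]
print(predict_by_analogy(np.array([0.1, 0.9]), past_structures, past_predictions))
```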
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.