FAME: Flexible, Scalable Analogy Mappings Engine
- URL: http://arxiv.org/abs/2311.01860v1
- Date: Fri, 3 Nov 2023 12:08:02 GMT
- Title: FAME: Flexible, Scalable Analogy Mappings Engine
- Authors: Shahar Jacob, Chen Shani, Dafna Shahaf
- Abstract summary: In this work, we relax the input requirements, requiring only names of entities to be mapped.
We automatically extract commonsense representations and use them to identify a mapping between the entities.
Our framework can handle partial analogies and suggest new entities to be added.
- Score: 22.464249291871937
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Analogy is one of the core capacities of human cognition; when faced with new
situations, we often transfer prior experience from other domains. Most work on
computational analogy relies heavily on complex, manually crafted input. In
this work, we relax the input requirements, requiring only names of entities to
be mapped. We automatically extract commonsense representations and use them to
identify a mapping between the entities. Unlike previous works, our framework
can handle partial analogies and suggest new entities to be added. Moreover,
our method's output is easily interpretable, allowing for users to understand
why a specific mapping was chosen.
Experiments show that our model correctly maps 81.2% of classical 2x2 analogy
problems (guess level=50%). On larger problems, it achieves 77.8% accuracy
(mean guess level=13.1%). In another experiment, we show our algorithm
outperforms human performance, and the automatic suggestions of new entities
resemble those suggested by humans. We hope this work will advance
computational analogy by paving the way to more flexible, realistic input
requirements, with broader applicability.
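The core idea, finding a mapping between two entity sets that preserves relational structure, can be illustrated with a toy exhaustive search. This is a hedged sketch only: the relation sets, the overlap-counting score, and the brute-force search below are invented for the example and are not FAME's actual commonsense representations or algorithm.

```python
from itertools import permutations

# Hypothetical commonsense relations between entity pairs
# (illustrative only; FAME extracts these automatically).
base_relations = {
    ("sun", "planet"): {"attracts", "is orbited by"},
    ("planet", "sun"): {"orbits", "revolves around"},
}
target_relations = {
    ("nucleus", "electron"): {"attracts", "is orbited by"},
    ("electron", "nucleus"): {"orbits", "revolves around"},
}

def mapping_score(mapping, base_rels, target_rels):
    """Count base relations that are preserved under the mapping."""
    score = 0
    for (a, b), rels in base_rels.items():
        pair = (mapping[a], mapping[b])
        score += len(rels & target_rels.get(pair, set()))
    return score

def best_mapping(base_entities, target_entities, base_rels, target_rels):
    """Exhaustively pick the entity mapping preserving the most relations."""
    best, best_score = None, -1
    for perm in permutations(target_entities):
        mapping = dict(zip(base_entities, perm))
        s = mapping_score(mapping, base_rels, target_rels)
        if s > best_score:
            best, best_score = mapping, s
    return best

print(best_mapping(["sun", "planet"], ["nucleus", "electron"],
                   base_relations, target_relations))
# {'sun': 'nucleus', 'planet': 'electron'}
```

Exhaustive search over permutations is only feasible for tiny entity sets like the classical 2x2 problems; larger problems require a scalable matching strategy, which is what the paper's engine provides.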
Related papers
- ParallelPARC: A Scalable Pipeline for Generating Natural-Language Analogies [16.92480305308536]
We develop a pipeline for creating complex, paragraph-based analogies.
We publish a gold-set, validated by humans, and a silver-set, generated automatically.
We demonstrate that our silver-set is useful for training models.
arXiv Detail & Related papers (2024-03-02T08:53:40Z)
- AnaloBench: Benchmarking the Identification of Abstract and Long-context Analogies [19.613777134600408]
Analogical thinking allows humans to solve problems in creative ways.
Can language models (LMs) do the same?
Our benchmarking approach focuses on aspects of this ability that are common among humans.
arXiv Detail & Related papers (2024-02-19T18:56:44Z)
- StoryAnalogy: Deriving Story-level Analogies from Large Language Models to Unlock Analogical Understanding [72.38872974837462]
We evaluate the ability to identify and generate analogies by constructing a first-of-its-kind large-scale story-level analogy corpus.
StoryAnalogy contains 24K story pairs from diverse domains with human annotations on two similarities from the extended Structure-Mapping Theory.
We observe that the data in StoryAnalogy can improve the quality of analogy generation in large language models.
arXiv Detail & Related papers (2023-10-19T16:29:23Z)
- VASR: Visual Analogies of Situation Recognition [21.114629154550364]
We introduce a novel task, Visual Analogies of Situation Recognition.
We tackle complex analogies requiring understanding of scenes.
Crowdsourced annotations for a sample of the data indicate that humans agree with the dataset label 80% of the time.
Our experiments demonstrate that state-of-the-art models do well when distractors are chosen randomly, but struggle with carefully chosen distractors.
arXiv Detail & Related papers (2022-12-08T20:08:49Z)
- Life is a Circus and We are the Clowns: Automatically Finding Analogies between Situations and Processes [12.8252101640812]
Much research has suggested that analogies are key to non-brittle systems that can adapt to new domains.
Despite their importance, analogies have received little attention in the NLP community.
arXiv Detail & Related papers (2022-10-21T18:54:17Z)
- Does entity abstraction help generative Transformers reason? [8.159805544989359]
We study the utility of incorporating entity type abstractions into pre-trained Transformers.
We test these methods on four NLP tasks requiring different forms of logical reasoning.
arXiv Detail & Related papers (2022-01-05T19:00:53Z)
- Towards Optimal Strategies for Training Self-Driving Perception Models in Simulation [98.51313127382937]
We focus on the use of labels in the synthetic domain alone.
Our approach introduces both a way to learn neural-invariant representations and a theoretically inspired view on how to sample the data from the simulator.
We showcase our approach on the bird's-eye-view vehicle segmentation task with multi-sensor data.
arXiv Detail & Related papers (2021-11-15T18:37:43Z)
- Learning What To Do by Simulating the Past [76.86449554580291]
We show that by combining a learned feature encoder with learned inverse models, we can enable agents to simulate human actions backwards in time to infer what they must have done.
The resulting algorithm is able to reproduce a specific skill in MuJoCo environments given a single state sampled from the optimal policy for that skill.
arXiv Detail & Related papers (2021-04-08T17:43:29Z)
- Category-Learning with Context-Augmented Autoencoder [63.05016513788047]
Finding an interpretable non-redundant representation of real-world data is one of the key problems in Machine Learning.
We propose a novel method of using data augmentations when training autoencoders.
We train a Variational Autoencoder so that the outcome of a transformation becomes predictable by an auxiliary network.
arXiv Detail & Related papers (2020-10-10T14:04:44Z)
- Few-shot Visual Reasoning with Meta-analogical Contrastive Learning [141.2562447971]
We propose to solve a few-shot (or low-shot) visual reasoning problem, by resorting to analogical reasoning.
We extract structural relationships between elements in both domains, and enforce them to be as similar as possible with analogical learning.
We validate our method on the RAVEN dataset, on which it outperforms the state-of-the-art method, with larger gains when training data is scarce.
arXiv Detail & Related papers (2020-07-23T14:00:34Z)
- A Simple Approach to Case-Based Reasoning in Knowledge Bases [56.661396189466664]
We present a surprisingly simple yet accurate approach to reasoning in knowledge graphs (KGs) that requires no training, and is reminiscent of case-based reasoning in classical artificial intelligence (AI).
Consider the task of finding a target entity given a source entity and a binary relation.
Our non-parametric approach derives crisp logical rules for each query by finding multiple graph path patterns that connect similar source entities through the given relation.
arXiv Detail & Related papers (2020-06-25T06:28:09Z)
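The case-based idea, reusing path patterns observed for similar entities to answer a new query, can be sketched on a toy knowledge graph. This is a hedged, single-hop illustration: the triples and the pattern-extraction logic below are invented for the example and are far simpler than the paper's multi-hop approach.

```python
from collections import defaultdict

# Toy knowledge graph as (head, relation, tail) triples
# (illustrative only; not the paper's datasets).
triples = [
    ("Paris", "capital_of", "France"),
    ("Paris", "located_in", "France"),
    ("Berlin", "capital_of", "Germany"),
    ("Berlin", "located_in", "Germany"),
    ("Madrid", "located_in", "Spain"),
]

out_edges = defaultdict(list)
for h, r, t in triples:
    out_edges[h].append((r, t))

def answer(source, relation):
    """Case-based sketch: find entities ("cases") that do have the query
    relation, extract alternate single-hop relations that also connect
    them to their answer, then apply those patterns to the source."""
    # 1. Cases: entities with a known answer for this relation.
    cases = [(h, t) for h, r, t in triples if r == relation and h != source]
    # 2. Patterns: alternate relations that also reach the answer.
    patterns = {r for h, t in cases for r, t2 in out_edges[h]
                if t2 == t and r != relation}
    # 3. Apply the patterns to the source entity.
    return {t for r, t in out_edges[source] if r in patterns}

print(answer("Madrid", "capital_of"))  # {'Spain'}
```

Here the query asks for Madrid's `capital_of` answer, which is absent from the graph; the `located_in` pattern learned from Paris and Berlin recovers it. The actual method generalizes this to multi-hop paths and weighted pattern aggregation.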
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.