Neural Production Systems
- URL: http://arxiv.org/abs/2103.01937v1
- Date: Tue, 2 Mar 2021 18:53:20 GMT
- Title: Neural Production Systems
- Authors: Anirudh Goyal, Aniket Didolkar, Nan Rosemary Ke, Charles Blundell,
Philippe Beaudoin, Nicolas Heess, Michael Mozer, Yoshua Bengio
- Abstract summary: Visual environments are structured, consisting of distinct objects or entities.
To partition images into entities, deep-learning researchers have proposed structural inductive biases.
We take inspiration from cognitive science and resurrect a classic approach, which consists of a set of rule templates.
This architecture achieves a flexible, dynamic flow of control and serves to factorize entity-specific and rule-based information.
- Score: 90.75211413357577
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Visual environments are structured, consisting of distinct objects or
entities. These entities have properties -- both visible and latent -- that
determine the manner in which they interact with one another. To partition
images into entities, deep-learning researchers have proposed structural
inductive biases such as slot-based architectures. To model interactions among
entities, equivariant graph neural nets (GNNs) are used, but these are not
particularly well suited to the task for two reasons. First, GNNs do not
predispose interactions to be sparse, as relationships among independent
entities are likely to be. Second, GNNs do not factorize knowledge about
interactions in an entity-conditional manner. As an alternative, we take
inspiration from cognitive science and resurrect a classic approach, production
systems, which consist of a set of rule templates that are applied by binding
placeholder variables in the rules to specific entities. Rules are scored on
their match to entities, and the best fitting rules are applied to update
entity properties. In a series of experiments, we demonstrate that this
architecture achieves a flexible, dynamic flow of control and serves to
factorize entity-specific and rule-based information. This disentangling of
knowledge achieves robust future-state prediction in rich visual environments,
outperforming state-of-the-art methods using GNNs, and allows for the
extrapolation from simple (few object) environments to more complex
environments.
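The rule-application loop described in the abstract (score each rule template against the entities, bind the best-matching rule to an entity, and apply that rule's transformation to update the entity's properties) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the dimensions, the dot-product matching, the single-entity binding, and the linear rule transformations are all simplifying assumptions (the paper uses learned neural components and attention-based binding).

```python
import numpy as np

rng = np.random.default_rng(0)

DIM = 8         # entity state dimension (hypothetical)
N_RULES = 3     # number of rule templates (hypothetical)
N_ENTITIES = 4

# Each rule template has a key used for matching against entity states,
# and its own transformation parameters (here a single linear map).
rule_keys = rng.normal(size=(N_RULES, DIM))
rule_maps = rng.normal(size=(N_RULES, DIM, DIM)) * 0.1

entities = rng.normal(size=(N_ENTITIES, DIM))

def production_step(entities):
    """One production-system step: score every (rule, entity) pair,
    bind the best-matching rule to its entity, and apply that rule's
    transformation as a residual update to the bound entity."""
    # Match scores: dot product between rule keys and entity states.
    scores = rule_keys @ entities.T          # shape (N_RULES, N_ENTITIES)
    r, e = np.unravel_index(scores.argmax(), scores.shape)
    # Apply the winning rule to the bound entity only; the other
    # entities' states are left untouched (sparse interaction).
    updated = entities.copy()
    updated[e] = entities[e] + rule_maps[r] @ entities[e]
    return updated, (r, e)

entities, (rule_idx, entity_idx) = production_step(entities)
```

Because rule parameters are stored separately from entity states, knowledge about *how* entities interact is factorized from knowledge about *which* entities are present, which is the disentangling the abstract credits for extrapolation to environments with more objects.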
Related papers
- Two Heads Are Better Than One: Integrating Knowledge from Knowledge Graphs and Large Language Models for Entity Alignment [31.70064035432789]
We propose a Large Language Model-enhanced Entity Alignment framework (LLMEA)
LLMEA identifies candidate alignments for a given entity by considering both embedding similarities between entities across Knowledge Graphs and edit distances to a virtual equivalent entity.
Experiments conducted on three public datasets reveal that LLMEA surpasses leading baseline models.
arXiv Detail & Related papers (2024-01-30T12:41:04Z)
- Neural Constraint Satisfaction: Hierarchical Abstraction for Combinatorial Generalization in Object Rearrangement [75.9289887536165]
We present a hierarchical abstraction approach to uncover underlying entities.
We show how to learn a correspondence between intervening on states of entities in the agent's model and acting on objects in the environment.
We use this correspondence to develop a method for control that generalizes to different numbers and configurations of objects.
arXiv Detail & Related papers (2023-03-20T18:19:36Z)
- ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of DNN-based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z)
- Dynamic Relation Discovery and Utilization in Multi-Entity Time Series Forecasting [92.32415130188046]
In many real-world scenarios, there exist crucial yet implicit relations between entities.
We propose an attentional multi-graph neural network with automatic graph learning (A2GNN) in this work.
arXiv Detail & Related papers (2022-02-18T11:37:04Z)
- Improving Entity Linking through Semantic Reinforced Entity Embeddings [16.868791358905916]
We propose a method to inject fine-grained semantic information into entity embeddings to reduce the distinctiveness and facilitate the learning of contextual commonality.
Based on our entity embeddings, we achieved new state-of-the-art performance on entity linking.
arXiv Detail & Related papers (2021-06-16T00:27:56Z)
- Better Feature Integration for Named Entity Recognition [30.676768644145]
We propose a simple and robust solution to incorporate both types of features with our Synergized-LSTM (Syn-LSTM)
The results demonstrate that the proposed model achieves better performance than previous approaches while requiring fewer parameters.
arXiv Detail & Related papers (2021-04-12T09:55:06Z)
- Relational Reflection Entity Alignment [28.42319743737994]
Entity alignment identifies equivalent entity pairs across Knowledge Graphs (KGs).
With the introduction of GNNs into entity alignment, the architectures of recent models have become more and more complicated.
In this paper, we abstract existing entity alignment methods into a unified framework, Shape-Builder & Alignment.
arXiv Detail & Related papers (2020-08-18T14:49:31Z) - Bipartite Flat-Graph Network for Nested Named Entity Recognition [94.91507634620133]
Bipartite flat-graph network (BiFlaG) for nested named entity recognition (NER)
We propose a novel bipartite flat-graph network (BiFlaG) for nested named entity recognition (NER)
arXiv Detail & Related papers (2020-05-01T15:14:22Z) - Interpretable Entity Representations through Large-Scale Typing [61.4277527871572]
We present an approach to creating entity representations that are human readable and achieve high performance out of the box.
Our representations are vectors whose values correspond to posterior probabilities over fine-grained entity types.
We show that it is possible to reduce the size of our type set in a learning-based way for particular domains.
arXiv Detail & Related papers (2020-04-30T23:58:03Z) - A Trio Neural Model for Dynamic Entity Relatedness Ranking [1.4810568221629932]
We propose a neural network-based approach for dynamic entity relatedness ranking.
Our model is capable of learning rich and different entity representations in a joint framework.
arXiv Detail & Related papers (2018-08-24T21:29:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.