Better Set Representations For Relational Reasoning
- URL: http://arxiv.org/abs/2003.04448v2
- Date: Wed, 17 Jun 2020 06:40:23 GMT
- Title: Better Set Representations For Relational Reasoning
- Authors: Qian Huang, Horace He, Abhay Singh, Yan Zhang, Ser-Nam Lim, Austin Benson
- Abstract summary: Relational reasoning operates on a set of entities, as opposed to standard vector representations.
We propose a simple and general network module called a Set Refiner Network (SRN).
- Score: 30.398348643632445
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Incorporating relational reasoning into neural networks has greatly expanded
their capabilities and scope. One defining trait of relational reasoning is
that it operates on a set of entities, as opposed to standard vector
representations. Existing end-to-end approaches typically extract entities from
inputs by directly interpreting the latent feature representations as a set. We
show that these approaches do not respect set permutational invariance and thus
have fundamental representational limitations. To resolve this limitation, we
propose a simple and general network module called a Set Refiner Network (SRN).
We first use synthetic image experiments to demonstrate how our approach
effectively decomposes objects without explicit supervision. Then, we insert
our module into existing relational reasoning models and show that respecting
set invariance leads to substantial gains in prediction performance and
robustness on several relational reasoning tasks.
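As a concrete illustration, the sketch below contrasts the naive strategy (reshaping a latent vector directly into a set) with a refinement step that searches for a set whose permutation-invariant encoding matches the input embedding. This is a minimal sketch assuming an inner gradient-descent loop; the module names, sizes, and refinement schedule are illustrative, not the authors' reference implementation.

```python
import torch
import torch.nn as nn

class SetEncoder(nn.Module):
    """Permutation-invariant encoder: shared MLP per element, then sum-pool."""
    def __init__(self, elem_dim, latent_dim):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(elem_dim, 128), nn.ReLU(),
                                 nn.Linear(128, latent_dim))

    def forward(self, s):              # s: (batch, n_elems, elem_dim)
        return self.phi(s).sum(dim=1)  # sum-pooling: element order is irrelevant

def refine_set(z, encoder, s, steps=10, lr=0.1):
    """Refine an initial set s so that encoder(s) moves toward the embedding z."""
    for _ in range(steps):
        loss = ((encoder(s) - z) ** 2).sum()
        (grad,) = torch.autograd.grad(loss, s, create_graph=True)
        s = s - lr * grad              # differentiable inner update
    return s

latent_dim, n_elems, elem_dim = 64, 5, 16
encoder = SetEncoder(elem_dim, latent_dim)
to_set = nn.Linear(latent_dim, n_elems * elem_dim)  # naive "reshape" decomposition

z = torch.randn(8, latent_dim)              # scene embeddings from some backbone
s0 = to_set(z).view(-1, n_elems, elem_dim)  # direct decomposition, starting point only
s = refine_set(z, encoder, s0)              # refined, encoder-consistent set
print(s.shape)                              # torch.Size([8, 5, 16])
```

Because the refined set is defined only through the permutation-invariant encoder, relabeling its elements leaves the objective unchanged, which is exactly the invariance the naive reshape lacks.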
Related papers
- Enhancing Neural Subset Selection: Integrating Background Information into Set Representations [53.15923939406772]
We show that when the target value is conditioned on both the input set and subset, it is essential to incorporate an invariant sufficient statistic of the superset into the subset of interest.
This ensures that the output value remains invariant to permutations of the subset and its corresponding superset, enabling identification of the specific superset from which the subset originated.
arXiv Detail & Related papers (2024-02-05T16:09:35Z)
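The entry above hinges on pairing the subset representation with a permutation-invariant summary of its superset. Below is a minimal sketch of that pattern using sum-pooling as the invariant statistic; the module and dimension names are my own illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SubsetWithContext(nn.Module):
    def __init__(self, d_in, d_hid):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU())
        self.head = nn.Linear(2 * d_hid, 1)

    def forward(self, superset, subset):
        # Sum-pooling is permutation-invariant: neither summary depends on
        # the order of elements within its set.
        ctx = self.phi(superset).sum(dim=1)  # invariant statistic of the superset
        sub = self.phi(subset).sum(dim=1)    # invariant summary of the subset
        return self.head(torch.cat([ctx, sub], dim=-1))

model = SubsetWithContext(d_in=8, d_hid=32)
superset = torch.randn(4, 20, 8)      # batch of 20-element supersets
subset = superset[:, :5, :]           # a 5-element subset of each
print(model(superset, subset).shape)  # torch.Size([4, 1])
```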
- Entity or Relation Embeddings? An Analysis of Encoding Strategies for Relation Extraction [19.019881161010474]
Relation extraction is essentially a text classification problem, which can be tackled by fine-tuning a pre-trained language model (LM).
Existing approaches therefore solve the problem in an indirect way: they fine-tune an LM to learn embeddings of the head and tail entities, and then predict the relationship from these entity embeddings.
Our hypothesis in this paper is that relation extraction models can be improved by capturing relationships in a more direct way.
arXiv Detail & Related papers (2023-12-18T09:58:19Z)
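The indirect strategy described above (entity embeddings first, relation prediction second) can be sketched as follows. The encoder is a stand-in nn.TransformerEncoder rather than a pre-trained LM, and the entity span positions are hypothetical; this only illustrates the pool-and-classify pattern.

```python
import torch
import torch.nn as nn

d_model, n_relations = 64, 10
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
    num_layers=2)                        # stand-in for a pre-trained LM
classifier = nn.Linear(2 * d_model, n_relations)

tokens = torch.randn(1, 12, d_model)     # 12 already-embedded tokens (illustrative)
head_span, tail_span = slice(2, 4), slice(7, 9)  # hypothetical entity positions

hidden = encoder(tokens)
head = hidden[:, head_span].mean(dim=1)  # pooled head-entity embedding
tail = hidden[:, tail_span].mean(dim=1)  # pooled tail-entity embedding
logits = classifier(torch.cat([head, tail], dim=-1))
print(logits.shape)                      # torch.Size([1, 10])
```

The paper's hypothesis is that routing everything through these two entity embeddings is a bottleneck, which a more direct encoding of the relationship would avoid.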
- Neural Constraint Satisfaction: Hierarchical Abstraction for Combinatorial Generalization in Object Rearrangement [75.9289887536165]
We present a hierarchical abstraction approach to uncover underlying entities.
We show how to learn a correspondence between intervening on states of entities in the agent's model and acting on objects in the environment.
We use this correspondence to develop a method for control that generalizes to different numbers and configurations of objects.
arXiv Detail & Related papers (2023-03-20T18:19:36Z)
- Stochastic Deep Networks with Linear Competing Units for Model-Agnostic Meta-Learning [4.97235247328373]
This work addresses meta-learning (ML) by considering deep networks with local winner-takes-all (LWTA) activations.
Networks of this type produce sparse representations at each layer, since the units are organized into blocks in which only one unit generates a non-zero output.
Our approach produces state-of-the-art predictive accuracy on few-shot image classification and regression experiments, as well as reduced predictive error in an active learning setting.
arXiv Detail & Related papers (2022-08-02T16:19:54Z)
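The block-sparse mechanism described above is easy to state in code. The sketch below shows the deterministic hard-max variant of a local winner-takes-all layer; the paper's units are stochastic, so treat this only as an illustration of the one-surviving-unit-per-block constraint.

```python
import torch

def lwta(x, block_size):
    """Local winner-takes-all: keep only the largest unit in each block."""
    b, f = x.shape                       # feature count must divide into blocks
    blocks = x.view(b, f // block_size, block_size)
    winners = blocks.argmax(dim=-1, keepdim=True)
    mask = torch.zeros_like(blocks).scatter_(-1, winners, 1.0)
    return (blocks * mask).view(b, f)    # one surviving unit per block

x = torch.randn(2, 8)
print(lwta(x, block_size=4))             # at most two non-zero entries per row
```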
- Sparse Relational Reasoning with Object-Centric Representations [78.83747601814669]
We investigate the composability of soft rules learned by relational neural architectures when operating over object-centric representations.
We find that increasing sparsity, especially on features, improves the performance of some models and leads to simpler relations.
arXiv Detail & Related papers (2022-07-15T14:57:33Z)
- Toward Certified Robustness Against Real-World Distribution Shifts [65.66374339500025]
We train a generative model to learn perturbations from data and define specifications with respect to the output of the learned model.
A unique challenge arising from this setting is that existing verifiers cannot tightly approximate sigmoid activations.
We propose a general meta-algorithm for handling sigmoid activations which leverages classical notions of counterexample-guided abstraction refinement.
arXiv Detail & Related papers (2022-06-08T04:09:13Z)
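To see why sigmoids are hard to verify, note that over an input interval a sound verifier must replace the activation with bounds containing every attainable output. The toy below uses the coarsest sound choice, constant bounds from monotonicity; shrinking the gap between such over-approximations and the true curve is what refinement targets. This is my own illustration, not the paper's CEGAR-based meta-algorithm.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sound_bounds(l, u):
    # Sigmoid is monotone increasing, so these constant bounds are sound,
    # but every value in between is admitted, which makes them loose.
    return sigmoid(l), sigmoid(u)

lo, hi = sound_bounds(-1.0, 1.0)
print(lo, hi)   # ~0.269 and ~0.731; the verifier must accept this whole range
```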
- Combining Discrete Choice Models and Neural Networks through Embeddings: Formulation, Interpretability and Performance [10.57079240576682]
This study proposes a novel approach that combines theory-driven and data-driven choice models using Artificial Neural Networks (ANNs).
In particular, we use continuous vector representations, called embeddings, for encoding categorical or discrete explanatory variables.
Our models deliver state-of-the-art predictive performance, outperforming existing ANN-based models while drastically reducing the number of required network parameters.
arXiv Detail & Related papers (2021-09-24T15:55:31Z)
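The parameter savings come from swapping one-hot encodings of categorical explanatory variables for low-dimensional learned embeddings inside the utility function. A bare-bones sketch, with sizes and variable names that are illustrative rather than from the paper:

```python
import torch
import torch.nn as nn

n_categories, emb_dim, n_numeric, n_alternatives = 50, 4, 3, 2
embedding = nn.Embedding(n_categories, emb_dim)  # 4 dims instead of a 50-dim one-hot
utility = nn.Linear(emb_dim + n_numeric, n_alternatives)

cat = torch.randint(0, n_categories, (8,))   # e.g. an occupation code per traveler
num = torch.randn(8, n_numeric)              # e.g. cost, travel time, income
logits = utility(torch.cat([embedding(cat), num], dim=-1))
probs = logits.softmax(dim=-1)               # choice probabilities, as in a logit model
print(probs.shape)                           # torch.Size([8, 2])
```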
- Closed-Form Factorization of Latent Semantics in GANs [65.42778970898534]
A rich set of interpretable dimensions has been shown to emerge in the latent space of Generative Adversarial Networks (GANs) trained to synthesize images.
In this work, we examine the internal representation learned by GANs to reveal the underlying variation factors in an unsupervised manner.
We propose a closed-form factorization algorithm for latent semantic discovery by directly decomposing the pre-trained weights.
arXiv Detail & Related papers (2020-07-13T18:05:36Z)
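The abstract does not spell out the decomposition, so the sketch below follows the eigen-decomposition formulation associated with this paper (SeFa): candidate semantic directions are the top eigenvectors of A^T A, where A is the weight matrix of the layer that projects the latent code. The matrix here is a random stand-in for a pre-trained GAN layer.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(1024, 512))         # stand-in for a pre-trained projection layer

eigvals, eigvecs = np.linalg.eigh(A.T @ A)
directions = eigvecs[:, ::-1][:, :5]     # five directions with the largest eigenvalues

z = rng.normal(size=512)                 # a latent code
z_edited = z + 3.0 * directions[:, 0]    # move along the strongest direction
print(directions.shape, z_edited.shape)  # (512, 5) (512,)
```

No sampling or training is involved, which is what makes the factorization closed-form.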
- Obtaining Faithful Interpretations from Compositional Neural Networks [72.41100663462191]
We evaluate the intermediate outputs of neural module networks (NMNs) on the NLVR2 and DROP datasets.
We find that the intermediate outputs differ from the expected output, illustrating that the network structure does not provide a faithful explanation of model behaviour.
arXiv Detail & Related papers (2020-05-02T06:50:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.