Generalizable Neuro-symbolic Systems for Commonsense Question Answering
- URL: http://arxiv.org/abs/2201.06230v1
- Date: Mon, 17 Jan 2022 06:13:37 GMT
- Title: Generalizable Neuro-symbolic Systems for Commonsense Question Answering
- Authors: Alessandro Oltramari, Jonathan Francis, Filip Ilievski, Kaixin Ma,
Roshanak Mirzaee
- Abstract summary: This chapter illustrates how suitable neuro-symbolic models for language understanding can enable domain generalizability and robustness in downstream tasks.
Different methods for integrating neural language models and knowledge graphs are discussed.
- Score: 67.72218865519493
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This chapter illustrates how suitable neuro-symbolic models for language
understanding can enable domain generalizability and robustness in downstream
tasks. Different methods for integrating neural language models and knowledge
graphs are discussed. The situations in which this combination is most
appropriate are characterized, including quantitative evaluation and
qualitative error analysis on a variety of commonsense question answering
benchmark datasets.
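The chapter surveys several patterns for combining the two components; as a rough, hypothetical illustration of the simplest one (retrieve-then-read), the sketch below retrieves knowledge-graph triples mentioning question concepts and prepends their verbalizations to a language model's input. The toy triple store, templates, and retrieval rule are placeholders, not the authors' implementation.

```python
# Minimal sketch of one neuro-symbolic integration pattern: retrieve
# knowledge-graph triples that mention question concepts and prepend
# their verbalizations to the language-model input. The toy triple
# store and verbalization rules are hypothetical placeholders.

TRIPLES = [
    ("umbrella", "UsedFor", "staying dry"),
    ("rain", "Causes", "getting wet"),
    ("towel", "UsedFor", "drying"),
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return verbalized triples whose subject occurs in the question."""
    hits = [f"{s} is used for {o}." if r == "UsedFor" else f"{s} causes {o}."
            for s, r, o in TRIPLES if s in question.lower()]
    return hits[:k]

def build_prompt(question: str, choices: list[str]) -> str:
    """Concatenate retrieved evidence, the question, and answer options."""
    evidence = " ".join(retrieve(question))
    options = " ".join(f"({i}) {c}" for i, c in enumerate(choices))
    return f"{evidence} Question: {question} Choices: {options} Answer:"

print(build_prompt("What do you use an umbrella for when it starts to rain?",
                   ["staying dry", "getting wet"]))
```

Other integration methods discussed in this line of work fuse knowledge at the representation level rather than the input level; the input-concatenation variant is shown here only because it is the shortest to sketch.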
Related papers
- CoSy: Evaluating Textual Explanations of Neurons [5.696573924249008]
A crucial aspect of understanding the complex nature of Deep Neural Networks (DNNs) is the ability to explain learned concepts within latent representations.
We introduce CoSy -- a novel framework to evaluate the quality of textual explanations for latent neurons.
arXiv Detail & Related papers (2024-05-30T17:59:04Z)
- Nonlinear classification of neural manifolds with contextual information [6.292933471495322]
Manifold capacity has emerged as a promising framework linking population geometry to the separability of neural manifolds.
We propose a theoretical framework that overcomes this limitation by leveraging contextual input information.
Our framework's increased expressivity captures representation untanglement in deep networks at early stages of the layer hierarchy, previously inaccessible to analysis.
arXiv Detail & Related papers (2024-05-10T23:37:31Z)
- Towards Generating Informative Textual Description for Neurons in Language Models [6.884227665279812]
We propose a framework that ties textual descriptions to neurons.
In particular, our experiments show that the proposed approach achieves 75% precision@2 and 50% recall@2.
arXiv Detail & Related papers (2024-01-30T04:06:25Z)
- Towards a fuller understanding of neurons with Clustered Compositional Explanations [8.440673378588489]
We propose a generalization, called Clustered Compositional Explanations, that combines Compositional Explanations with clustering and a novel search to approximate a broader spectrum of the neurons' behavior.
We define and address the problems connected to the application of these methods to multiple ranges of activations, analyze the insights retrievable by using our algorithm, and propose desiderata qualities that can be used to study the explanations returned by different algorithms.
arXiv Detail & Related papers (2023-10-27T19:39:50Z)
- Neural-Symbolic Recursive Machine for Systematic Generalization [113.22455566135757]
We introduce the Neural-Symbolic Recursive Machine (NSR), whose core is a Grounded Symbol System (GSS).
NSR integrates neural perception, syntactic parsing, and semantic reasoning.
We evaluate NSR's efficacy across four challenging benchmarks designed to probe systematic generalization capabilities.
arXiv Detail & Related papers (2022-10-04T13:27:38Z)
- Benchmarking Compositionality with Formal Languages [64.09083307778951]
We investigate whether large neural models in NLP can acquire the ability to combine primitive concepts into larger novel combinations while learning from data.
By randomly sampling over many transducers, we explore which of their properties contribute to learnability of a compositional relation by a neural network.
We find that the models either learn the relations completely or not at all. The key is transition coverage, setting a soft learnability limit at 400 examples per transition.
arXiv Detail & Related papers (2022-08-17T10:03:18Z)
- An Empirical Investigation of Commonsense Self-Supervision with Knowledge Graphs [67.23285413610243]
Self-supervision based on the information extracted from large knowledge graphs has been shown to improve the generalization of language models.
We study the effect of knowledge sampling strategies and sizes that can be used to generate synthetic data for adapting language models. A minimal sampling sketch appears after this list.
arXiv Detail & Related papers (2022-05-21T19:49:04Z)
- Compositional Explanations of Neurons [52.71742655312625]
We describe a procedure for explaining neurons in deep representations by identifying compositional logical concepts.
We use this procedure to answer several questions on interpretability in models for vision and natural language processing. A toy version of the search is sketched after this list.
arXiv Detail & Related papers (2020-06-24T20:37:05Z)
- Compositional Generalization by Learning Analytical Expressions [87.15737632096378]
A memory-augmented neural model is connected with analytical expressions to achieve compositional generalization.
Experiments on the well-known SCAN benchmark demonstrate that our model attains strong compositional generalization.
arXiv Detail & Related papers (2020-06-18T15:50:57Z)
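As noted under the self-supervision entry above, here is a minimal, hypothetical sketch of generating synthetic commonsense training data from a knowledge graph: sample triples, verbalize each with a relation template, and pair the true object with sampled distractors. The triple store, templates, and uniform sampling strategy are illustrative stand-ins, not the paper's protocol.

```python
import random

# Hypothetical sketch of knowledge-graph self-supervision: sample triples,
# verbalize each into a question, and pair the true object with sampled
# distractors to form synthetic multiple-choice training examples.

random.seed(0)

TRIPLES = [
    ("bird", "CapableOf", "fly"),
    ("knife", "UsedFor", "cutting"),
    ("sugar", "HasProperty", "sweet"),
    ("fire", "Causes", "heat"),
]
TEMPLATES = {
    "CapableOf": "What can a {s} do?",
    "UsedFor": "What is a {s} used for?",
    "HasProperty": "What property does {s} have?",
    "Causes": "What does {s} cause?",
}

def synthesize(n: int) -> list[dict]:
    """Sample n triples uniformly and build one QA example per triple."""
    examples = []
    for s, r, o in random.sample(TRIPLES, n):
        distractors = random.sample([t[2] for t in TRIPLES if t[2] != o], 2)
        examples.append({"question": TEMPLATES[r].format(s=s),
                         "choices": [o] + distractors, "answer": o})
    return examples

for ex in synthesize(2):
    print(ex)
```

The knobs the paper studies, sampling strategy and dataset size, correspond here to how triples are drawn and how large `n` is.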
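And for the compositional-explanations entry, a toy version of the underlying idea: binarize a neuron's activations over a set of inputs, then search logical combinations of concept masks for the formula whose mask best overlaps the neuron's, measured by intersection-over-union (IoU). The random data stands in for real activations and concept annotations, and the exhaustive two-concept search is a simplification; the full method searches far richer formulas.

```python
from itertools import combinations

import numpy as np

# Toy compositional neuron explanation: binarize a neuron's activations,
# then score AND/OR combinations of concept masks by IoU overlap and keep
# the best. Random data replaces real activations and annotations.

rng = np.random.default_rng(0)
neuron = rng.random(200) > 0.7          # binarized neuron activations
concepts = {c: rng.random(200) > 0.5 for c in ["water", "blue", "boat"]}

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-union of two boolean masks."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

# Enumerate simple two-concept formulas and keep the best-scoring one.
best = max(
    ((f"{x} {op_name} {y}", op(concepts[x], concepts[y]))
     for x, y in combinations(concepts, 2)
     for op_name, op in [("AND", np.logical_and), ("OR", np.logical_or)]),
    key=lambda pair: iou(neuron, pair[1]),
)
print(f"best explanation: {best[0]} (IoU={iou(neuron, best[1]):.2f})")
```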
This list is automatically generated from the titles and abstracts of the papers on this site.