LNN-EL: A Neuro-Symbolic Approach to Short-text Entity Linking
- URL: http://arxiv.org/abs/2106.09795v1
- Date: Thu, 17 Jun 2021 20:22:45 GMT
- Title: LNN-EL: A Neuro-Symbolic Approach to Short-text Entity Linking
- Authors: Hang Jiang, Sairam Gurajada, Qiuhao Lu, Sumit Neelam, Lucian Popa,
Prithviraj Sen, Yunyao Li, Alexander Gray
- Abstract summary: We propose LNN-EL, a neuro-symbolic approach that combines the advantages of using interpretable rules with the performance of neural learning.
Even though constrained to using rules, LNN-EL performs competitively against SotA black-box neural approaches.
- Score: 62.634516517844496
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Entity linking (EL), the task of disambiguating mentions in text by linking
them to entities in a knowledge graph, is crucial for text understanding,
question answering or conversational systems. Entity linking on short text
(e.g., single sentence or question) poses particular challenges due to limited
context. While prior approaches use either heuristics or black-box neural
methods, here we propose LNN-EL, a neuro-symbolic approach that combines the
advantages of using interpretable rules based on first-order logic with the
performance of neural learning. Even though constrained to using rules, LNN-EL
performs competitively against SotA black-box neural approaches, with the added
benefits of extensibility and transferability. In particular, we show that we
can easily blend existing rule templates given by a human expert, with multiple
types of features (priors, BERT encodings, box embeddings, etc.), and even
scores resulting from previous EL methods, thus improving on such methods. For
instance, on the LC-QuAD-1.0 dataset, we show more than 4% increase in F1
score over previous SotA. Finally, we show that the inductive bias offered by
using logic results in learned rules that transfer well across datasets, even
without fine-tuning, while maintaining high accuracy.
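To make the rule-blending idea concrete, here is a minimal PyTorch sketch of LNN-style scoring for entity linking, using one common weighted Lukasiewicz parameterization of the logic gates; the two rule templates and the four candidate features are illustrative placeholders, not the paper's actual rule set.
```python
import torch
import torch.nn as nn

def clamp01(x):
    return torch.clamp(x, 0.0, 1.0)

class LNNAnd(nn.Module):
    """Weighted Lukasiewicz AND: clamp01(beta - sum_i w_i * (1 - x_i))."""
    def __init__(self, n_in):
        super().__init__()
        self.beta = nn.Parameter(torch.ones(1))
        self.w = nn.Parameter(torch.ones(n_in))
    def forward(self, x):                        # x: (batch, n_in) in [0, 1]
        return clamp01(self.beta - (torch.relu(self.w) * (1.0 - x)).sum(-1))

class LNNOr(nn.Module):
    """Weighted Lukasiewicz OR: clamp01(1 - beta + sum_i w_i * x_i)."""
    def __init__(self, n_in):
        super().__init__()
        self.beta = nn.Parameter(torch.ones(1))
        self.w = nn.Parameter(torch.ones(n_in))
    def forward(self, x):
        return clamp01(1.0 - self.beta + (torch.relu(self.w) * x).sum(-1))

class LNNELScorer(nn.Module):
    """Score a (mention, candidate) pair as Rule1 OR Rule2 over features."""
    def __init__(self):
        super().__init__()
        self.rule1 = LNNAnd(2)   # e.g., name-similarity prior AND context sim
        self.rule2 = LNNAnd(2)   # e.g., popularity prior AND type agreement
        self.disj = LNNOr(2)
    def forward(self, feats):                    # feats: (batch, 4) in [0, 1]
        r1 = self.rule1(feats[:, :2])
        r2 = self.rule2(feats[:, 2:])
        return self.disj(torch.stack([r1, r2], dim=-1))

scorer = LNNELScorer()
scores = scorer(torch.rand(8, 4))   # train with a margin loss over candidates
```
Because the learned weights stay attached to named rules and features, the scorer can be read off as a weighted logical formula, which is what makes the approach extensible and transferable.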
Related papers
- Enhanced Expressivity in Graph Neural Networks with Lanczos-Based Linear Constraints [7.605749412696919]
Graph Neural Networks (GNNs) excel in handling graph-structured data but often underperform in link prediction tasks.
We present a novel method to enhance the expressivity of GNNs by embedding induced subgraphs into the graph Laplacian matrix's eigenbasis.
Our method achieves 20x and 10x speedups while requiring only 5% and 10% of the data from the PubMed and OGBL-Vessel datasets, respectively.
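As a rough illustration of the spectral machinery such a method builds on (not the paper's subgraph-embedding procedure itself), the sketch below computes the k smallest eigenpairs of a normalized graph Laplacian with SciPy's Lanczos-based eigsh:
```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def laplacian_eigenbasis(adj: sp.csr_matrix, k: int = 16):
    deg = np.asarray(adj.sum(axis=1)).ravel()
    d_inv_sqrt = sp.diags(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    lap = sp.identity(adj.shape[0]) - d_inv_sqrt @ adj @ d_inv_sqrt
    # eigsh uses implicitly restarted Lanczos; 'SM' asks for the smallest
    # eigenvalues, whose eigenvectors form the smooth spectral basis.
    vals, vecs = eigsh(lap, k=k, which='SM')
    return vals, vecs          # columns of vecs are the basis vectors

# Toy usage: a 4-node cycle graph.
rows = [0, 1, 1, 2, 2, 3, 3, 0]
cols = [1, 0, 2, 1, 3, 2, 0, 3]
adj = sp.csr_matrix((np.ones(8), (rows, cols)), shape=(4, 4))
vals, basis = laplacian_eigenbasis(adj, k=2)
```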
arXiv Detail & Related papers (2024-08-22T12:22:00Z)
- Neural Symbolic Logical Rule Learner for Interpretable Learning [1.9526476410335776]
Rule-based neural networks stand out for enabling interpretable classification by learning logical rules for both prediction and interpretation.
We introduce the Normal Form Rule Learner (NFRL) algorithm, leveraging a selective discrete neural network, to learn rules in both Conjunctive Normal Form (CNF) and Disjunctive Normal Form (DNF).
Through extensive experiments on 11 datasets, NFRL demonstrates superior classification performance, quality of learned rules, efficiency and interpretability compared to 12 state-of-the-art alternatives.
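A compact, hypothetical PyTorch sketch of the generic differentiable-DNF idea behind such rule learners (NFRL's selective discrete network is more involved):
```python
import torch
import torch.nn as nn

class SoftDNF(nn.Module):
    def __init__(self, n_features, n_conjunctions):
        super().__init__()
        # One soft selection mask per (conjunction, literal);
        # literals are each feature x and its negation 1 - x.
        self.select = nn.Parameter(torch.zeros(n_conjunctions, 2 * n_features))
    def forward(self, x):                            # x: (batch, F) in [0, 1]
        lits = torch.cat([x, 1.0 - x], dim=-1)       # (batch, 2F)
        m = torch.sigmoid(self.select)               # soft literal selection
        # Soft AND: an unselected literal contributes 1 (no constraint).
        conj = torch.prod(1.0 - m * (1.0 - lits.unsqueeze(1)), dim=-1)
        # Soft OR over the conjunctions.
        return 1.0 - torch.prod(1.0 - conj, dim=-1)  # (batch,)

model = SoftDNF(n_features=4, n_conjunctions=3)
y = model(torch.rand(8, 4))
# After training, thresholding the masks at 0.5 reads out explicit DNF rules.
```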
arXiv Detail & Related papers (2024-08-21T18:09:12Z)
- ELCoRec: Enhance Language Understanding with Co-Propagation of Numerical and Categorical Features for Recommendation [38.64175351885443]
Large language models have been flourishing in the natural language processing (NLP) domain.
Despite the intelligence shown by recommendation-oriented fine-tuned models, LLMs struggle to fully understand user behavior patterns.
Existing works fine-tune a single LLM on the given text data without introducing the important numerical and categorical feature information.
arXiv Detail & Related papers (2024-06-27T01:37:57Z)
- LINC: A Neurosymbolic Approach for Logical Reasoning by Combining Language Models with First-Order Logic Provers [60.009969929857704]
Logical reasoning is an important task for artificial intelligence with potential impacts on science, mathematics, and society.
In this work, we reformulate such tasks as modular neurosymbolic programming, which we call LINC.
We observe significant performance gains on FOLIO and a balanced subset of ProofWriter for three different models in nearly all experimental conditions we evaluate.
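A hedged sketch of the LINC-style pipeline, with the language-model translation step stubbed out by a hypothetical `translate_to_fol` and entailment checked by NLTK's resolution prover:
```python
from nltk.sem import Expression
from nltk.inference import ResolutionProver

read = Expression.fromstring

def translate_to_fol(sentence: str) -> str:
    # Placeholder: in LINC, a language model performs this translation.
    canned = {
        "All men are mortal.": "all x.(man(x) -> mortal(x))",
        "Socrates is a man.": "man(socrates)",
        "Socrates is mortal.": "mortal(socrates)",
    }
    return canned[sentence]

premises = [read(translate_to_fol(s))
            for s in ["All men are mortal.", "Socrates is a man."]]
goal = read(translate_to_fol("Socrates is mortal."))
print(ResolutionProver().prove(goal, premises))  # True: the conclusion follows
```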
arXiv Detail & Related papers (2023-10-23T17:58:40Z)
- Knowledge Enhanced Neural Networks for relational domains [83.9217787335878]
We focus on a specific method, KENN, a Neural-Symbolic architecture that injects prior logical knowledge into a neural network.
In this paper, we propose an extension of KENN for relational data.
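A rough PyTorch sketch of the clause-enhancement idea; the boost rule below is simplified relative to KENN's, and the smoker/cancer clause is just a stock example:
```python
import torch
import torch.nn as nn

class ClauseEnhancer(nn.Module):
    """Nudge pre-activations toward satisfying one disjunctive clause,
    e.g. smokes(x) -> cancer(x), encoded as (NOT smokes OR cancer)."""
    def __init__(self, signs):
        super().__init__()
        # +1 for a positive literal, -1 for a negated one.
        self.signs = torch.tensor(signs, dtype=torch.float32)
        self.clause_weight = nn.Parameter(torch.tensor(1.0))
    def forward(self, z):                 # z: (batch, n_preds) pre-activations
        lit = self.signs * z              # literal pre-activations
        # Boost mostly the literal that most cheaply satisfies the clause.
        boost = torch.softmax(lit, dim=-1) * torch.relu(self.clause_weight)
        return z + self.signs * boost

enhancer = ClauseEnhancer(signs=[-1.0, 1.0])   # NOT smokes(x) OR cancer(x)
z = torch.randn(8, 2)                          # base-network outputs
probs = torch.sigmoid(enhancer(z))
```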
arXiv Detail & Related papers (2022-05-31T13:00:34Z)
- Rethinking Nearest Neighbors for Visual Classification [56.00783095670361]
k-NN is a lazy learning method that aggregates the distances between the test image and its top-k neighbors in a training set.
We adopt k-NN with pre-trained visual representations produced by either supervised or self-supervised methods in two steps.
Via extensive experiments on a wide range of classification tasks, our study reveals the generality and flexibility of k-NN integration.
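A minimal NumPy sketch of this two-step recipe, with `train_feats` standing in for features from any frozen supervised or self-supervised encoder:
```python
import numpy as np

def knn_predict(train_feats, train_labels, test_feat, k=5):
    # Cosine similarity via L2-normalized dot products.
    a = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    b = test_feat / np.linalg.norm(test_feat)
    sims = a @ b
    topk = np.argsort(-sims)[:k]
    # Similarity-weighted vote over the neighbors' labels.
    votes = np.bincount(train_labels[topk], weights=sims[topk])
    return int(np.argmax(votes))

rng = np.random.default_rng(0)
train_feats = rng.normal(size=(100, 64))       # stand-in encoder outputs
train_labels = rng.integers(0, 10, size=100)
print(knn_predict(train_feats, train_labels, rng.normal(size=64)))
```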
arXiv Detail & Related papers (2021-12-15T20:15:01Z)
- Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks [65.23508422635862]
We propose learning rules with the recently proposed Logical Neural Networks (LNN).
Compared to alternatives, LNNs offer a strong connection to classical Boolean logic.
Our experiments on standard benchmarking tasks confirm that LNN rules are highly interpretable.
arXiv Detail & Related papers (2021-12-06T19:38:30Z)
- InsertGNN: Can Graph Neural Networks Outperform Humans in TOEFL Sentence Insertion Problem? [66.70154236519186]
Sentence insertion is a delicate but fundamental NLP problem.
Current approaches in sentence ordering, text coherence, and question answering (QA) are neither suitable nor good at solving it.
We propose InsertGNN, a model that represents the problem as a graph and adopts a Graph Neural Network (GNN) to learn the connections between sentences.
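A speculative PyTorch sketch of that graph framing, illustrating the formulation only, not InsertGNN's actual architecture: nodes are the candidate sentence plus each paragraph sentence, edges link adjacent sentences and the candidate to every position, and one round of mean-aggregation message passing scores each position.
```python
import torch
import torch.nn as nn

class TinyGNNScorer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(dim, dim)
        self.score = nn.Linear(dim, 1)
    def forward(self, x, adj):            # x: (n, dim), adj: (n, n) 0/1
        deg = adj.sum(-1, keepdim=True).clamp(min=1.0)
        h = torch.relu(x + self.msg(adj @ x) / deg)   # mean-aggregated messages
        return self.score(h).squeeze(-1)              # one score per node

n, dim = 5, 32                            # 1 candidate + 4 paragraph sentences
x = torch.randn(n, dim)                   # stand-in sentence embeddings
adj = torch.zeros(n, n)
for i in range(1, n - 1):                 # chain edges between sentences
    adj[i, i + 1] = adj[i + 1, i] = 1.0
adj[0, 1:] = adj[1:, 0] = 1.0             # candidate connects to all positions
position_scores = TinyGNNScorer(dim)(x, adj)[1:]  # argmax = insertion point
```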
arXiv Detail & Related papers (2021-03-28T06:50:31Z)
- Exploring Classic and Neural Lexical Translation Models for Information Retrieval: Interpretability, Effectiveness, and Efficiency Benefits [0.11421942894219898]
We use the neural Model 1 as an aggregator layer applied to context-free or contextualized query/document embeddings.
We show that adding an interpretable neural Model 1 layer on top of BERT-based contextualized embeddings does not decrease accuracy or efficiency.
We produced the best neural and non-neural runs on the MS MARCO document ranking leaderboard in late 2020.
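A condensed NumPy sketch of IBM Model 1 as an aggregator over token embeddings; the softmax-based translation table is an illustrative simplification, not the paper's exact parameterization:
```python
import numpy as np

def model1_score(q_emb, d_emb, temp=1.0):
    # q_emb: (nq, dim) query token embeddings; d_emb: (nd, dim) doc tokens.
    logits = (q_emb @ d_emb.T) / temp            # (nq, nd) similarity scores
    trans = np.exp(logits - logits.max(axis=1, keepdims=True))
    trans /= trans.sum(axis=1, keepdims=True)    # per-token alignment weights
    tok_prob = trans.mean(axis=1)                # uniform mixture over doc tokens
    return np.log(tok_prob + 1e-9).sum()         # log-score of query given doc

rng = np.random.default_rng(0)
score = model1_score(rng.normal(size=(4, 64)), rng.normal(size=(12, 64)))
```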
arXiv Detail & Related papers (2021-02-12T23:21:55Z)