Efficient One-Pass End-to-End Entity Linking for Questions
- URL: http://arxiv.org/abs/2010.02413v1
- Date: Tue, 6 Oct 2020 01:14:10 GMT
- Title: Efficient One-Pass End-to-End Entity Linking for Questions
- Authors: Belinda Z. Li, Sewon Min, Srinivasan Iyer, Yashar Mehdad and Wen-tau
Yih
- Abstract summary: We present ELQ, a fast end-to-end entity linking model for questions.
It uses a biencoder to jointly perform mention detection and linking in one pass.
With its very fast inference time (1.57 examples/s on a single CPU), ELQ can be useful for downstream question answering systems.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present ELQ, a fast end-to-end entity linking model for questions, which
uses a biencoder to jointly perform mention detection and linking in one pass.
Evaluated on WebQSP and GraphQuestions with extended annotations that cover
multiple entities per question, ELQ outperforms the previous state of the art
by a large margin of +12.7% and +19.6% F1, respectively. With a very fast
inference time (1.57 examples/s on a single CPU), ELQ can be useful for
downstream question answering systems. In a proof-of-concept experiment, we
demonstrate that using ELQ significantly improves the downstream QA performance
of GraphRetriever (arXiv:1911.03868). Code and data available at
https://github.com/facebookresearch/BLINK/tree/master/elq
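The abstract describes a biencoder that performs mention detection and entity linking in a single pass: the question is encoded once, candidate spans are scored for mention-ness, and each span is matched against precomputed entity embeddings by dot-product similarity. Below is a minimal illustrative sketch of that idea, not the ELQ implementation; all function and parameter names (`elq_one_pass`, `start_w`, `end_w`, the mean-pooled span representation, the additive scoring) are hypothetical simplifications.

```python
import numpy as np

def elq_one_pass(token_embs, entity_embs, start_w, end_w,
                 max_span_len=3, threshold=0.0):
    """One-pass mention detection + linking (hypothetical sketch).

    token_embs:  (T, d) question token embeddings from a question encoder.
    entity_embs: (E, d) precomputed entity embeddings (the other encoder).
    start_w, end_w: (d,) weight vectors scoring span starts / ends.
    Returns a list of ((start, end), entity_id, score) tuples.
    """
    T, _ = token_embs.shape
    # Mention detection: score every token as a possible span start / end.
    start_scores = token_embs @ start_w   # (T,)
    end_scores = token_embs @ end_w       # (T,)

    results = []
    for i in range(T):
        for j in range(i, min(T, i + max_span_len)):
            mention_score = start_scores[i] + end_scores[j]
            if mention_score < threshold:
                continue  # prune unlikely mentions before linking
            # Linking: pool the span and compare against all entities at once.
            span_emb = token_embs[i:j + 1].mean(axis=0)
            link_scores = entity_embs @ span_emb
            best = int(np.argmax(link_scores))
            results.append(((i, j), best,
                            float(mention_score + link_scores[best])))
    return results
```

Because the entity embeddings are precomputed and both stages reuse the same single encoding of the question, the per-question cost is dominated by one encoder forward pass plus dot products, which is what makes the one-pass design fast at inference time.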
Related papers
- The benefits of query-based KGQA systems for complex and temporal questions in LLM era [55.20230501807337]
Large language models excel in question answering (QA) yet still struggle with multi-hop reasoning and temporal questions. Query-based knowledge graph QA (KGQA) offers a modular alternative by generating executable queries instead of direct answers. We propose a multi-stage query-based framework for Wikidata QA that improves performance on challenging multi-hop and temporal benchmarks.
arXiv Detail & Related papers (2025-07-16T06:41:03Z)
- Can LLMs Evaluate Complex Attribution in QA? Automatic Benchmarking using Knowledge Graphs [33.87001216244801]
Attributed Question Answering (AQA) has attracted wide attention, but several limitations remain in evaluating attributions. We introduce Complex Attributed Question Answering (CAQA), a large-scale benchmark containing comprehensive attribution categories. We have conducted extensive experiments to verify the effectiveness of CAQA.
arXiv Detail & Related papers (2024-01-26T04:11:07Z)
- ANetQA: A Large-scale Benchmark for Fine-grained Compositional Reasoning over Untrimmed Videos [120.80589215132322]
We present ANetQA, a large-scale benchmark that supports fine-grained compositional reasoning over challenging untrimmed videos from ActivityNet.
ANetQA contains 1.4 billion unbalanced and 13.4 million balanced QA pairs, an order of magnitude more than AGQA with a similar number of videos.
The best model achieves 44.5% accuracy while human performance tops out at 84.5%, leaving sufficient room for improvement.
arXiv Detail & Related papers (2023-05-04T03:04:59Z)
- Open-domain Question Answering via Chain of Reasoning over Heterogeneous Knowledge [82.5582220249183]
We propose a novel open-domain question answering (ODQA) framework for answering single/multi-hop questions across heterogeneous knowledge sources.
Unlike previous methods that solely rely on the retriever for gathering all evidence in isolation, our intermediary performs a chain of reasoning over the retrieved set.
Our system achieves competitive performance on two ODQA datasets, OTT-QA and NQ, using tables and passages from Wikipedia as knowledge sources.
arXiv Detail & Related papers (2022-10-22T03:21:32Z)
- NOAHQA: Numerical Reasoning with Interpretable Graph Question Answering Dataset [26.782937852417454]
We introduce NOAHQA, a bilingual QA dataset with questions requiring numerical reasoning with compound mathematical expressions.
We evaluate state-of-the-art QA models trained on existing QA datasets against NOAHQA and show that the best among them achieves an exact match score of only 55.5.
We also present a new QA model for generating a reasoning graph, whose reasoning-graph metric still shows a large gap compared with human performance.
arXiv Detail & Related papers (2021-09-22T09:17:09Z)
- Relation-Guided Pre-Training for Open-Domain Question Answering [67.86958978322188]
We propose a Relation-Guided Pre-Training (RGPT-QA) framework to solve complex open-domain questions.
We show that RGPT-QA achieves 2.2%, 2.4%, and 6.3% absolute improvements in Exact Match accuracy on Natural Questions, TriviaQA, and WebQuestions, respectively.
arXiv Detail & Related papers (2021-09-21T17:59:31Z)
- Efficient Contextualization using Top-k Operators for Question Answering over Knowledge Graphs [24.520002698010856]
This work presents ECQA, an efficient method that prunes irrelevant parts of the search space using KB-aware signals.
Experiments with two recent QA benchmarks demonstrate the superiority of ECQA over state-of-the-art baselines with respect to answer presence, size of the search space, and runtimes.
arXiv Detail & Related papers (2021-08-19T10:06:14Z)
- NExT-QA: Next Phase of Question-Answering to Explaining Temporal Actions [80.60423934589515]
We introduce NExT-QA, a rigorously designed video question answering (VideoQA) benchmark.
We set up multi-choice and open-ended QA tasks targeting causal action reasoning, temporal action reasoning, and common scene comprehension.
We find that top-performing methods excel at shallow scene descriptions but are weak in causal and temporal action reasoning.
arXiv Detail & Related papers (2021-05-18T04:56:46Z)
- Fluent Response Generation for Conversational Question Answering [15.826109118064716]
We propose a method for situating responses within a SEQ2SEQ NLG approach to generate fluent grammatical answer responses.
We use data augmentation to generate training data for an end-to-end system.
arXiv Detail & Related papers (2020-05-21T04:57:01Z)
- Harvesting and Refining Question-Answer Pairs for Unsupervised QA [95.9105154311491]
We introduce two approaches to improve unsupervised Question Answering (QA).
First, we harvest lexically and syntactically divergent questions from Wikipedia to automatically construct a corpus of question-answer pairs (named RefQA).
Second, we take advantage of the QA model to extract more appropriate answers, iteratively refining the data in RefQA.
arXiv Detail & Related papers (2020-05-06T15:56:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.