AKEM: Aligning Knowledge Base to Queries with Ensemble Model for Entity
Recognition and Linking
- URL: http://arxiv.org/abs/2309.06175v2
- Date: Wed, 13 Sep 2023 03:53:43 GMT
- Title: AKEM: Aligning Knowledge Base to Queries with Ensemble Model for Entity
Recognition and Linking
- Authors: Di Lu and Zhongping Liang and Caixia Yuan and Xiaojie Wang
- Abstract summary: This paper presents a novel approach to address the Entity Recognition and Linking Challenge at NLPCC 2015.
The task involves extracting named entity mentions from short search queries and linking them to entities within a reference Chinese knowledge base.
Our method is computationally efficient and achieves an F1 score of 0.535.
- Score: 15.548722102706867
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a novel approach to address the Entity Recognition and
Linking Challenge at NLPCC 2015. The task involves extracting named entity
mentions from short search queries and linking them to entities within a
reference Chinese knowledge base. To tackle this problem, we first expand the
existing knowledge base and utilize external knowledge to identify candidate
entities, thereby improving the recall rate. Next, we extract features from the
candidate entities and utilize Support Vector Regression and Multiple Additive
Regression Trees as scoring functions to filter the results. Additionally, we
apply rules to further refine the results and enhance precision. Our method is
computationally efficient and achieves an F1 score of 0.535.
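The pipeline described in the abstract (expand the knowledge base, generate candidate entities, score them with learned functions, then filter by rules) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy knowledge base, alias table, feature definitions, weights, and threshold are all assumptions, and a hand-set linear scorer stands in for the learned SVR/MART scoring functions.

```python
# Illustrative sketch of an AKEM-style pipeline: KB expansion for recall,
# candidate generation, feature scoring, and rule-based filtering for
# precision. All names, features, and weights here are assumptions; a
# fixed linear function stands in for the learned SVR/MART scorers.

# Toy knowledge base, plus an alias table standing in for the external
# knowledge used to expand it and improve recall.
KB = {"apple": ["Apple Inc.", "apple (fruit)"]}
ALIASES = {"aapl": "apple"}

def candidates(mention):
    # KB expansion step: resolve aliases before looking up candidates.
    key = ALIASES.get(mention, mention)
    return KB.get(key, [])

def features(query, mention, entity):
    # Hypothetical features: character-set overlap with the entity name,
    # and how much of the query the mention covers.
    ent = entity.lower()
    overlap = len(set(mention) & set(ent)) / max(len(set(ent)), 1)
    coverage = len(mention) / max(len(query), 1)
    return (overlap, coverage)

def score(feats, weights=(0.7, 0.3)):
    # Linear stand-in for the learned SVR/MART scoring functions.
    return sum(w * f for w, f in zip(weights, feats))

def link(query, mention, threshold=0.3):
    # Rule-based filtering: keep only candidates above a score threshold,
    # ranked best-first.
    scored = [(e, score(features(query, mention, e))) for e in candidates(mention)]
    return sorted([c for c in scored if c[1] >= threshold], key=lambda t: -t[1])
```

For example, `link("aapl price today", "aapl")` resolves the alias, scores both candidates, and keeps only the higher-scoring `"Apple Inc."` under these toy weights.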
Related papers
- Retrieval-Enhanced Named Entity Recognition [1.2187048691454239]
RENER is a technique for named entity recognition that combines autoregressive language models with in-context learning and information retrieval.
Experimental results show that the proposed technique achieves state-of-the-art performance on the CrossNER collection.
arXiv Detail & Related papers (2024-10-17T01:12:48Z)
- Retriever-and-Memory: Towards Adaptive Note-Enhanced Retrieval-Augmented Generation [72.70046559930555]
We propose a generic RAG approach called Adaptive Note-Enhanced RAG (Adaptive-Note) for complex QA tasks.
Specifically, Adaptive-Note introduces an overarching view of knowledge growth, iteratively gathering new information in the form of notes.
In addition, we employ an adaptive, note-based stop-exploration strategy to decide "what to retrieve and when to stop" to encourage sufficient knowledge exploration.
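The gather-notes-then-decide loop described above can be sketched generically. The retriever, note-updating step, and sufficiency judgment below are simplified placeholders chosen for illustration, not the paper's actual components; only the control flow (retrieve conditioned on the current note, stop once the note suffices) reflects the idea.

```python
# Generic sketch of an adaptive note-based retrieve-and-stop loop: keep
# gathering information into a note, and stop exploring once the note is
# judged sufficient. The components passed in are toy placeholders.

def adaptive_note_loop(question, retrieve, update_note, is_sufficient, max_steps=5):
    note = ""  # accumulated knowledge, kept in the form of notes
    for _ in range(max_steps):
        if is_sufficient(note, question):
            break  # stop-exploration: the note already covers the question
        passages = retrieve(question, note)  # "what to retrieve" depends on the note
        note = update_note(note, passages)
    return note

# Toy components demonstrating the control flow.
docs = ["Paris is the capital of France.", "France is in Europe."]
retrieve = lambda q, note: [d for d in docs if d not in note]
update_note = lambda note, ps: note + ps[0] + " " if ps else note
is_sufficient = lambda note, q: "capital" in note

note = adaptive_note_loop("What is the capital of France?", retrieve,
                          update_note, is_sufficient)
```

With these toy components the loop retrieves once, appends the first passage to the note, and stops on the next check because the note now mentions "capital".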
arXiv Detail & Related papers (2024-10-11T14:03:29Z)
- ReLiK: Retrieve and LinK, Fast and Accurate Entity Linking and Relation Extraction on an Academic Budget [43.35593460866504]
We propose a Retriever-Reader architecture for Entity Linking (EL) and Relation Extraction (RE).
We put forward an innovative input representation that incorporates the candidate entities or relations alongside the text.
Our formulation of EL and RE achieves state-of-the-art performance in both in-domain and out-of-domain benchmarks.
arXiv Detail & Related papers (2024-07-31T18:25:49Z)
- Optimized Feature Generation for Tabular Data via LLMs with Decision Tree Reasoning [53.241569810013836]
We propose OCTree, a new framework based on large language models (LLMs) and decision tree reasoning.
Our key idea is to leverage LLMs' reasoning capabilities to find good feature generation rules without manually specifying the search space.
Our empirical results demonstrate that this simple framework consistently enhances the performance of various prediction models.
arXiv Detail & Related papers (2024-06-12T08:31:34Z)
- Entity Disambiguation via Fusion Entity Decoding [68.77265315142296]
We propose an encoder-decoder model to disambiguate entities with more detailed entity descriptions.
We observe +1.5% improvements in end-to-end entity linking in the GERBIL benchmark compared with EntQA.
arXiv Detail & Related papers (2024-04-02T04:27:54Z)
- Learning to Extract Structured Entities Using Language Models [52.281701191329]
Recent advances in machine learning have significantly impacted the field of information extraction.
We reformulate the task to be entity-centric, enabling the use of diverse metrics.
We contribute to the field by introducing Structured Entity Extraction and proposing the Approximate Entity Set OverlaP (AESOP) metric.
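The entity-centric evaluation idea can be illustrated with a plain precision/recall/F1 computed over sets of extracted entities. This exact-match overlap is far simpler than the AESOP metric named above and is shown only to make the notion of entity-set scoring concrete.

```python
# Entity-centric evaluation illustration: score an extraction system by
# comparing its predicted entity set against a gold entity set. This is a
# plain exact-match F1, a simplified stand-in for approximate-overlap
# metrics such as AESOP.

def entity_set_f1(predicted, gold):
    pred, ref = set(predicted), set(gold)
    tp = len(pred & ref)                       # entities found in both sets
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(ref) if ref else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

For instance, predicting `{"a", "b"}` against gold `{"b", "c"}` gives precision 0.5 and recall 0.5, hence F1 = 0.5.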
arXiv Detail & Related papers (2024-02-06T22:15:09Z)
- DIVKNOWQA: Assessing the Reasoning Ability of LLMs via Open-Domain Question Answering over Knowledge Base and Text [73.68051228972024]
Large Language Models (LLMs) have exhibited impressive generation capabilities, but they suffer from hallucinations when relying on their internal knowledge.
Retrieval-augmented LLMs have emerged as a potential solution to ground LLMs in external knowledge.
arXiv Detail & Related papers (2023-10-31T04:37:57Z)
- Enriching Relation Extraction with OpenIE [70.52564277675056]
Relation extraction (RE) is a sub-discipline of information extraction (IE).
In this work, we explore how recent approaches for open information extraction (OpenIE) may help to improve the task of RE.
Our experiments over two annotated corpora, KnowledgeNet and FewRel, demonstrate the improved accuracy of our enriched models.
arXiv Detail & Related papers (2022-12-19T11:26:23Z)
- DAMO-NLP at NLPCC-2022 Task 2: Knowledge Enhanced Robust NER for Speech Entity Linking [32.915297772110364]
Speech Entity Linking aims to recognize and disambiguate named entities in spoken languages.
Conventional methods suffer from the unfettered speech styles and the noisy transcripts generated by ASR systems.
We propose Knowledge Enhanced Named Entity Recognition (KENER), which focuses on improving robustness through painlessly incorporating proper knowledge in the entity recognition stage.
Our system achieves 1st place in Track 1 and 2nd place in Track 2 of NLPCC-2022 Shared Task 2.
arXiv Detail & Related papers (2022-09-27T06:43:56Z)
- Injecting Knowledge Base Information into End-to-End Joint Entity and Relation Extraction and Coreference Resolution [13.973471173349072]
We study how to inject information from a knowledge base (KB) into such an IE model, based on unsupervised entity linking.
The KB entity representations used are learned from either (i) hyperlinked text documents (Wikipedia) or (ii) a knowledge graph (Wikidata).
arXiv Detail & Related papers (2021-07-05T21:49:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.