Survey on English Entity Linking on Wikidata
- URL: http://arxiv.org/abs/2112.01989v1
- Date: Fri, 3 Dec 2021 16:02:42 GMT
- Title: Survey on English Entity Linking on Wikidata
- Authors: Cedric Möller, Jens Lehmann, Ricardo Usbeck
- Abstract summary: Wikidata is a frequently updated, community-driven, and multilingual knowledge graph.
Current Wikidata-specific Entity Linking datasets do not differ in their annotation scheme from schemes for other knowledge graphs like DBpedia.
Almost all approaches employ specific properties like labels and sometimes descriptions but ignore characteristics such as the hyper-relational structure.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Wikidata is a frequently updated, community-driven, and multilingual
knowledge graph. Hence, Wikidata is an attractive basis for Entity Linking,
which is evident by the recent increase in published papers. This survey
focuses on four subjects: (1) Which Wikidata Entity Linking datasets exist, how
widely used are they, and how are they constructed? (2) Do the characteristics
of Wikidata matter for the design of Entity Linking datasets and, if so, how?
(3) How do current Entity Linking approaches exploit the specific
characteristics of Wikidata? (4) Which Wikidata characteristics are unexploited
by existing Entity Linking approaches? This survey reveals that current
Wikidata-specific Entity Linking datasets do not differ in their annotation
scheme from schemes for other knowledge graphs like DBpedia. Thus, the
potential for multilingual and time-dependent datasets, for which Wikidata is
naturally suited, remains untapped. Furthermore, we show that most Entity
Linking approaches use Wikidata in the same way as any other knowledge graph,
missing the chance to leverage Wikidata-specific characteristics to increase
quality.
Almost all approaches employ specific properties like labels and sometimes
descriptions but ignore characteristics such as the hyper-relational structure.
Hence, there is still room for improvement, for example, by including
hyper-relational graph embeddings or type information. Many approaches also
include information from Wikipedia, which is easily combined with Wikidata and
provides the valuable textual context that Wikidata itself lacks.
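To make the missed opportunity concrete, here is a minimal sketch (our own illustration, not code from the survey; class and method names are assumptions) of the difference between the plain triple view most approaches use and the hyper-relational view, in which a statement keeps its qualifier pairs, e.g. as the flattened (s, p, o, qp1, qv1, ...) sequence consumed by hyper-relational embedding models such as StarE:

```python
from dataclasses import dataclass, field

@dataclass
class Statement:
    """A Wikidata statement: a main triple plus qualifier (property, value) pairs."""
    subject: str                      # e.g. "Q937"   (Albert Einstein)
    predicate: str                    # e.g. "P69"    (educated at)
    obj: str                          # e.g. "Q11942" (ETH Zurich)
    qualifiers: list[tuple[str, str]] = field(default_factory=list)

    def as_plain_triple(self) -> tuple[str, str, str]:
        # The view used by most surveyed approaches: qualifiers are dropped.
        return (self.subject, self.predicate, self.obj)

    def as_hyper_relational_input(self) -> list[str]:
        # Flattened (s, p, o, qp1, qv1, ...) sequence that keeps the qualifiers.
        flat = [self.subject, self.predicate, self.obj]
        for qual_prop, qual_value in self.qualifiers:
            flat.extend((qual_prop, qual_value))
        return flat

stmt = Statement("Q937", "P69", "Q11942",
                 qualifiers=[("P582", "1900")])  # end time, simplified literal
print(stmt.as_plain_triple())            # ('Q937', 'P69', 'Q11942')
print(stmt.as_hyper_relational_input())  # ['Q937', 'P69', 'Q11942', 'P582', '1900']
```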
Related papers
- Towards a Brazilian History Knowledge Graph
We construct a knowledge graph for Brazilian history based on the Brazilian Dictionary of Historical Biographies (DHBB) and Wikipedia/Wikidata.
We show that many terms/entities described in the DHBB do not have corresponding concepts (or Q items) in Wikidata.
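As a toy illustration of such a gap check (our own sketch, not the paper's pipeline; the helper name is made up), one can probe Wikidata's public wbsearchentities API for a DHBB term and see whether any Q item comes back:

```python
import requests

WIKIDATA_API = "https://www.wikidata.org/w/api.php"

def candidate_q_items(term: str, language: str = "pt") -> list[str]:
    """Return Q-ids whose labels/aliases match the term; an empty list hints at a gap."""
    params = {
        "action": "wbsearchentities",
        "search": term,
        "language": language,
        "format": "json",
    }
    response = requests.get(WIKIDATA_API, params=params, timeout=10)
    response.raise_for_status()
    return [hit["id"] for hit in response.json().get("search", [])]

# DHBB terms that return an empty list are candidates for missing Q items.
print(candidate_q_items("Getúlio Vargas"))
```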
arXiv Detail & Related papers (2024-03-28T22:05:32Z) - KIF: A Wikidata-Based Framework for Integrating Heterogeneous Knowledge Sources [0.45141207783683707]
We present a Wikidata-based framework, called KIF, for virtually integrating heterogeneous knowledge sources.
KIF is written in Python and released as open source.
arXiv Detail & Related papers (2024-03-15T13:46:36Z) - Leveraging Wikidata's edit history in knowledge graph refinement tasks [77.34726150561087]
The edit history represents the process by which the community reaches a kind of fuzzy, distributed consensus.
We build a dataset containing the edit history of every instance from the 100 most important classes in Wikidata.
We propose and evaluate two new methods to leverage this edit history information in knowledge graph embedding models for type prediction tasks.
arXiv Detail & Related papers (2022-10-27T14:32:45Z) - Mapping Process for the Task: Wikidata Statements to Text as Wikipedia
Sentences [68.8204255655161]
We propose our mapping process for the task of converting Wikidata statements to natural language text (WS2T) for Wikipedia projects at the sentence level.
The main step is to organize statements, represented as a group of quadruples and triples, and then to map them to corresponding sentences in English Wikipedia.
We evaluate the output corpus in various aspects: sentence structure analysis, noise filtering, and relationships between sentence components based on word embedding models.
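A rough sketch of the data shapes involved (our illustration; names and fields are assumptions, not the paper's code): statements arrive as triples, or as quadruples when a qualifier value is attached, and are grouped per subject before being matched against Wikipedia sentences:

```python
# Illustrative shapes only; the WS2T paper's actual representation may differ.
Triple = tuple[str, str, str]          # (subject, property, value)
Quadruple = tuple[str, str, str, str]  # (subject, property, value, qualifier value)

def group_by_subject(statements):
    """Group statements so each subject's facts map to one candidate sentence."""
    grouped: dict[str, list] = {}
    for stmt in statements:
        grouped.setdefault(stmt[0], []).append(stmt)
    return grouped

statements = [
    ("Q937", "P569", "1879-03-14"),     # date of birth
    ("Q937", "P69", "Q11942", "1900"),  # educated at, with end-time qualifier
]
print(group_by_subject(statements))
```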
arXiv Detail & Related papers (2022-10-23T08:34:33Z) - Does Wikidata Support Analogical Reasoning? [17.68704739786042]
We investigate whether the knowledge in Wikidata supports analogical reasoning.
We show that Wikidata can be used to create data for analogy classification.
We devise a set of metrics to guide an automatic method for extracting analogies from Wikidata.
arXiv Detail & Related papers (2022-10-02T20:46:52Z) - WikiDes: A Wikipedia-Based Dataset for Generating Short Descriptions
from Paragraphs [66.88232442007062]
We introduce WikiDes, a dataset to generate short descriptions of Wikipedia articles.
The dataset consists of over 80k English samples on 6987 topics.
Our paper shows a practical impact on Wikipedia and Wikidata, since thousands of descriptions are missing.
arXiv Detail & Related papers (2022-09-27T01:28:02Z) - Enriching Wikidata with Linked Open Data [4.311189028205597]
Current linked open data (LOD) tools are not suited to enriching large graphs like Wikidata.
We present a novel workflow that includes gap detection, source selection, schema alignment, and semantic validation.
Our experiments show that our workflow can enrich Wikidata with millions of novel, high-quality statements from external LOD sources.
arXiv Detail & Related papers (2022-07-01T01:50:24Z) - Improving Candidate Retrieval with Entity Profile Generation for
Wikidata Entity Linking [76.00737707718795]
We propose a novel candidate retrieval paradigm based on entity profiling.
We use the profile to query the indexed search engine to retrieve candidate entities.
Our approach complements the traditional approach of using a Wikipedia anchor-text dictionary.
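A minimal stand-in for that retrieval step (a sketch under our own assumptions: the rank_bm25 package replaces the paper's indexed search engine, and the toy index is fabricated):

```python
from rank_bm25 import BM25Okapi  # pip install rank-bm25

# Fabricated toy index: entity id -> label plus description text.
entity_docs = {
    "Q_TOY_1": "jaguar large cat native to the americas",
    "Q_TOY_2": "jaguar british manufacturer of luxury cars",
}
ids, corpus = list(entity_docs), list(entity_docs.values())
bm25 = BM25Okapi([doc.split() for doc in corpus])

# The generated entity profile (e.g. a predicted description) is the query.
profile = "luxury car maker"
scores = bm25.get_scores(profile.split())
best = max(range(len(ids)), key=scores.__getitem__)
print(ids[best])  # -> Q_TOY_2
```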
arXiv Detail & Related papers (2022-02-27T17:38:53Z) - Assessing the quality of sources in Wikidata across languages: a hybrid
approach [64.05097584373979]
We run a series of microtasks experiments to evaluate a large corpus of references, sampled from Wikidata triples with labels in several languages.
We use a consolidated, curated version of the crowdsourced assessments to train several machine learning models to scale up the analysis to the whole of Wikidata.
The findings help us ascertain the quality of references in Wikidata, and identify common challenges in defining and capturing the quality of user-generated multilingual structured data on the web.
arXiv Detail & Related papers (2021-09-20T10:06:46Z) - Commonsense Knowledge in Wikidata [3.8359194344969807]
This paper investigates whether Wikidata contains commonsense knowledge which is complementary to existing commonsense sources.
We map the relations of Wikidata to ConceptNet, which we also leverage to integrate Wikidata-CS into an existing consolidated commonsense graph.
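That mapping step can be pictured as a property-to-relation table; the excerpt below is our own hand-picked illustration, not the paper's actual mapping:

```python
# Illustrative excerpt: Wikidata property -> ConceptNet relation.
WD_TO_CONCEPTNET = {
    "P279": "/r/IsA",      # subclass of
    "P361": "/r/PartOf",   # part of
    "P527": "/r/HasA",     # has part
    "P366": "/r/UsedFor",  # use
}

def map_triple(subj: str, prop: str, obj: str):
    """Translate a Wikidata triple into ConceptNet vocabulary, if a mapping exists."""
    relation = WD_TO_CONCEPTNET.get(prop)
    return (subj, relation, obj) if relation else None

print(map_triple("Q144", "P279", "Q39201"))  # (dog, subclass of, pet) -> /r/IsA
```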
arXiv Detail & Related papers (2020-08-18T18:23:06Z) - Wikidata on MARS [0.20305676256390934]
Multi-attributed relational structures (MARSs) have been proposed as a formal data model for generalized property graphs.
MARPL is a useful rule-based logic in which to write inference rules over property graphs.
Wikidata can be modelled in an extended MARS that adds the (imprecise) datatypes of Wikidata.
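To make the data model concrete, a small sketch of our reading (class and field names are assumptions): a multi-attributed edge is a property-graph edge whose annotation is itself a set of attribute-value pairs, which is exactly where Wikidata qualifiers fit:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MultiAttributedEdge:
    """An edge of a generalized property graph: a triple plus an attribute set."""
    source: str
    label: str
    target: str
    attributes: frozenset = frozenset()  # set of (attribute, value) pairs

# A Wikidata population statement with a point-in-time qualifier,
# viewed as a MARS-style edge (value is illustrative).
edge = MultiAttributedEdge("Q64", "P1082", "3600000",
                           attributes=frozenset({("P585", "2019")}))
print(edge)
```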
arXiv Detail & Related papers (2020-08-14T22:58:04Z)