Class Order Disorder in Wikidata and First Fixes
- URL: http://arxiv.org/abs/2411.15550v1
- Date: Sat, 23 Nov 2024 13:15:13 GMT
- Title: Class Order Disorder in Wikidata and First Fixes
- Authors: Peter F. Patel-Schneider, Ege Atacan Doğan
- Abstract summary: SPARQL queries were evaluated against Wikidata to determine the prevalence of several kinds of violations and suspect information.
Suggestions are provided on how the problems might be addressed, either through better tooling or involvement of the Wikidata community.
- Abstract: Wikidata has a large ontology with classes at several orders. The Wikidata ontology has long been known to have violations of class order, as well as information related to class order that appears suspect. SPARQL queries were evaluated against Wikidata to determine the prevalence of several kinds of violations and suspect information, and the results were analyzed. Some changes were manually made to Wikidata to remove some of these results and the queries rerun, showing the effect of the changes. Suggestions are provided on how the problems uncovered might be addressed, either through better tooling or involvement of the Wikidata community.
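The paper's actual queries are not reproduced in this summary, but a minimal sketch can illustrate the kind of SPARQL check it describes. The query below is an assumption for illustration, not one of the paper's queries: it looks for items that are simultaneously an instance of (P31) and a subclass of (P279) the same class, a pattern that often signals confusion between class orders.

```python
# A minimal sketch (not one of the paper's actual queries): find items that
# are both an instance of (wdt:P31) and a subclass of (wdt:P279) the same
# class, a common symptom of class-order confusion in Wikidata.
VIOLATION_QUERY = """
SELECT ?item ?class WHERE {
  ?item wdt:P31 ?class .   # item is an instance of ?class
  ?item wdt:P279 ?class .  # ...and also a subclass of the same ?class
}
LIMIT 100
"""

# To evaluate it against the public endpoint, one could send a GET request,
# e.g. (left commented out to avoid network access):
#
#   import urllib.request, urllib.parse, json
#   url = "https://query.wikidata.org/sparql?" + urllib.parse.urlencode(
#       {"query": VIOLATION_QUERY, "format": "json"})
#   req = urllib.request.Request(url,
#       headers={"User-Agent": "class-order-check/0.1 (example)"})
#   results = json.load(urllib.request.urlopen(req))

print(VIOLATION_QUERY.strip())
```

Counting such results over time, as the paper does before and after manual fixes, shows whether cleanup efforts are reducing the number of violations.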
Related papers
- Disjointness Violations in Wikidata [0.0]
We analyze the current modeling of disjointness on Wikidata.
We use SPARQL queries to identify each "culprit" causing a disjointness violation and lay out formulas to identify and fix conflicting information.
arXiv Detail & Related papers (2024-10-17T16:07:51Z) - Leveraging Wikidata's edit history in knowledge graph refinement tasks [77.34726150561087]
The edit history represents the process by which the community reaches some kind of fuzzy and distributed consensus.
We build a dataset containing the edit history of every instance from the 100 most important classes in Wikidata.
We propose and evaluate two new methods to leverage this edit history information in knowledge graph embedding models for type prediction tasks.
arXiv Detail & Related papers (2022-10-27T14:32:45Z) - Mapping Process for the Task: Wikidata Statements to Text as Wikipedia Sentences [68.8204255655161]
We propose our mapping process for the task of converting Wikidata statements to natural language text (WS2T) for Wikipedia projects at the sentence level.
The main step is to organize statements, represented as a group of quadruples and triples, and then to map them to corresponding sentences in English Wikipedia.
We evaluate the output corpus in various aspects: sentence structure analysis, noise filtering, and relationships between sentence components based on word embedding models.
arXiv Detail & Related papers (2022-10-23T08:34:33Z) - WikiDes: A Wikipedia-Based Dataset for Generating Short Descriptions from Paragraphs [66.88232442007062]
We introduce WikiDes, a dataset to generate short descriptions of Wikipedia articles.
The dataset consists of over 80k English samples on 6987 topics.
Our paper shows a practical impact on Wikipedia and Wikidata since there are thousands of missing descriptions.
arXiv Detail & Related papers (2022-09-27T01:28:02Z) - Improving Candidate Retrieval with Entity Profile Generation for Wikidata Entity Linking [76.00737707718795]
We propose a novel candidate retrieval paradigm based on entity profiling.
We use the profile to query the indexed search engine to retrieve candidate entities.
Our approach complements the traditional approach of using a Wikipedia anchor-text dictionary.
arXiv Detail & Related papers (2022-02-27T17:38:53Z) - Wikidated 1.0: An Evolving Knowledge Graph Dataset of Wikidata's Revision History [5.727994421498849]
We present Wikidated 1.0, a dataset of Wikidata's full revision history.
To the best of our knowledge, it constitutes the first large dataset of an evolving knowledge graph.
arXiv Detail & Related papers (2021-12-09T15:54:03Z) - Survey on English Entity Linking on Wikidata [3.8289963781051415]
Wikidata is a frequently updated, community-driven, and multilingual knowledge graph.
Current Wikidata-specific Entity Linking datasets do not differ in their annotation scheme from schemes for other knowledge graphs like DBpedia.
Almost all approaches employ specific properties like labels and sometimes descriptions but ignore characteristics such as the hyper-relational structure.
arXiv Detail & Related papers (2021-12-03T16:02:42Z) - A Chinese Multi-type Complex Questions Answering Dataset over Wikidata [45.31495982252219]
Complex Knowledge Base Question Answering has been a popular area of research over the past decade.
Recent public datasets have led to encouraging results in this field, but are mostly limited to English.
Few state-of-the-art KBQA models are trained on Wikidata, one of the most popular real-world knowledge bases.
We propose CLC-QuAD, the first large-scale complex Chinese semantic parsing dataset over Wikidata, to address these challenges.
arXiv Detail & Related papers (2021-11-11T07:39:16Z) - Assessing the quality of sources in Wikidata across languages: a hybrid approach [64.05097584373979]
We run a series of microtasks experiments to evaluate a large corpus of references, sampled from Wikidata triples with labels in several languages.
We use a consolidated, curated version of the crowdsourced assessments to train several machine learning models to scale up the analysis to the whole of Wikidata.
The findings help us ascertain the quality of references in Wikidata, and identify common challenges in defining and capturing the quality of user-generated multilingual structured data on the web.
arXiv Detail & Related papers (2021-09-20T10:06:46Z) - Creating and Querying Personalized Versions of Wikidata on a Laptop [0.7449724123186383]
This paper introduces KGTK Kypher, a query language and processor that allows users to create personalized variants of Wikidata on a laptop.
We present several use cases that illustrate the types of analyses that Kypher enables users to run on the full Wikidata KG on a laptop.
arXiv Detail & Related papers (2021-08-06T00:00:33Z) - Wikidata on MARS [0.20305676256390934]
Multi-attributed relational structures (MARSs) have been proposed as a formal data model for generalized property graphs.
MARPL is a useful rule-based logic in which to write inference rules over property graphs.
Wikidata can be modelled in an extended MARS that adds the (imprecise) datatypes of Wikidata.
arXiv Detail & Related papers (2020-08-14T22:58:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.