Towards a Brazilian History Knowledge Graph
- URL: http://arxiv.org/abs/2403.19856v1
- Date: Thu, 28 Mar 2024 22:05:32 GMT
- Title: Towards a Brazilian History Knowledge Graph
- Authors: Valeria de Paiva, Alexandre Rademaker
- Abstract summary: We construct a knowledge graph for Brazilian history based on the Brazilian Dictionary of Historical Biographies (DHBB) and Wikipedia/Wikidata.
We show that many terms/entities described in the DHBB do not have corresponding concepts (or Q items) in Wikidata.
- Score: 50.26735825937335
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This short paper describes the first steps in a project to construct a knowledge graph for Brazilian history based on the Brazilian Dictionary of Historical Biographies (DHBB) and Wikipedia/Wikidata. We contend that large repositories of Brazilian-named entities (people, places, organizations, and political events and movements) would be beneficial for extracting information from Portuguese texts. We show that many of the terms/entities described in the DHBB do not have corresponding concepts (or Q items) in Wikidata, the largest structured database of entities associated with Wikipedia. We describe previous work on extracting information from the DHBB and outline the steps to construct a Wikidata-based historical knowledge graph.
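The gap the abstract points to, DHBB entries with no corresponding Q item, can be probed programmatically. Below is a minimal sketch of one way to do such a lookup, assuming the public Wikidata wbsearchentities API and the Python requests library; the entry names in the example are illustrative and not taken from the paper's data.

```python
# A minimal sketch, assuming the public Wikidata API and the `requests`
# library; the example entry names below are illustrative, not drawn
# from the DHBB data used in the paper.
import requests

WIKIDATA_API = "https://www.wikidata.org/w/api.php"

def find_q_items(label: str, language: str = "pt") -> list[str]:
    """Search Wikidata for items whose label matches `label`.

    Returns the matching Q identifiers (possibly empty), which is one
    way to flag DHBB entries that lack a Wikidata item.
    """
    params = {
        "action": "wbsearchentities",
        "search": label,
        "language": language,
        "type": "item",
        "format": "json",
    }
    response = requests.get(WIKIDATA_API, params=params, timeout=10)
    response.raise_for_status()
    return [hit["id"] for hit in response.json().get("search", [])]

# Check a few hypothetical DHBB-style entries.
for name in ["Getúlio Vargas", "Revolta da Chibata"]:
    q_items = find_q_items(name)
    print(f"{name}: {', '.join(q_items) if q_items else 'no Q item found'}")
```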
Related papers
- Leveraging Wikidata's edit history in knowledge graph refinement tasks [77.34726150561087]
The edit history represents the process by which the community reaches some kind of fuzzy and distributed consensus.
We build a dataset containing the edit history of every instance from the 100 most important classes in Wikidata.
We propose and evaluate two new methods to leverage this edit history information in knowledge graph embedding models for type prediction tasks.
arXiv Detail & Related papers (2022-10-27T14:32:45Z)
- Does Wikidata Support Analogical Reasoning? [17.68704739786042]
We investigate whether the knowledge in Wikidata supports analogical reasoning.
We show that Wikidata can be used to create data for analogy classification.
We devise a set of metrics to guide an automatic method for extracting analogies from Wikidata.
arXiv Detail & Related papers (2022-10-02T20:46:52Z)
- WikiDes: A Wikipedia-Based Dataset for Generating Short Descriptions from Paragraphs [66.88232442007062]
We introduce WikiDes, a dataset to generate short descriptions of Wikipedia articles.
The dataset consists of over 80k English samples on 6987 topics.
Our paper shows a practical impact on Wikipedia and Wikidata since there are thousands of missing descriptions.
arXiv Detail & Related papers (2022-09-27T01:28:02Z)
- Connecting a French Dictionary from the Beginning of the 20th Century to Wikidata [0.0]
The Petit Larousse illustré is a French dictionary first published in 1905.
We describe a new lexical resource in which all dictionary entries from the history and geography sections are connected to current data sources.
Using the Wikidata links, we can more easily automate the identification, comparison, and verification of historically situated representations.
arXiv Detail & Related papers (2022-06-22T12:45:21Z)
- Improving Candidate Retrieval with Entity Profile Generation for Wikidata Entity Linking [76.00737707718795]
We propose a novel candidate retrieval paradigm based on entity profiling.
We use the profile to query the indexed search engine to retrieve candidate entities.
Our approach complements the traditional approach of using a Wikipedia anchor-text dictionary.
arXiv Detail & Related papers (2022-02-27T17:38:53Z)
- Wikidated 1.0: An Evolving Knowledge Graph Dataset of Wikidata's Revision History [5.727994421498849]
We present Wikidated 1.0, a dataset of Wikidata's full revision history.
To the best of our knowledge, it constitutes the first large dataset of an evolving knowledge graph.
arXiv Detail & Related papers (2021-12-09T15:54:03Z)
- Open Domain Question Answering over Virtual Documents: A Unified Approach for Data and Text [62.489652395307914]
We use the data-to-text method as a means of encoding structured knowledge for knowledge-intensive applications, i.e., open-domain question answering (QA).
Specifically, we propose a verbalizer-retriever-reader framework for open-domain QA over data and text where verbalized tables from Wikipedia and triples from Wikidata are used as augmented knowledge sources.
We show that our Unified Data and Text QA, UDT-QA, can effectively benefit from the expanded knowledge index, leading to large gains over text-only baselines.
arXiv Detail & Related papers (2021-10-16T00:11:21Z)
- Assessing the quality of sources in Wikidata across languages: a hybrid approach [64.05097584373979]
We run a series of microtasks experiments to evaluate a large corpus of references, sampled from Wikidata triples with labels in several languages.
We use a consolidated, curated version of the crowdsourced assessments to train several machine learning models to scale up the analysis to the whole of Wikidata.
The findings help us ascertain the quality of references in Wikidata, and identify common challenges in defining and capturing the quality of user-generated multilingual structured data on the web.
arXiv Detail & Related papers (2021-09-20T10:06:46Z)
- Named Entity Recognition and Linking Augmented with Large-Scale Structured Data [3.211619859724085]
We describe our submissions to the 2nd and 3rd SlavNER Shared Tasks held at BSNLP 2019 and BSNLP 2021.
The tasks focused on the analysis of Named Entities in multilingual Web documents in Slavic languages with rich inflection.
Our solution takes advantage of large collections of both unstructured and structured documents.
arXiv Detail & Related papers (2021-04-27T20:10:18Z)