Contribution of Conceptual Modeling to Enhancing Historians' Intuition - Application to Prosopography
- URL: http://arxiv.org/abs/2011.13276v1
- Date: Thu, 26 Nov 2020 13:21:36 GMT
- Title: Contribution of Conceptual Modeling to Enhancing Historians' Intuition - Application to Prosopography
- Authors: Jacky Akoka (CEDRIC - ISID, IMT-BS), Isabelle Comyn-Wattiau (CEDRIC - ISID), Stéphane Lamassé (LAMOP), Cédric Du Mouza (CEDRIC - ISID)
- Abstract summary: We propose a process that automatically supports historians' intuition in prosopography.
The contribution is threefold: a conceptual data model, a process model, and a set of rules combining the reliability of sources and the credibility of information.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Historians, and in particular researchers in prosopography, devote
considerable effort to extracting and coding information from historical
sources in order to build databases. In doing so, they sometimes rely on their
intuition. One important issue is to provide these researchers with the
information extracted from the sources in a sufficiently structured form that
the databases can be queried and hypotheses verified and, possibly, validated.
The research in this paper takes up the challenge of helping historians
capture and assess information through automatic processes. The issue arises
when too many sources of uncertain information are available. Based on the
high-level information fusion approach, we propose a process that
automatically supports historians' intuition in the domain of prosopography.
The contribution is threefold: a conceptual data model, a process model, and a
set of rules combining the reliability of sources and the credibility of
information.
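To make the third contribution more concrete, the sketch below illustrates one way such rules could be expressed in code: each extracted assertion carries a source-reliability grade and an information-credibility grade, and a simple rule maps their combination to a decision. The ordinal scales, thresholds, names (Assertion, fuse), and the multiplication-based scoring are illustrative assumptions, not the rule set defined in the paper.

```python
# Minimal, hypothetical sketch of rules combining source reliability and
# information credibility. Scales and thresholds are illustrative only.
from dataclasses import dataclass

# Illustrative ordinal scales (higher mapped value = stronger support).
RELIABILITY = {"A": 3, "B": 2, "C": 1}   # source reliability: A = usually reliable ... C = unreliable
CREDIBILITY = {1: 3, 2: 2, 3: 1}         # information credibility: 1 = confirmed ... 3 = doubtful

@dataclass
class Assertion:
    """A piece of prosopographic information extracted from one source."""
    statement: str           # e.g. "Person X held office Y in 1382"
    source_reliability: str  # "A", "B" or "C"
    info_credibility: int    # 1, 2 or 3

def score(a: Assertion) -> int:
    """Combine the two grades into a single ordinal score (toy rule)."""
    return RELIABILITY[a.source_reliability] * CREDIBILITY[a.info_credibility]

def fuse(assertions: list[Assertion]) -> str:
    """Toy fusion rule over all occurrences of the same statement:
    accept if the best-supported occurrence reaches a threshold,
    otherwise surface it to the historian for review."""
    best = max(score(a) for a in assertions)
    if best >= 6:
        return "accept"
    if best >= 3:
        return "review"
    return "reject"

# Example: the same statement reported by two sources of different quality.
claims = [
    Assertion("X was rector of the university in 1382", "B", 2),
    Assertion("X was rector of the university in 1382", "C", 3),
]
print(fuse(claims))  # -> "review" under these illustrative thresholds
```

Any real implementation would follow the scales and rules defined by the authors' model; the point of the sketch is only that such rules can be made explicit and applied automatically, leaving the borderline cases to the historian's judgment.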
Related papers
- Towards the First Code Contribution: Processes and Information Needs [18.542728636769255]
We argue that much of the information needed by newcomers already exists, albeit scattered among many different sources.
Our findings form an essential step towards automated tool support that provides relevant information to project newcomers.
arXiv Detail & Related papers (2024-04-29T13:19:24Z)
- DIVKNOWQA: Assessing the Reasoning Ability of LLMs via Open-Domain Question Answering over Knowledge Base and Text [73.68051228972024]
Large Language Models (LLMs) have exhibited impressive generation capabilities, but they suffer from hallucinations when relying on their internal knowledge.
Retrieval-augmented LLMs have emerged as a potential solution to ground LLMs in external knowledge.
arXiv Detail & Related papers (2023-10-31T04:37:57Z)
- Information Fusion for Assistance Systems in Production Assessment [49.40442046458756]
We provide a framework for the fusion of n information sources using evidence theory.
We provide a methodology for the information fusion of two primary sources: an ensemble classifier based on machine data and an expert-centered model.
We address the problem of data drift by proposing a methodology to update the data-based models using an evidence theory approach.
arXiv Detail & Related papers (2023-08-31T22:08:01Z)
- Identifying Informational Sources in News Articles [109.70475599552523]
We build the largest and widest-ranging annotated dataset of informational sources used in news writing.
We introduce a novel task, source prediction, to study the compositionality of sources in news articles.
arXiv Detail & Related papers (2023-05-24T08:56:35Z)
- Leveraging Wikidata's edit history in knowledge graph refinement tasks [77.34726150561087]
The edit history represents the process by which the community reaches a kind of fuzzy and distributed consensus.
We build a dataset containing the edit history of every instance from the 100 most important classes in Wikidata.
We propose and evaluate two new methods to leverage this edit history information in knowledge graph embedding models for type prediction tasks.
arXiv Detail & Related papers (2022-10-27T14:32:45Z)
- ProVe: A Pipeline for Automated Provenance Verification of Knowledge Graphs against Textual Sources [5.161088104035106]
ProVe is a pipelined approach that automatically verifies whether a Knowledge Graph triple is supported by text extracted from its documented provenance.
ProVe is evaluated on a Wikidata dataset, achieving promising results overall and excellent performance on the binary classification task of detecting support from provenance.
arXiv Detail & Related papers (2022-10-26T16:47:36Z)
- Joint Reasoning on Hybrid-knowledge sources for Task-Oriented Dialog [12.081212540168055]
We present a modified version of the MultiWOZ-based dataset prepared by SeKnow to demonstrate how current methods suffer a significant degradation in performance.
In line with recent work exploiting pre-trained language models, we fine-tune a BART-based model using prompts for the tasks of querying knowledge sources.
We demonstrate that our model is robust to perturbations to knowledge modality (source of information) and that it can fuse information from structured as well as unstructured knowledge to generate responses.
arXiv Detail & Related papers (2022-10-13T18:49:59Z)
- Embedding Knowledge for Document Summarization: A Survey [66.76415502727802]
Previous works proved that knowledge-embedded document summarizers excel at generating superior digests.
We propose novel taxonomies to recapitulate knowledge and knowledge embeddings under the document summarization view.
arXiv Detail & Related papers (2022-04-24T04:36:07Z)
- Algorithmic Fairness Datasets: the Story so Far [68.45921483094705]
Data-driven algorithms are studied in diverse domains to support critical decisions, directly impacting people's well-being.
A growing community of researchers has been investigating the equity of existing algorithms and proposing novel ones, advancing the understanding of risks and opportunities of automated decision-making for historically disadvantaged populations.
Progress in fair Machine Learning hinges on data, which can be appropriately used only if adequately documented.
Unfortunately, the algorithmic fairness community suffers from a collective data documentation debt caused by a lack of information on specific resources (opacity) and scatteredness of available information (sparsity).
arXiv Detail & Related papers (2022-02-03T17:25:46Z)
- InSRL: A Multi-view Learning Framework Fusing Multiple Information Sources for Distantly-supervised Relation Extraction [19.176183245280267]
We introduce two widely-existing sources in knowledge bases, namely entity descriptions and multi-grained entity types.
An end-to-end multi-view learning framework is proposed for relation extraction via Intact Space Representation Learning (InSRL).
arXiv Detail & Related papers (2020-12-17T02:49:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.