Knowledge Graph Quality Evaluation under Incomplete Information
- URL: http://arxiv.org/abs/2212.00994v3
- Date: Wed, 12 Apr 2023 07:53:54 GMT
- Title: Knowledge Graph Quality Evaluation under Incomplete Information
- Authors: Xiaodong Li, Chenxin Zou, Yi Cai, Yuelong Zhu
- Abstract summary: We propose a knowledge graph quality evaluation framework under incomplete information (QEII).
The quality evaluation task is transformed into an adversarial Q&A game between two KGs.
During the evaluation process, no raw data is exposed, which ensures information protection.
- Score: 9.48089663504665
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge graphs (KGs) have attracted increasing attention because of
their fundamental roles in many tasks. Quality evaluation for KGs is thus
crucial and indispensable. Existing methods in this field evaluate KGs by
either proposing new quality metrics from different dimensions or measuring
performance at KG construction stages. However, there are two major issues
with those methods. First, they rely heavily on raw data in KGs, which exposes
KGs' internal information during quality evaluation. Second, they focus on
quality at the data level rather than the ability level, the latter of which
is more important for downstream applications. To address these
issues, we propose a knowledge graph quality evaluation framework under
incomplete information (QEII). The quality evaluation task is transformed into
an adversarial Q&A game between two KGs. The winner of the game is considered
to have the better quality. During the evaluation process, no raw data is
exposed, which ensures information protection. Experimental results on four
pairs of KGs demonstrate that, compared with baselines, the QEII implements a
reasonable quality evaluation at ability level under incomplete information.
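The abstract's adversarial Q&A game can be illustrated with a minimal toy sketch. This is a hypothetical simplification, not the paper's actual protocol: in QEII the KGs exchange trained question-answering models rather than literal questions and answers, and the triples, entities, and scoring rule below are invented for illustration. Only the game structure (each KG quizzes the other, higher accuracy wins, no raw data is shared beyond the exchange) is taken from the text.

```python
# Toy sketch of an adversarial Q&A game between two KGs (illustrative only).
# Each KG is a set of (head, relation, tail) triples. A KG quizzes its
# opponent with (head, relation) questions whose answer is the tail.

def make_questions(kg):
    """Turn each triple into a ((head, relation), tail) question-answer pair."""
    return [((h, r), t) for (h, r, t) in kg]

def answer(kg, question):
    """Answer a (head, relation) question from the KG's own triples."""
    for (h, r, t) in kg:
        if (h, r) == question:
            return t
    return None  # the KG does not know the answer

def play_game(kg_a, kg_b):
    """Each KG quizzes the other; the KG with higher accuracy wins."""
    acc_a = sum(answer(kg_a, q) == t for q, t in make_questions(kg_b)) / len(kg_b)
    acc_b = sum(answer(kg_b, q) == t for q, t in make_questions(kg_a)) / len(kg_a)
    return "A" if acc_a > acc_b else ("B" if acc_b > acc_a else "tie")

# Hypothetical example data: KG A covers everything KG B knows, plus more.
kg_a = {("Paris", "capital_of", "France"),
        ("Berlin", "capital_of", "Germany"),
        ("Rome", "capital_of", "Italy")}
kg_b = {("Paris", "capital_of", "France"),
        ("Berlin", "capital_of", "Germany")}

print(play_game(kg_a, kg_b))  # A answers all of B's questions; B misses Rome
```

In this toy form the quiz exchanges answers directly; the point of QEII is that the exchange happens at the model (ability) level, so neither side ever sees the other's raw triples.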
Related papers
- Multi-Facet Counterfactual Learning for Content Quality Evaluation [48.73583736357489]
We propose a framework for efficiently constructing evaluators that perceive multiple facets of content quality evaluation.
We leverage a joint training strategy based on contrastive learning and supervised learning to enable the evaluator to distinguish between different quality facets.
arXiv Detail & Related papers (2024-10-10T08:04:10Z)
- Exploring Rich Subjective Quality Information for Image Quality Assessment in the Wild [66.40314964321557]
We propose a novel IQA method named RichIQA to explore the rich subjective rating information beyond MOS to predict image quality in the wild.
RichIQA is characterized by two key novel designs: (1) a three-stage image quality prediction network which exploits the powerful feature representation capability of the Convolutional vision Transformer (CvT) and mimics the short-term and long-term memory mechanisms of the human brain.
RichIQA outperforms state-of-the-art competitors on multiple large-scale in-the-wild IQA databases with rich subjective rating labels.
arXiv Detail & Related papers (2024-09-09T12:00:17Z)
- Q-Ground: Image Quality Grounding with Large Multi-modality Models [61.72022069880346]
We introduce Q-Ground, the first framework aimed at tackling fine-scale visual quality grounding.
Q-Ground combines large multi-modality models with detailed visual quality analysis.
Central to our contribution is the introduction of the QGround-100K dataset.
arXiv Detail & Related papers (2024-07-24T06:42:46Z)
- Evaluating the Knowledge Dependency of Questions [12.25396414711877]
We propose a novel automatic evaluation metric, coined Knowledge Dependent Answerability (KDA).
We first show how to measure KDA based on student responses from a human survey.
Then, we propose two automatic evaluation metrics, KDA_disc and KDA_cont, that approximate KDA by leveraging pre-trained language models to imitate students' problem-solving behavior.
arXiv Detail & Related papers (2022-11-21T23:08:30Z)
- Knowledge Graph Curation: A Practical Framework [0.0]
We propose a practical knowledge graph curation framework for improving the quality of KGs.
First, we define a set of quality metrics for assessing the status of KGs.
Second, we describe the verification and validation of KGs as cleaning tasks.
Third, we present duplicate detection and knowledge fusion strategies for enriching KGs.
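The third step of the curation framework above, duplicate detection and knowledge fusion, can be sketched in a few lines. This is a hypothetical illustration of the general idea, not the paper's algorithm: the label-normalization rule, the entity names, and the "first spelling wins" choice of canonical form are all assumptions made for the example.

```python
# Illustrative sketch of duplicate-entity detection and fusion over triples.
# The normalization rule and example data are assumptions, not the paper's method.

def normalize(label):
    """Crude label normalization used to detect duplicate entities."""
    return label.lower().replace("_", " ").strip()

def fuse_duplicates(triples):
    """Merge triples whose head entities normalize to the same label."""
    canonical = {}
    fused = set()
    for h, r, t in triples:
        key = normalize(h)
        canonical.setdefault(key, h)  # first spelling seen becomes canonical
        fused.add((canonical[key], r, t))
    return fused

# Two spellings of the same entity are fused under one canonical head.
triples = [("Albert_Einstein", "born_in", "Ulm"),
           ("albert einstein", "field", "physics")]
print(sorted(fuse_duplicates(triples)))
```

A real system would match entities on more than surface labels (types, neighborhoods, embeddings), but the fusion step, rewriting triples onto a canonical entity, has this same shape.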
arXiv Detail & Related papers (2022-08-17T07:55:28Z)
- Knowledge Graph Question Answering Leaderboard: A Community Resource to Prevent a Replication Crisis [61.740077541531726]
We provide a new central and open leaderboard for any KGQA benchmark dataset as a focal point for the community.
Our analysis highlights existing problems during the evaluation of KGQA systems.
arXiv Detail & Related papers (2022-01-20T13:46:01Z)
- QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering [122.84513233992422]
We propose a new model, QA-GNN, which addresses the problem of answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs).
We show its improvement over existing LM and LM+KG models, as well as its capability to perform interpretable and structured reasoning.
arXiv Detail & Related papers (2021-04-13T17:32:51Z)
- Object-QA: Towards High Reliable Object Quality Assessment [71.71188284059203]
In object recognition applications, object images usually appear with different quality levels.
We propose an effective approach named Object-QA to estimate highly reliable quality scores for object images.
arXiv Detail & Related papers (2020-05-27T01:46:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.