Reasoning on Knowledge Graphs with Debate Dynamics
- URL: http://arxiv.org/abs/2001.00461v1
- Date: Thu, 2 Jan 2020 14:44:23 GMT
- Title: Reasoning on Knowledge Graphs with Debate Dynamics
- Authors: Marcel Hildebrandt, Jorge Andres Quintero Serna, Yunpu Ma, Martin
Ringsquandl, Mitchell Joblin, Volker Tresp
- Abstract summary: We propose a novel method for automatic reasoning on knowledge graphs based on debate dynamics.
The main idea is to frame the task of triple classification as a debate game between two reinforcement learning agents.
We benchmark our method on the triple classification and link prediction tasks.
- Score: 27.225048123690243
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel method for automatic reasoning on knowledge graphs based
on debate dynamics. The main idea is to frame the task of triple classification
as a debate game between two reinforcement learning agents which extract
arguments -- paths in the knowledge graph -- with the goal of promoting the fact
being true (thesis) or the fact being false (antithesis), respectively. Based
on these arguments, a binary classifier, called the judge, decides whether the
fact is true or false. The two agents can be considered as sparse, adversarial
feature generators that present interpretable evidence for either the thesis or
the antithesis. In contrast to other black-box methods, the arguments allow
users to get an understanding of the decision of the judge. Since the focus of
this work is to create an explainable method that maintains a competitive
predictive accuracy, we benchmark our method on the triple classification and
link prediction tasks. We find that our method outperforms several
baselines on the benchmark datasets FB15k-237, WN18RR, and Hetionet. We also
conduct a survey and find that the extracted arguments are informative for
users.
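As a rough, hypothetical sketch of this pipeline (not the paper's implementation: there, the two agents are policies trained with reinforcement learning and the judge is a learned binary classifier over the extracted paths), the toy Python below illustrates the information flow. Random walks stand in for the trained agents, a hand-coded scoring rule stands in for the judge, and the graph and all names are invented for illustration.

```python
# Toy sketch of debate dynamics for triple classification.
# NOT the paper's method: agents here are random walkers instead of
# RL policies, and the judge is a hand-coded rule instead of a
# learned classifier. Everything below is hypothetical.
import random

# Toy knowledge graph as a set of (head, relation, tail) triples.
KG = {
    ("Alice", "works_at", "AcmeCorp"),
    ("AcmeCorp", "located_in", "Berlin"),
    ("Alice", "lives_in", "Berlin"),
    ("Bob", "works_at", "AcmeCorp"),
}

def neighbors(entity):
    """All (relation, tail) edges leaving an entity."""
    return [(r, t) for (h, r, t) in KG if h == entity]

def sample_argument(start, max_len=2):
    """One agent move: extract an 'argument', i.e. a path rooted at the
    query's head entity. A trained agent would follow an RL policy;
    this stand-in just takes a random walk."""
    path, current = [], start
    for _ in range(max_len):
        edges = neighbors(current)
        if not edges:
            break
        relation, tail = random.choice(edges)
        path.append((current, relation, tail))
        current = tail
    return path

def judge(query, pro_args, con_args):
    """Illustrative stand-in for the learned judge: featurize each
    argument by whether it terminates at the query's tail entity, and
    weight it +1 or -1 depending on which agent presented it. The
    paper's judge learns how to weigh arguments instead."""
    _, _, tail = query
    support = sum(1 for p in pro_args if p and p[-1][2] == tail)
    attack = sum(1 for p in con_args if p and p[-1][2] == tail)
    return support - attack > 0  # True -> thesis wins

query = ("Bob", "lives_in", "Berlin")  # triple to classify
pro = [sample_argument(query[0]) for _ in range(3)]  # thesis agent
con = [sample_argument(query[0]) for _ in range(3)]  # antithesis agent
print("judge verdict (fact classified as true?):", judge(query, pro, con))
```

In the paper itself, the judge's weighting of arguments is learned, and the agents are rewarded for extracting paths that sway the judge toward their respective positions; the hand-set rule above only mirrors that information flow.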
Related papers
- Explanation Selection Using Unlabeled Data for Chain-of-Thought Prompting [80.9896041501715]
Explanations that have not been "tuned" for a task, such as off-the-shelf explanations written by nonexperts, may lead to mediocre performance.
This paper tackles the problem of how to optimize explanation-infused prompts in a black-box fashion.
arXiv Detail & Related papers (2023-02-09T18:02:34Z)
- How to disagree well: Investigating the dispute tactics used on Wikipedia [17.354674873244335]
We propose a framework of dispute tactics that unifies the perspectives of detecting toxicity and analysing argument structure.
This framework includes a preferential ordering among rebuttal-type tactics, ranging from ad hominem attacks to refuting the central argument.
We show that these annotations can be used to provide useful additional signals to improve performance on the task of predicting escalation.
arXiv Detail & Related papers (2022-12-16T09:01:19Z)
- Robust and Explainable Identification of Logical Fallacies in Natural Language Arguments [5.850977561881791]
We formalize prior theoretical work on logical fallacies into a comprehensive three-stage evaluation framework.
We employ three families of robust and explainable methods based on prototype reasoning, instance-based reasoning, and knowledge injection.
We extensively evaluate these methods on our datasets, focusing on their robustness and explainability.
arXiv Detail & Related papers (2022-12-12T20:27:17Z)
- Explaining Image Classification with Visual Debates [26.76139301708958]
We propose a novel debate framework for understanding and explaining a continuous image classifier's reasoning for making a particular prediction.
Our framework encourages players to put forward diverse arguments during the debates, picking up the reasoning trails missed by their opponents.
We demonstrate and evaluate a practical realization of our Visual Debates on the geometric SHAPE and MNIST datasets.
arXiv Detail & Related papers (2022-10-17T12:35:52Z)
- Ask to Know More: Generating Counterfactual Explanations for Fake Claims [11.135087647482145]
We propose elucidating fact checking predictions using counterfactual explanations to help people understand why a piece of news was identified as fake.
In this work, generating counterfactual explanations for fake news involves three steps: asking good questions, finding contradictions, and reasoning appropriately.
Results suggest that the proposed approach generates the most helpful explanations compared to state-of-the-art methods.
arXiv Detail & Related papers (2022-06-10T04:42:00Z)
- Distant finetuning with discourse relations for stance classification [55.131676584455306]
We propose a new method to extract data with silver labels from raw text to finetune a model for stance classification.
We also propose a 3-stage training framework where the noise level in the data used for finetuning decreases over the stages.
Our approach ranks 1st among 26 competing teams in the stance classification track of the NLPCC 2021 shared task Argumentative Text Understanding for AI Debater.
arXiv Detail & Related papers (2022-04-27T04:24:35Z)
- Contrastive Explanations for Model Interpretability [77.92370750072831]
We propose a methodology to produce contrastive explanations for classification models.
Our method is based on projecting model representation to a latent space.
Our findings shed light on the ability of label-contrastive explanations to provide a more accurate and finer-grained interpretability of a model's decision.
arXiv Detail & Related papers (2021-03-02T00:36:45Z)
- Why do you think that? Exploring Faithful Sentence-Level Rationales Without Supervision [60.62434362997016]
We propose a differentiable training framework to create models which output faithful rationales at the sentence level.
Our model solves the task based on each rationale individually and learns to assign high scores to those which solved the task best.
arXiv Detail & Related papers (2020-10-07T12:54:28Z)
- L2R2: Leveraging Ranking for Abductive Reasoning [65.40375542988416]
The abductive natural language inference task ($\alpha$NLI) is proposed to evaluate the abductive reasoning ability of a learning system.
A novel $L2R^2$ approach is proposed under the learning-to-rank framework.
Experiments on the ART dataset reach state-of-the-art performance on the public leaderboard.
arXiv Detail & Related papers (2020-05-22T15:01:23Z)
- SCOUT: Self-aware Discriminant Counterfactual Explanations [78.79534272979305]
The problem of counterfactual visual explanations is considered.
A new family of discriminant explanations is introduced.
The resulting counterfactual explanations are optimization free and thus much faster than previous methods.
arXiv Detail & Related papers (2020-04-16T17:05:49Z)
- Debate Dynamics for Human-comprehensible Fact-checking on Knowledge Graphs [27.225048123690243]
We propose a novel method for fact-checking on knowledge graphs based on debate dynamics.
The underlying idea is to frame the task of triple classification as a debate game between two reinforcement learning agents.
Our method allows for interactive reasoning on knowledge graphs, where users can raise additional arguments or evaluate the debate while taking common sense reasoning and external information into account.
arXiv Detail & Related papers (2020-01-09T15:19:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.