An Interpretable Neuro-Symbolic Reasoning Framework for Task-Oriented
Dialogue Generation
- URL: http://arxiv.org/abs/2203.05843v1
- Date: Fri, 11 Mar 2022 10:44:08 GMT
- Title: An Interpretable Neuro-Symbolic Reasoning Framework for Task-Oriented
Dialogue Generation
- Authors: Shiquan Yang, Rui Zhang, Sarah Erfani, Jey Han Lau
- Abstract summary: We introduce a neuro-symbolic framework to perform explicit reasoning that justifies model decisions via reasoning chains.
We propose a two-phase approach that consists of a hypothesis generator and a reasoner.
The whole system is trained by exploiting raw textual dialogues without using any reasoning chain annotations.
- Score: 21.106357884651363
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the interpretability issue of task-oriented dialogue systems in this
paper. Previously, most neural-based task-oriented dialogue systems employ an
implicit reasoning strategy that makes the model predictions uninterpretable to
humans. To obtain a transparent reasoning process, we introduce a
neuro-symbolic framework that performs explicit reasoning, justifying model
decisions with reasoning chains. Since deriving reasoning chains requires
multi-hop reasoning for
task-oriented dialogues, existing neuro-symbolic approaches would induce error
propagation due to the one-phase design. To overcome this, we propose a
two-phase approach that consists of a hypothesis generator and a reasoner. We
first obtain multiple hypotheses, i.e., potential operations to perform the
desired task, through the hypothesis generator. Each hypothesis is then
verified by the reasoner, and the valid one is selected to conduct the final
prediction. The whole system is trained by exploiting raw textual dialogues
without using any reasoning chain annotations. Experimental studies on two
public benchmark datasets demonstrate that the proposed approach not only
achieves better results, but also introduces an interpretable decision process.
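The two-phase approach in the abstract can be sketched as a generate-then-verify loop: a hypothesis generator proposes candidate operations, and a reasoner checks each candidate's reasoning chain before the final prediction is made. The sketch below is purely illustrative; the function names, the string-containment check, and the hard-coded candidates are stand-ins for the paper's neural components, not its actual implementation.

```python
# Toy sketch of a two-phase hypothesis-generate-then-verify pipeline.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Hypothesis:
    operation: str               # candidate operation for the task
    reasoning_chain: List[str]   # facts that should justify the operation


def generate_hypotheses(dialogue: str) -> List[Hypothesis]:
    """Phase 1 (stub): propose multiple candidate operations."""
    return [
        Hypothesis("recommend_french", ["wants french food"]),
        Hypothesis("recommend_italian", ["wants italian food"]),
    ]


def verify(hyp: Hypothesis, dialogue: str) -> bool:
    """Phase 2 (stub): accept a hypothesis only if every step of its
    chain is grounded in the dialogue (toy containment check)."""
    return all(step in dialogue for step in hyp.reasoning_chain)


def predict(dialogue: str) -> Optional[str]:
    """Select the first verified hypothesis to make the prediction."""
    for hyp in generate_hypotheses(dialogue):
        if verify(hyp, dialogue):
            return hyp.operation
    return None


print(predict("the user says she wants italian food"))  # -> recommend_italian
```

Verifying each hypothesis independently is what avoids the error propagation of one-phase designs: a bad candidate is rejected rather than carried forward into the prediction.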
Related papers
- Conceptual and Unbiased Reasoning in Language Models [98.90677711523645]
We propose a novel conceptualization framework that forces models to perform conceptual reasoning on abstract questions.
We show that existing large language models fall short on conceptual reasoning, dropping 9% to 28% on various benchmarks.
We then discuss how models can improve since high-level abstract reasoning is key to unbiased and generalizable decision-making.
arXiv Detail & Related papers (2024-03-30T00:53:53Z) - A Semantic Approach to Decidability in Epistemic Planning (Extended
Version) [72.77805489645604]
We use a novel semantic approach to achieve decidability.
Specifically, we augment the logic of knowledge S5$_n$ with an interaction axiom called (knowledge) commutativity.
We prove that our framework admits a finitary non-fixpoint characterization of common knowledge, which is of independent interest.
arXiv Detail & Related papers (2023-07-28T11:26:26Z) - Towards Trustworthy Explanation: On Causal Rationalization [9.48539398357156]
We propose a new model of rationalization based on two causal desiderata, non-spuriousness and efficiency.
The superior performance of the proposed causal rationalization is demonstrated on real-world review and medical datasets.
arXiv Detail & Related papers (2023-06-25T03:34:06Z) - Logical Satisfiability of Counterfactuals for Faithful Explanations in
NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic.
arXiv Detail & Related papers (2022-05-25T03:40:59Z) - Natural Language Deduction through Search over Statement Compositions [43.93269297653265]
We propose a system for natural language deduction that decomposes the task into separate steps coordinated by best-first search.
Our experiments demonstrate that the proposed system can better distinguish verifiable hypotheses from unverifiable ones.
arXiv Detail & Related papers (2022-01-16T12:05:48Z) - Interactive Model with Structural Loss for Language-based Abductive
Reasoning [36.02450824915494]
The abductive natural language inference task ($\alpha$NLI) is proposed to infer the most plausible explanation between the cause and the event.
We name this new model for $\alpha$NLI the Interactive Model with Structural Loss (IMSL).
Our IMSL achieves the highest performance with the RoBERTa-large pretrained model, improving ACC and AUC by about 1% and 5%, respectively.
arXiv Detail & Related papers (2021-12-01T05:21:07Z) - AR-LSAT: Investigating Analytical Reasoning of Text [57.1542673852013]
We study the challenge of analytical reasoning of text and introduce a new dataset consisting of questions from the Law School Admission Test from 1991 to 2016.
We analyze what knowledge understanding and reasoning abilities are required to do well on this task.
arXiv Detail & Related papers (2021-04-14T02:53:32Z) - Towards Interpretable Reasoning over Paragraph Effects in Situation [126.65672196760345]
We focus on the task of reasoning over paragraph effects in situation, which requires a model to understand the cause and effect.
We propose a sequential approach for this task which explicitly models each step of the reasoning process with neural network modules.
In particular, five reasoning modules are designed and learned in an end-to-end manner, which leads to a more interpretable model.
arXiv Detail & Related papers (2020-10-03T04:03:52Z) - Modeling Voting for System Combination in Machine Translation [92.09572642019145]
We propose an approach to modeling voting for system combination in machine translation.
Our approach combines the advantages of statistical and neural methods since it can not only analyze the relations between hypotheses but also allow for end-to-end training.
arXiv Detail & Related papers (2020-07-14T09:59:38Z)
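The voting idea in the last entry can be illustrated with a toy weighted-vote combiner that picks each output token by majority across system hypotheses. This is a hand-written statistical stand-in; the paper's approach learns relations between hypotheses end-to-end, and the function and variable names here are hypothetical.

```python
# Toy positional weighted voting over multiple systems' token sequences.
from collections import Counter
from typing import List, Optional


def combine_by_voting(hypotheses: List[List[str]],
                      weights: Optional[List[float]] = None) -> List[str]:
    """Pick each output position's token by (weighted) majority vote."""
    weights = weights or [1.0] * len(hypotheses)
    length = max(len(h) for h in hypotheses)
    combined = []
    for i in range(length):
        votes = Counter()
        for hyp, w in zip(hypotheses, weights):
            if i < len(hyp):
                votes[hyp[i]] += w          # accumulate this system's weight
        combined.append(votes.most_common(1)[0][0])
    return combined


systems = [
    ["the", "cat", "sat"],
    ["the", "cat", "sits"],
    ["a", "cat", "sat"],
]
print(combine_by_voting(systems))  # majority token at each position
```

Uniform weights reduce this to simple majority voting; unequal weights let stronger systems dominate ties, which is the knob a learned combiner would tune.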
This list is automatically generated from the titles and abstracts of the papers on this site.