Conversational Multi-Hop Reasoning with Neural Commonsense Knowledge and
Symbolic Logic Rules
- URL: http://arxiv.org/abs/2109.08544v1
- Date: Fri, 17 Sep 2021 13:40:07 GMT
- Title: Conversational Multi-Hop Reasoning with Neural Commonsense Knowledge and
Symbolic Logic Rules
- Authors: Forough Arabshahi, Jennifer Lee, Antoine Bosselut, Yejin Choi, Tom
Mitchell
- Abstract summary: We propose a zero-shot commonsense reasoning system for conversational agents.
Our reasoner uncovers unstated presumptions satisfying a general template of if-(state), then-(action), because-(goal).
In a user study, the model achieves a 35% higher success rate than SOTA.
- Score: 38.15523098189754
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: One of the challenges faced by conversational agents is their
inability to identify unstated presumptions in their users' commands, a task
that is trivial for humans thanks to their common sense. In this paper, we
propose a zero-shot commonsense reasoning system for conversational agents
that addresses this challenge. Our reasoner uncovers unstated presumptions
from user commands satisfying a general template of if-(state),
then-(action), because-(goal). It uses a state-of-the-art transformer-based
generative commonsense knowledge base (KB) as its source of background
knowledge for reasoning. We propose a novel, iterative knowledge query
mechanism that extracts multi-hop reasoning chains from the neural KB,
using symbolic logic rules to significantly reduce the search space. Like
any KB gathered to date, our commonsense KB is prone to missing knowledge.
We therefore propose to conversationally elicit the missing knowledge from
human users with a novel dynamic question generation strategy, which
generates and presents contextualized queries to them. In a user study with
human users, the model achieves a 35% higher success rate than SOTA.
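To make the pipeline concrete, below is a minimal, hypothetical sketch of the reasoning loop the abstract describes: an iterative query against a generative commonsense KB, symbolic rules that prune which relation may extend the chain at each hop, and a dynamic-question fallback when the KB has no answer. Every class, function, relation name, and the rule table here are illustrative assumptions, not the authors' actual interfaces.

```python
# A minimal, hypothetical sketch of the iterative neuro-symbolic query loop.
# The ATOMIC-style relation names, the rule table, and all classes below are
# illustrative assumptions; the generative KB is stubbed where the paper uses
# a transformer-based commonsense KB.

from dataclasses import dataclass

@dataclass
class Presumption:
    """The paper's if-(state), then-(action), because-(goal) template."""
    state: str
    action: str
    goal: str

# Symbolic logic rules constraining which relation may extend the chain at
# each hop; restricting the candidates this way shrinks the search space.
RULES = {
    "command": ["xNeed"],    # a command presumes some precondition
    "xNeed": ["xIntent"],    # a precondition is explained by an intent
    "xIntent": ["xWant"],    # an intent is explained by a goal
}

class GenerativeKB:
    """Stub for a transformer-based generative commonsense KB."""
    def query(self, head: str, relation: str) -> list[str]:
        return []  # a real system would decode tail phrases from the model

def ask_user(head: str, relation: str) -> str:
    """Dynamic question generation: elicit missing knowledge in context."""
    return input(f"Help me out: given '{head}', what would '{relation}' be? ")

def multi_hop_chain(kb: GenerativeKB, command: str, max_hops: int = 3):
    """Iteratively query the KB, letting RULES prune candidate relations."""
    chain, head, relation = [], command, "command"
    for _ in range(max_hops):
        candidates = RULES.get(relation, [])
        if not candidates:
            break
        relation = candidates[0]  # a real reasoner scores all candidates
        tails = kb.query(head, relation)
        if not tails:             # missing knowledge: fall back to the user
            tails = [ask_user(head, relation)]
        chain.append((head, relation, tails[0]))
        head = tails[0]
    return chain
```

The chain's hops would then ground the template's state, action, and goal slots; the greedy choices above (first rule, first tail) stand in for the paper's actual search and scoring, which this sketch does not attempt to reproduce.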
Related papers
- RECKONING: Reasoning through Dynamic Knowledge Encoding [51.076603338764706]
Language models can answer questions by reasoning over knowledge provided as part of the context; however, when that context also contains distractor facts, the model fails to distinguish the knowledge that is necessary to answer the question.
We propose teaching the model to reason more robustly by folding the provided contextual knowledge into the model's parameters.
arXiv Detail & Related papers (2023-05-10T17:54:51Z)
- Multi-hop Commonsense Knowledge Injection Framework for Zero-Shot Commonsense Question Answering [6.086719709100659]
We propose a novel multi-hop commonsense knowledge injection framework.
Our framework achieves state-of-the-art performance on five commonsense question answering benchmarks.
arXiv Detail & Related papers (2023-05-10T07:13:47Z)
- RHO ($\rho$): Reducing Hallucination in Open-domain Dialogues with Knowledge Grounding [57.46495388734495]
This paper presents RHO ($\rho$), which utilizes the representations of linked entities and relation predicates from a knowledge graph (KG).
We propose (1) local knowledge grounding to combine textual embeddings with the corresponding KG embeddings; and (2) global knowledge grounding to equip RHO with multi-hop reasoning abilities via the attention mechanism.
arXiv Detail & Related papers (2022-12-03T10:36:34Z)
- ComFact: A Benchmark for Linking Contextual Commonsense Knowledge [31.19689856957576]
We propose the new task of commonsense fact linking, where models are given contexts and trained to identify situationally-relevant commonsense knowledge from KGs.
Our novel benchmark, ComFact, contains 293k in-context relevance annotations for commonsense across four stylistically diverse datasets.
arXiv Detail & Related papers (2022-10-23T09:30:39Z)
- ArT: All-round Thinker for Unsupervised Commonsense Question-Answering [54.068032948300655]
We propose All-round Thinker (ArT), which fully exploits associations during knowledge generation.
We evaluate it on three commonsense QA benchmarks: COPA, SocialIQA and SCT.
arXiv Detail & Related papers (2021-12-26T18:06:44Z)
- Generated Knowledge Prompting for Commonsense Reasoning [53.88983683513114]
We propose generating knowledge statements directly from a language model with a generic prompt format.
This approach improves performance of both off-the-shelf and finetuned language models on four commonsense reasoning tasks.
Notably, we find that a model's predictions can improve when using its own generated knowledge (see the sketch after this list).
arXiv Detail & Related papers (2021-10-15T21:58:03Z)
- Personalized Query Rewriting in Conversational AI Agents [7.086654234990377]
We propose a query rewriting approach by leveraging users' historically successful interactions as a form of memory.
We present a neural retrieval model and a pointer-generator network with hierarchical attention and show that they perform significantly better at the query rewriting task with the aforementioned user memories than without.
arXiv Detail & Related papers (2020-11-09T20:45:39Z)
- Conversational Neuro-Symbolic Commonsense Reasoning [10.894217510063086]
We present a neuro-symbolic theorem prover that extracts multi-hop reasoning chains.
We also present an interactive conversational framework built on our neuro-symbolic system.
arXiv Detail & Related papers (2020-06-17T17:28:38Z)
- Unsupervised Commonsense Question Answering with Self-Talk [71.63983121558843]
We propose an unsupervised framework based on self-talk as a novel alternative approach to commonsense tasks.
Inspired by inquiry-based discovery learning, our approach queries language models with a number of information-seeking questions.
Empirical results demonstrate that the self-talk procedure substantially improves the performance of zero-shot language model baselines.
arXiv Detail & Related papers (2020-04-11T20:43:37Z)
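Since the Generated Knowledge Prompting entry above describes a concrete recipe, here is a minimal sketch of it under stated assumptions: the generic prompt wording and the `lm` interface (a callable that also exposes a `score` method) are placeholders for illustration, not the paper's exact setup.

```python
# Hedged sketch of generated-knowledge prompting: sample knowledge
# statements from a language model with a generic prompt, then pick the
# answer choice best supported by any single statement. The prompt text
# and the `lm` interface are assumptions for illustration.

def generate_knowledge(lm, question: str, n: int = 5) -> list[str]:
    """Sample n knowledge statements (few-shot demonstrations omitted)."""
    prompt = f"Generate some knowledge about the input.\nInput: {question}\nKnowledge:"
    return [lm(prompt) for _ in range(n)]

def answer(lm, question: str, choices: list[str]) -> str:
    knowledge = generate_knowledge(lm, question)
    def support(choice: str, k: str) -> float:
        # Assumed scoring hook, e.g. the LM's likelihood of the choice
        # given the knowledge statement prepended to the question.
        return lm.score(f"{k} {question} {choice}")
    return max(choices, key=lambda c: max(support(c, k) for k in knowledge))
```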
This list is automatically generated from the titles and abstracts of the papers on this site.