Neural Path Hunter: Reducing Hallucination in Dialogue Systems via Path Grounding
- URL: http://arxiv.org/abs/2104.08455v1
- Date: Sat, 17 Apr 2021 05:23:44 GMT
- Title: Neural Path Hunter: Reducing Hallucination in Dialogue Systems via Path Grounding
- Authors: Nouha Dziri, Andrea Madotto, Osmar Zaiane, Avishek Joey Bose
- Abstract summary: We focus on the task of improving the faithfulness of Neural Dialogue Systems to known facts supplied by a Knowledge Graph (KG).
We propose Neural Path Hunter, which follows a generate-then-refine strategy whereby a generated response is amended using the k-hop subgraph of a KG.
Our proposed model can easily be applied to any generated dialogue response without retraining the model.
- Score: 15.62141731259161
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dialogue systems powered by large pre-trained language models
(LMs) exhibit an innate ability to deliver fluent and natural-looking
responses. Despite their impressive generation performance, these models can
often generate factually incorrect statements, impeding their widespread
adoption. In this paper, we focus on the task of improving the faithfulness
-- and thus reducing hallucination -- of Neural Dialogue Systems to known
facts supplied by a Knowledge Graph (KG). We propose Neural Path Hunter,
which follows a generate-then-refine strategy whereby a generated response is
amended using the k-hop subgraph of a KG. Neural Path Hunter leverages a
separate token-level fact critic to identify plausible sources of
hallucination, followed by a refinement stage consisting of a chain of two
neural LMs that retrieves correct entities by crafting a query signal that is
propagated over the k-hop subgraph. Our proposed model can easily be applied
to any generated dialogue response without retraining the model. We
empirically validate our proposed approach on the OpenDialKG dataset against
a suite of metrics and report a relative improvement in faithfulness over
GPT2 dialogue responses of 8.4%.
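As a rough illustration of the generate-then-refine loop described in the abstract, the sketch below flags ungrounded entity mentions in a generated response and swaps them for entities retrieved from a small k-hop neighbourhood. The toy KG, the capitalization-based critic, and the string-matching retrieval are stand-ins for the paper's learned components, not the authors' released code.

```python
from typing import List, Set, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)

KG: List[Triple] = [
    ("Inception", "directed_by", "Christopher Nolan"),
    ("Christopher Nolan", "born_in", "London"),
]

def k_hop_entities(kg: List[Triple], seeds: Set[str], k: int = 2) -> Set[str]:
    """Collect entities reachable within k hops of the seed entities."""
    reached = set(seeds)
    for _ in range(k):
        for s, _, o in kg:
            if s in reached or o in reached:
                reached.update({s, o})
    return reached

def token_fact_critic(tokens: List[str], grounded: Set[str]) -> List[int]:
    """Flag capitalized tokens the KG neighbourhood does not support
    (a crude stand-in for the paper's learned token-level critic)."""
    return [i for i, t in enumerate(tokens) if t[0].isupper() and t not in grounded]

def refine(tokens: List[str], flagged: List[int], kg: List[Triple]) -> List[str]:
    """Swap each flagged token for the object of a matching triple; the paper
    instead crafts a neural query propagated over the k-hop subgraph."""
    text = " ".join(tokens).lower()
    for i in flagged:
        for s, r, o in kg:
            if s.lower() in text and r.split("_")[0] in text:
                tokens[i] = o
    return tokens

response = "Inception was directed by Spielberg".split()
grounded = k_hop_entities(KG, {"Inception"})
flagged = token_fact_critic(response, grounded)
print(" ".join(refine(response, flagged, KG)))
# Inception was directed by Christopher Nolan
```

Because the refinement only touches the flagged spans, the rest of the generated response is left untouched, which is what lets the method run on any model's outputs without retraining.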
Related papers
- A Cause-Effect Look at Alleviating Hallucination of Knowledge-grounded Dialogue Generation [51.53917938874146]
We propose a possible solution for alleviating hallucination in KGD by exploiting the dialogue-knowledge interaction.
Experimental results of our example implementation show that this method can reduce hallucination without degrading other aspects of dialogue performance.
arXiv Detail & Related papers (2024-04-04T14:45:26Z)
- Improving the Robustness of Knowledge-Grounded Dialogue via Contrastive Learning [71.8876256714229]
We propose an entity-based contrastive learning framework for improving the robustness of knowledge-grounded dialogue systems.
Our method achieves new state-of-the-art performance in terms of automatic evaluation scores.
arXiv Detail & Related papers (2024-01-09T05:16:52Z)
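A minimal sketch of what an entity-based contrastive objective can look like, assuming an InfoNCE-style loss in which the gold response is pulled toward the dialogue context while entity-swapped corruptions are pushed away. The encoder outputs, temperature, and negative-construction scheme below are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn.functional as F

def entity_contrastive_loss(ctx_emb: torch.Tensor,
                            gold_emb: torch.Tensor,
                            neg_embs: torch.Tensor,
                            temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE over one gold response and K entity-swapped negatives.
    ctx_emb: (d,), gold_emb: (d,), neg_embs: (K, d)."""
    all_embs = torch.cat([gold_emb.unsqueeze(0), neg_embs], dim=0)     # (K+1, d)
    sims = F.cosine_similarity(ctx_emb.unsqueeze(0), all_embs, dim=1)  # (K+1,)
    logits = (sims / temperature).unsqueeze(0)                         # (1, K+1)
    target = torch.zeros(1, dtype=torch.long)  # index 0 is the gold response
    return F.cross_entropy(logits, target)

# Toy usage with random vectors standing in for encoder outputs.
d, K = 16, 4
ctx, gold = torch.randn(d), torch.randn(d)
negatives = torch.randn(K, d)  # e.g. encodings of entity-swapped responses
print(entity_contrastive_loss(ctx, gold, negatives).item())
```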
- HyPoradise: An Open Baseline for Generative Speech Recognition with Large Language Models [81.56455625624041]
We introduce the first open-source benchmark to utilize external large language models (LLMs) for ASR error correction.
The proposed benchmark contains a novel dataset, HyPoradise (HP), encompassing more than 334,000 pairs of N-best hypotheses and corresponding accurate transcriptions.
Given a reasonable prompt, LLMs can use their generative capability to correct even tokens that are missing from the N-best list.
arXiv Detail & Related papers (2023-09-27T14:44:10Z)
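A hedged sketch of LLM-based N-best error correction in this spirit: the model is shown the N-best hypotheses and asked for the corrected transcript. Both build_prompt and the llm_complete callable are hypothetical placeholders for whatever completion API is used; they are not part of the benchmark itself.

```python
from typing import Callable, List

def build_prompt(nbest: List[str]) -> str:
    """Format the N-best hypotheses into an error-correction prompt."""
    hyps = "\n".join(f"{i + 1}. {h}" for i, h in enumerate(nbest))
    return ("Below are the N-best hypotheses from a speech recognizer.\n"
            f"{hyps}\n"
            "Write the most likely true transcription, correcting words even "
            "if the right one appears in none of the hypotheses:\n")

def correct(nbest: List[str], llm_complete: Callable[[str], str]) -> str:
    """Ask the LLM for a corrected transcript of the utterance."""
    return llm_complete(build_prompt(nbest)).strip()

# Toy usage with a stub standing in for a real completion API.
stub = lambda prompt: "I scream for ice cream"
print(correct(["i scream for i scream", "eye scream for ice cream"], stub))
```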
- PICK: Polished & Informed Candidate Scoring for Knowledge-Grounded Dialogue Systems [59.1250765143521]
Current knowledge-grounded dialogue systems often fail to align the generated responses with human-preferred qualities.
We propose Polished & Informed Candidate Scoring (PICK), a generation re-scoring framework.
We demonstrate the effectiveness of PICK in generating responses that are more faithful while keeping them relevant to the dialogue history.
arXiv Detail & Related papers (2023-09-19T08:27:09Z)
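A rough sketch of generation re-scoring in the spirit of PICK, assuming candidates are ranked by a weighted mix of faithfulness to the knowledge and relevance to the dialogue history. The unigram-overlap scorers and the alpha weight are crude stand-ins for the paper's actual scoring functions.

```python
from typing import List

def overlap(a: str, b: str) -> float:
    """Fraction of a's unigrams that also appear in b."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa), 1)

def pick_best(candidates: List[str], knowledge: str, history: str,
              alpha: float = 0.7) -> str:
    """Re-score candidates by weighted faithfulness + relevance, keep the best."""
    return max(candidates, key=lambda c: alpha * overlap(c, knowledge)
                                         + (1 - alpha) * overlap(c, history))

print(pick_best(
    candidates=["Christopher Nolan directed Inception.",
                "Steven Spielberg directed Inception."],
    knowledge="Inception was directed by Christopher Nolan.",
    history="Who directed Inception?"))
# Christopher Nolan directed Inception.
```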
- Diving Deep into Modes of Fact Hallucinations in Dialogue Systems [2.8360662552057323]
Knowledge Graph (KG) grounded conversations often use large pre-trained models and usually suffer from fact hallucination.
We build an entity-level hallucination detection system that provides fine-grained signals for controlling fallacious content during response generation.
arXiv Detail & Related papers (2023-01-11T13:08:57Z)
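A minimal sketch of entity-level hallucination detection, assuming each entity mention in a response is checked against the grounding triples. The regex-based mention finder is an assumption standing in for a learned tagger.

```python
import re
from typing import Dict, List, Tuple

def detect(response: str, triples: List[Tuple[str, str, str]]) -> Dict[str, str]:
    """Label each entity mention as supported by the triples or hallucinated."""
    grounded = {x.lower() for s, _, o in triples for x in (s, o)}
    mentions = re.findall(r"[A-Z][a-z]+(?: [A-Z][a-z]+)*", response)
    return {m: "supported" if m.lower() in grounded else "hallucinated"
            for m in mentions}

triples = [("Inception", "directed_by", "Christopher Nolan")]
print(detect("Inception stars Tom Hanks", triples))
# {'Inception': 'supported', 'Tom Hanks': 'hallucinated'}
```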
- RHO ($\rho$): Reducing Hallucination in Open-domain Dialogues with Knowledge Grounding [57.46495388734495]
This paper presents RHO ($\rho$), which utilizes the representations of linked entities and relation predicates from a knowledge graph (KG).
We propose (1) local knowledge grounding to combine textual embeddings with the corresponding KG embeddings; and (2) global knowledge grounding to equip RHO with multi-hop reasoning abilities via the attention mechanism.
arXiv Detail & Related papers (2022-12-03T10:36:34Z)
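A toy sketch of the two grounding ideas summarized above, assuming token and KG embeddings already exist: local grounding fuses each linked token with its KG embedding, and global grounding lets every token attend over the subgraph. The dimensions, the lookup table, and the single-head attention are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn.functional as F

d = 8
tok_emb = torch.randn(5, d)   # hidden states for five response tokens
kg_emb = torch.randn(3, d)    # embeddings of linked entities/relations
link = {0: 1, 4: 2}           # token index -> KG embedding row, from entity linking

# Local grounding: fuse each linked token with its KG embedding.
local = tok_emb.clone()
for t, e in link.items():
    local[t] = local[t] + kg_emb[e]

# Global grounding: let every token attend over the whole subgraph,
# a one-hop stand-in for the paper's multi-hop attention mechanism.
attn = F.softmax(local @ kg_emb.T / d ** 0.5, dim=-1)  # (5, 3)
grounded = local + attn @ kg_emb                       # (5, 8)
print(grounded.shape)  # torch.Size([5, 8])
```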
- A Controllable Model of Grounded Response Generation [122.7121624884747]
Current end-to-end neural conversation models inherently lack the flexibility to impose semantic control in the response generation process.
We propose a framework that we call controllable grounded response generation (CGRG).
We show that using this framework, a transformer-based model with a novel inductive attention mechanism, trained on a conversation-like Reddit dataset, outperforms strong generation baselines.
arXiv Detail & Related papers (2020-05-01T21:22:08Z)
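A small sketch of the masking idea behind an inductive attention mechanism, assuming that all positions may attend to the dialogue context but that two unrelated control phrases should not attend to each other. The sequence sizes and mask layout are illustrative assumptions, not the paper's exact scheme.

```python
import torch
import torch.nn.functional as F

n_ctx, n_c1, n_c2, d = 4, 2, 2, 8   # context tokens, two control phrases
n = n_ctx + n_c1 + n_c2
x = torch.randn(n, d)               # stand-in hidden states

# Everyone may attend to the context; the two control phrases are
# masked off from each other.
mask = torch.ones(n, n, dtype=torch.bool)
mask[n_ctx:n_ctx + n_c1, n_ctx + n_c1:] = False  # phrase 1 cannot see phrase 2
mask[n_ctx + n_c1:, n_ctx:n_ctx + n_c1] = False  # phrase 2 cannot see phrase 1

scores = (x @ x.T / d ** 0.5).masked_fill(~mask, float("-inf"))
out = F.softmax(scores, dim=-1) @ x
print(out.shape)  # torch.Size([8, 8])
```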