Robust Conversational AI with Grounded Text Generation
- URL: http://arxiv.org/abs/2009.03457v1
- Date: Mon, 7 Sep 2020 23:49:28 GMT
- Title: Robust Conversational AI with Grounded Text Generation
- Authors: Jianfeng Gao, Baolin Peng, Chunyuan Li, Jinchao Li, Shahin Shayandeh,
Lars Liden, Heung-Yeung Shum
- Abstract summary: GTG is a hybrid model which uses a large-scale Transformer neural network as its backbone.
It generates responses grounded in dialog belief state and real-world knowledge for task completion.
- Score: 77.56950706340767
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This article presents a hybrid approach based on a Grounded Text Generation
(GTG) model to building robust task bots at scale. GTG is a hybrid model which
uses a large-scale Transformer neural network as its backbone, combined with
symbol-manipulation modules for knowledge base inference and prior knowledge
encoding, to generate responses grounded in dialog belief state and real-world
knowledge for task completion. GTG is pre-trained on large amounts of raw text
and human conversational data, and can be fine-tuned to complete a wide range
of tasks.
The hybrid approach and its variants are being developed simultaneously by
multiple research teams. The primary results reported on task-oriented dialog
benchmarks are very promising, demonstrating the significant potential of this
approach. This article provides an overview of this progress and discusses
related methods and technologies that can be incorporated for building robust
conversational AI systems.
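The abstract does not include code, but the grounded-generation loop it describes can be pictured in a few lines: track a dialog belief state, use it to query a knowledge base via symbolic filtering, and condition the response on the query result. The snippet below is a minimal Python sketch with toy data; the names (`track_belief_state`, `query_kb`, `generate_response`) and the template-based generator are illustrative stand-ins, not the authors' Transformer-based GTG implementation.

```python
# Minimal sketch of grounded text generation for a task bot (hypothetical
# names; the real GTG model uses a Transformer backbone, not templates).

from dataclasses import dataclass, field
from typing import Dict, List

# Toy knowledge base standing in for a real task database.
KB: List[Dict[str, str]] = [
    {"name": "Curry Garden", "food": "indian", "area": "centre", "price": "cheap"},
    {"name": "Golden Wok",   "food": "chinese", "area": "north",  "price": "moderate"},
]

@dataclass
class BeliefState:
    """Dialog belief state: slot-value pairs inferred from the conversation."""
    slots: Dict[str, str] = field(default_factory=dict)

def track_belief_state(state: BeliefState, user_utterance: str) -> BeliefState:
    """Toy keyword tracker; in GTG this update is produced by the neural model."""
    for value in ("indian", "chinese", "cheap", "moderate", "centre", "north"):
        if value in user_utterance.lower():
            slot = "food" if value in ("indian", "chinese") else \
                   "price" if value in ("cheap", "moderate") else "area"
            state.slots[slot] = value
    return state

def query_kb(state: BeliefState) -> List[Dict[str, str]]:
    """Symbolic KB inference: filter entries that match the belief state."""
    return [row for row in KB
            if all(row.get(k) == v for k, v in state.slots.items())]

def generate_response(state: BeliefState, results: List[Dict[str, str]]) -> str:
    """Grounded response; a Transformer decoder would condition on the same inputs."""
    if not results:
        return "Sorry, I could not find a matching restaurant."
    top = results[0]
    return f"{top['name']} is a {top['price']} {top['food']} place in the {top['area']}."

if __name__ == "__main__":
    state = BeliefState()
    state = track_belief_state(state, "I want a cheap Indian restaurant in the centre.")
    print(generate_response(state, query_kb(state)))
```

In the actual GTG model, both the belief-state update and the final surface response are produced by the pre-trained Transformer, with the symbol-manipulation modules handling knowledge base inference.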
Related papers
- Enhancing Text Generation in Joint NLG/NLU Learning Through Curriculum Learning, Semi-Supervised Training, and Advanced Optimization Techniques [0.0]
This research paper developed a novel approach to improve text generation in the context of joint Natural Language Generation (NLG) and Natural Language Understanding (NLU) learning.
The data is prepared by gathering and preprocessing annotated datasets, including cleaning, tokenization, stemming, and stop-word removal.
Transformer-based encoders and decoders are used, capturing long-range dependencies and improving source-target sequence modelling.
Reinforcement learning with policy gradient techniques, semi-supervised training, improved attention mechanisms, and differentiable approximations are employed to fine-tune the models and handle complex linguistic tasks effectively.
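The summary does not spell out the preprocessing pipeline, so the following is only a minimal sketch of cleaning, tokenization, stemming, and stop-word removal in plain Python; real pipelines typically rely on NLTK or spaCy, and the stop-word list and suffix rules here are illustrative.

```python
# Hypothetical preprocessing sketch: cleaning, tokenization, naive stemming,
# and stop-word removal. Real pipelines typically use NLTK or spaCy instead.

import re
from typing import List

STOP_WORDS = {"the", "a", "an", "is", "are", "to", "of", "and", "in", "for"}

def clean(text: str) -> str:
    """Lowercase and strip everything except letters, digits, and spaces."""
    return re.sub(r"[^a-z0-9\s]", " ", text.lower())

def tokenize(text: str) -> List[str]:
    """Whitespace tokenization of the cleaned text."""
    return text.split()

def stem(token: str) -> str:
    """Very naive suffix stripping standing in for a Porter-style stemmer."""
    for suffix in ("ing", "ed", "ly", "es", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def preprocess(text: str) -> List[str]:
    tokens = tokenize(clean(text))
    return [stem(t) for t in tokens if t not in STOP_WORDS]

print(preprocess("The models are trained for generating and understanding text."))
# -> ['model', 'train', 'generat', 'understand', 'text']
```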
arXiv Detail & Related papers (2024-10-17T12:43:49Z)
- Distinguishing Chatbot from Human [1.1249583407496218]
We develop a new dataset consisting of more than 750,000 human-written paragraphs.
Based on this dataset, we apply Machine Learning (ML) techniques to determine the origin of text.
Our proposed solutions offer high classification accuracy and serve as useful tools for textual analysis.
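The summary does not name the specific ML techniques, so the sketch below assumes a common text-classification baseline: TF-IDF features with logistic regression in scikit-learn. The example texts and labels are toy placeholders, not part of the paper's 750,000-paragraph dataset.

```python
# Hypothetical baseline for human-vs-chatbot text classification: TF-IDF
# features with logistic regression (scikit-learn). The texts below are toys.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "honestly i have no idea, maybe ask someone else?",          # human
    "tbh that movie was weird but i kinda liked it",             # human
    "As an AI language model, I can provide a summary below.",   # chatbot
    "Certainly! Here is a detailed, structured explanation.",    # chatbot
]
labels = ["human", "human", "chatbot", "chatbot"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["Certainly! Here is an overview of the topic."]))
```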
arXiv Detail & Related papers (2024-08-03T13:18:04Z)
- Contrastive Transformer Learning with Proximity Data Generation for Text-Based Person Search [60.626459715780605]
Given a descriptive text query, text-based person search aims to retrieve the best-matched target person from an image gallery.
Such a cross-modal retrieval task is quite challenging due to the significant modality gap, fine-grained differences, and insufficiency of annotated data.
In this paper, we propose a simple yet effective dual Transformer model for text-based person search.
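As a rough illustration of the retrieval step (not the paper's dual Transformer), the sketch below ranks gallery-image embeddings by cosine similarity to a text-query embedding; in the actual model both embeddings would come from Transformer encoders trained with a contrastive objective, and the random vectors here are placeholders.

```python
# Illustrative cross-modal retrieval scoring (not the paper's model): rank
# gallery images by cosine similarity to the text-query embedding.

import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
text_query_emb = rng.normal(size=256)         # embedding of the text query
gallery_embs = rng.normal(size=(5, 256))      # embeddings of 5 gallery images

scores = [cosine(text_query_emb, img) for img in gallery_embs]
ranking = np.argsort(scores)[::-1]            # best-matching images first
print("retrieval order:", ranking.tolist())
```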
arXiv Detail & Related papers (2023-11-15T16:26:49Z)
- Adversarial Conversational Shaping for Intelligent Agents [0.0]
We study the performance of two models able to enhance an intelligent conversational agent through adversarial conversational shaping.
One model is able to assign rewards to both partially and fully generated text sequences.
We discuss performance with different training details: seq2seq [36] and transformers [37] in a reinforcement learning framework.
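A hedged sketch of the reward-shaping idea, assuming a discriminator that scores how human-like a partially generated reply looks: each sampled token's log-probability is weighted by the reward of its prefix, REINFORCE-style. The toy discriminator and dimensions below are illustrative, not the paper's setup.

```python
# Sketch of reward shaping on partial sequences (REINFORCE-style), assuming a
# discriminator D that scores partial replies. All names are illustrative.

import torch

vocab_size, seq_len = 50, 6
logits = torch.randn(seq_len, vocab_size, requires_grad=True)   # toy policy outputs

def discriminator_reward(prefix: torch.Tensor) -> float:
    """Toy stand-in for an adversarial discriminator scoring a partial sequence."""
    return float(torch.sigmoid(prefix.float().mean() / vocab_size))

# Sample a reply token by token and collect a reward after every prefix.
probs = torch.softmax(logits, dim=-1)
tokens = torch.multinomial(probs, num_samples=1).squeeze(-1)
log_probs = torch.log(probs[torch.arange(seq_len), tokens])
rewards = torch.tensor([discriminator_reward(tokens[: t + 1]) for t in range(seq_len)])

# REINFORCE objective: weight each token's log-probability by its prefix reward.
loss = -(rewards * log_probs).sum()
loss.backward()
print("policy-gradient loss:", loss.item())
```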
arXiv Detail & Related papers (2023-07-20T12:44:47Z)
- A Comprehensive Survey of AI-Generated Content (AIGC): A History of Generative AI from GAN to ChatGPT [63.58711128819828]
ChatGPT and other Generative AI (GAI) techniques belong to the category of Artificial Intelligence Generated Content (AIGC).
The goal of AIGC is to make the content creation process more efficient and accessible, allowing for the production of high-quality content at a faster pace.
arXiv Detail & Related papers (2023-03-07T20:36:13Z)
- An Overview on Controllable Text Generation via Variational Auto-Encoders [15.97186478109836]
Recent advances in neural-based generative modeling have reignited the hopes of having computer systems capable of conversing with humans.
Latent variable models (LVM) such as variational auto-encoders (VAEs) are designed to characterize the distributional pattern of textual data.
This overview gives an introduction to existing generation schemes, problems associated with text variational auto-encoders, and a review of several applications about the controllable generation.
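A minimal sketch of the latent-variable view, assuming a bag-of-words text VAE: encode text to a mean and log-variance, sample a latent code with the reparameterization trick, decode, and train on reconstruction plus KL terms. Controllable generation then amounts to manipulating or conditioning the latent code; the architecture below is an assumption for illustration only.

```python
# Minimal text-VAE sketch (bag-of-words reconstruction). Architecture details
# are assumptions, not taken from the surveyed models.

import torch
import torch.nn as nn

class TextVAE(nn.Module):
    def __init__(self, vocab_size: int = 100, latent_dim: int = 16):
        super().__init__()
        self.encoder = nn.Linear(vocab_size, 2 * latent_dim)   # outputs mu and logvar
        self.decoder = nn.Linear(latent_dim, vocab_size)

    def forward(self, bow: torch.Tensor):
        mu, logvar = self.encoder(bow).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.decoder(z), mu, logvar

model = TextVAE()
bow = torch.rand(8, 100)                                   # toy bag-of-words batch
logits, mu, logvar = model(bow)
recon = nn.functional.binary_cross_entropy_with_logits(logits, (bow > 0.5).float())
kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
loss = recon + kl                                          # negative ELBO
print("VAE loss:", loss.item())
```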
arXiv Detail & Related papers (2022-11-15T07:36:11Z)
- Retrieval-Augmented Transformer-XL for Close-Domain Dialog Generation [16.90730526207747]
We present a transformer-based model for multi-turn dialog response generation.
Our solution is based on a hybrid approach which augments a transformer-based generative model with a novel retrieval mechanism.
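The retrieval mechanism itself is not detailed in the summary, so the sketch below only illustrates the general retrieval-augmented pattern: fetch the most similar stored exchange with a crude lexical-overlap score and pass it to the generator as extra conditioning. The memory, similarity function, and `generate` stub are assumptions.

```python
# Sketch of the retrieval-augmented idea (not the paper's exact mechanism):
# retrieve the most similar stored exchange and condition generation on it.

from typing import List, Tuple

MEMORY: List[Tuple[str, str]] = [
    ("what time do you open", "We open at 9am on weekdays."),
    ("do you have vegan options", "Yes, several dishes on our menu are vegan."),
]

def overlap(a: str, b: str) -> int:
    """Crude lexical similarity: number of shared word types."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def retrieve(context: str) -> Tuple[str, str]:
    return max(MEMORY, key=lambda pair: overlap(context, pair[0]))

def generate(context: str, retrieved: Tuple[str, str]) -> str:
    """Stand-in for a Transformer decoder conditioned on context + retrieval."""
    return f"[conditioned on: '{retrieved[1]}'] -> reply to: '{context}'"

context = "are there vegan dishes available"
print(generate(context, retrieve(context)))
```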
arXiv Detail & Related papers (2021-05-19T16:34:33Z)
- SOLOIST: Building Task Bots at Scale with Transfer Learning and Machine Teaching [81.45928589522032]
We parameterize modular task-oriented dialog systems using a Transformer-based auto-regressive language model.
We pre-train, on heterogeneous dialog corpora, a task-grounded response generation model.
Experiments show that SOLOIST creates new state-of-the-art on well-studied task-oriented dialog benchmarks.
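One way to picture the parameterization is how a single training example might be serialized into one token sequence, combining dialog history, belief state, grounding result, and response, on which the auto-regressive language model is trained left to right. The delimiters and field names in this sketch are assumptions, not SOLOIST's actual format.

```python
# Sketch of serializing a training example for an auto-regressive dialog model
# in the SOLOIST style. Delimiters and field names are illustrative assumptions.

def serialize_turn(history, belief_state, db_result, response):
    parts = [
        "history: " + " <turn> ".join(history),
        "belief: " + ", ".join(f"{k}={v}" for k, v in belief_state.items()),
        "db: " + (db_result or "no match"),
        "response: " + response,
    ]
    return " <sep> ".join(parts)

example = serialize_turn(
    history=["i need a cheap hotel in the north"],
    belief_state={"price": "cheap", "area": "north", "type": "hotel"},
    db_result="city lodge",
    response="City Lodge is a cheap hotel in the north. Shall I book it?",
)
print(example)  # the language model is trained left-to-right on such sequences
```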
arXiv Detail & Related papers (2020-05-11T17:58:34Z)
- PALM: Pre-training an Autoencoding&Autoregressive Language Model for Context-conditioned Generation [92.7366819044397]
Self-supervised pre-training has emerged as a powerful technique for natural language understanding and generation.
This work presents PALM with a novel scheme that jointly pre-trains an autoencoding and autoregressive language model on a large unlabeled corpus.
An extensive set of experiments show that PALM achieves new state-of-the-art results on a variety of language generation benchmarks.
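A rough illustration of the joint pre-training scheme, under the assumption that a passage is split into a context (fed to the autoencoding encoder) and a continuation (the autoregressive decoder's target); the split ratio and field names below are illustrative, not PALM's exact recipe.

```python
# Rough illustration of constructing a context-conditioned pretraining example:
# the first part of a passage is the encoder input (understanding side) and
# the remainder is the decoder target (generation side). Details are assumptions.

def make_pretraining_example(passage: str, context_fraction: float = 0.8):
    words = passage.split()
    cut = int(len(words) * context_fraction)
    context, continuation = " ".join(words[:cut]), " ".join(words[cut:])
    return {"encoder_input": context, "decoder_target": continuation}

passage = ("Self-supervised pre-training learns from raw text and the model "
           "is later fine-tuned on downstream generation tasks")
print(make_pretraining_example(passage))
```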
arXiv Detail & Related papers (2020-04-14T06:25:36Z)
- Hybrid Generative-Retrieval Transformers for Dialogue Domain Adaptation [77.62366712130196]
We present the winning entry at the fast domain adaptation task of DSTC8, a hybrid generative-retrieval model based on GPT-2 fine-tuned to the multi-domain MetaLWOz dataset.
Our model uses retrieval logic as a fallback, being SoTA on MetaLWOz in human evaluation (>4% improvement over the 2nd place system) and attaining competitive generalization performance in adaptation to the unseen MultiWOZ dataset.
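The fallback logic can be pictured with a short sketch: prefer the generated reply, but return a retrieved one when the generator's confidence drops below a threshold. The stub functions and the 0.5 threshold are illustrative assumptions, not the DSTC8 system's actual scoring.

```python
# Sketch of retrieval-as-fallback: use the generated reply unless the
# generator's confidence is low. The scoring functions are illustrative stubs.

def generate_reply(context: str):
    """Stand-in for a fine-tuned GPT-2 generator returning (reply, confidence)."""
    return "I can help you book a table for tonight.", 0.42

def retrieve_reply(context: str) -> str:
    """Stand-in for retrieving the closest response from the support set."""
    return "Sure, what time would you like the reservation?"

def respond(context: str, threshold: float = 0.5) -> str:
    reply, confidence = generate_reply(context)
    return reply if confidence >= threshold else retrieve_reply(context)

print(respond("can i book a table for two at 7pm"))
```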
arXiv Detail & Related papers (2020-03-03T18:07:42Z)