The design and implementation of Language Learning Chatbot with XAI
using Ontology and Transfer Learning
- URL: http://arxiv.org/abs/2009.13984v1
- Date: Tue, 29 Sep 2020 13:11:40 GMT
- Title: The design and implementation of Language Learning Chatbot with XAI
using Ontology and Transfer Learning
- Authors: Nuobei Shi, Qin Zeng and Raymond Lee
- Abstract summary: We design three levels for systematic English learning: a phonetics level for speech recognition and pronunciation correction, a semantic level for specific-domain conversation, and a simulation of free-style conversation in English.
Our language learning agent integrates a WeChat mini-program as the front-end and a fine-tuned GPT-2 transfer-learning model as the back-end, with an ontology graph used to interpret the responses.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a transfer-learning-based English language
learning chatbot whose output, generated by GPT-2, can be explained by a
corresponding ontology graph rooted in the fine-tuning dataset. We design three
levels for systematic English learning: a phonetics level for speech
recognition and pronunciation correction, a semantic level for specific-domain
conversation, and a simulation of free-style conversation in English, the
highest level of chatbot communication, realized as a free-style conversation
agent. As an academic contribution, we implement the ontology graph to explain
the performance of free-style conversation, following the concept of XAI
(Explainable Artificial Intelligence) to visualize the connections of the
neural network in a bionic fashion and to explain the output sentences of the
language model. From an implementation perspective, our language learning agent
integrates a WeChat mini-program as the front-end and a fine-tuned GPT-2
transfer-learning model as the back-end, with an ontology graph used to
interpret the responses.
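The abstract does not spell out implementation details. As a rough, hypothetical sketch of what such a back-end could look like, the snippet below fine-tunes GPT-2 on a placeholder in-domain dialogue corpus using the Hugging Face transformers library, generates a reply, and matches generated words against a toy ontology to mimic the explanation step. The file name, hyperparameters, and ontology entries are illustrative assumptions, not the authors' configuration.

```python
# Hypothetical sketch of a GPT-2 fine-tuning back-end; not the authors' code.
from transformers import (
    DataCollatorForLanguageModeling,
    GPT2LMHeadModel,
    GPT2TokenizerFast,
    TextDataset,
    Trainer,
    TrainingArguments,
)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Plain-text corpus of in-domain English learning dialogues (placeholder file).
train_dataset = TextDataset(
    tokenizer=tokenizer,
    file_path="english_dialogues.txt",
    block_size=128,
)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gpt2-english-chatbot",
        num_train_epochs=3,
        per_device_train_batch_size=4,
    ),
    data_collator=collator,
    train_dataset=train_dataset,
)
trainer.train()

# Generate a reply for a learner's utterance.
prompt = "Learner: How do I ask for directions politely?\nTutor:"
inputs = tokenizer(prompt, return_tensors="pt")
inputs = {k: v.to(model.device) for k, v in inputs.items()}
output_ids = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
reply = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(reply)

# Toy ontology lookup (illustrative only): map content words in the reply to
# concepts from the fine-tuning domain so the response can be explained.
ontology = {"directions": "Travel", "excuse": "Politeness", "please": "Politeness"}
matched = {w: ontology[w] for w in reply.lower().split() if w in ontology}
print(matched)  # e.g. {"please": "Politeness"}
```

In the paper's setup, the explanation is driven by an ontology graph built from the fine-tuning dataset rather than a hand-written dictionary; the lookup above only illustrates the general idea of linking generated tokens back to domain concepts.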
Related papers
- Learning Phonotactics from Linguistic Informants [54.086544221761486]
Our model iteratively selects or synthesizes a data-point according to one of a range of information-theoretic policies.
We find that the information-theoretic policies that our model uses to select items to query the informant achieve sample efficiency comparable to, or greater than, fully supervised approaches.
arXiv Detail & Related papers (2024-05-08T00:18:56Z) - Curriculum-Driven Edubot: A Framework for Developing Language Learning Chatbots Through Synthesizing Conversational Data [23.168347070904318]
We present Curriculum-Driven EduBot, a framework for developing a chatbot that combines the interactive features of chatbots with the systematic material of English textbooks.
We begin by extracting pertinent topics from textbooks and using large language models to generate dialogues related to these topics.
arXiv Detail & Related papers (2023-09-28T19:14:18Z) - ComSL: A Composite Speech-Language Model for End-to-End Speech-to-Text
Translation [79.66359274050885]
We present ComSL, a speech-language model built atop a composite architecture of public pretrained speech-only and language-only models.
Our approach has demonstrated effectiveness in end-to-end speech-to-text translation tasks.
arXiv Detail & Related papers (2023-05-24T07:42:15Z) - PK-Chat: Pointer Network Guided Knowledge Driven Generative Dialogue
Model [79.64376762489164]
PK-Chat is a Pointer network guided generative dialogue model, incorporating a unified pretrained language model and a pointer network over knowledge graphs.
The words generated by PK-Chat in the dialogue are derived both from the prediction of word lists and from direct prediction over the external knowledge graph.
Based on the PK-Chat, a dialogue system is built for academic scenarios in the case of geosciences.
arXiv Detail & Related papers (2023-04-02T18:23:13Z) - Scheduled Multi-task Learning for Neural Chat Translation [66.81525961469494]
We propose a scheduled multi-task learning framework for Neural Chat Translation (NCT).
Specifically, we devise a three-stage training framework to incorporate the large-scale in-domain chat translation data into training.
Extensive experiments in four language directions verify the effectiveness and superiority of the proposed approach.
arXiv Detail & Related papers (2022-05-08T02:57:28Z) - An Approach to Inference-Driven Dialogue Management within a Social
Chatbot [10.760026478889667]
Instead of framing conversation as a sequence of response generation tasks, we model conversation as a collaborative inference process.
Our pipeline accomplishes this modelling in three broad stages.
This approach lends itself to understanding latent semantics of user inputs, flexible initiative taking, and responses that are novel and coherent with the dialogue context.
arXiv Detail & Related papers (2021-10-31T19:01:07Z) - Småprat: DialoGPT for Natural Language Generation of Swedish
Dialogue by Transfer Learning [1.6111818380407035]
State-of-the-art models for the generation of natural language dialogue have demonstrated impressive performance in simulating human-like, single-turn conversations in English.
This work investigates, through an empirical study, the potential for transferring such models to the Swedish language.
arXiv Detail & Related papers (2021-10-12T18:46:43Z) - Spoken Style Learning with Multi-modal Hierarchical Context Encoding for
Conversational Text-to-Speech Synthesis [59.27994987902646]
Research on learning spoken styles from historical conversations is still in its infancy.
Existing approaches consider only the transcripts of the historical conversations, neglecting the spoken styles in the historical speech.
We propose a spoken style learning approach with multi-modal hierarchical context encoding.
arXiv Detail & Related papers (2021-06-11T08:33:52Z) - Pre-training for Spoken Language Understanding with Joint Textual and
Phonetic Representation Learning [4.327558819000435]
We propose a novel joint textual-phonetic pre-training approach for learning spoken language representations.
Experimental results on spoken language understanding benchmarks, Fluent Speech Commands and SNIPS, show that the proposed approach significantly outperforms strong baseline models.
arXiv Detail & Related papers (2021-04-21T05:19:13Z) - Interactive Teaching for Conversational AI [2.5259192787433706]
Current conversational AI systems aim to understand a set of pre-designed requests and execute related actions.
Motivated by how children learn their first language interacting with adults, this paper describes a new Teachable AI system.
It is capable of learning new language nuggets called concepts, directly from end users using live interactive teaching sessions.
arXiv Detail & Related papers (2020-12-02T04:08:49Z) - SPLAT: Speech-Language Joint Pre-Training for Spoken Language
Understanding [61.02342238771685]
Spoken language understanding requires a model to analyze input acoustic signal to understand its linguistic content and make predictions.
Various pre-training methods have been proposed to learn rich representations from large-scale unannotated speech and text.
We propose a novel semi-supervised learning framework, SPLAT, to jointly pre-train the speech and language modules.
arXiv Detail & Related papers (2020-10-05T19:29:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.