TruthBot: An Automated Conversational Tool for Intent Learning, Curated
Information Presenting, and Fake News Alerting
- URL: http://arxiv.org/abs/2102.00509v1
- Date: Sun, 31 Jan 2021 18:23:05 GMT
- Title: TruthBot: An Automated Conversational Tool for Intent Learning, Curated
Information Presenting, and Fake News Alerting
- Authors: Ankur Gupta, Yash Varun, Prarthana Das, Nithya Muttineni, Parth
Srivastava, Hamim Zafar, Tanmoy Chakraborty, Swaprava Nath
- Abstract summary: TruthBot is designed for seeking truth (trustworthy and verified information) on specific topics.
It helps users to obtain information specific to certain topics, fact-check information, and get recent news.
TruthBot was deployed in June 2020 and is currently running.
- Score: 12.95006904081387
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We present TruthBot, an all-in-one multilingual conversational chatbot
designed for seeking truth (trustworthy and verified information) on specific
topics. It helps users to obtain information specific to certain topics,
fact-check information, and get recent news. The chatbot learns the intent of a
query by training a deep neural network from the data of the previous intents
and responds appropriately when it classifies the intent in one of the classes
above. Each class is implemented as a separate module that uses either its own
curated knowledge-base or searches the web to obtain the correct information.
The topic of the chatbot is currently set to COVID-19. However, the bot can be
easily customized to provide responses specific to any topic. Our experimental results
show that each module performs significantly better than its closest
competitor, which is verified both quantitatively and through several
user-based surveys in multiple languages. TruthBot was deployed in June
2020 and is currently running.
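The intent-routing step the abstract describes (a classifier trained on past queries that dispatches each message to a topic-information, fact-checking, or news module) can be sketched as follows. This is a minimal stand-in, not the paper's actual deep neural network; the intent classes and training phrases below are hypothetical.

```python
from collections import Counter
import math

# Hypothetical intent classes standing in for TruthBot's modules
# (topic information, fact-checking, recent news). The paper trains a
# deep neural network on logged queries; a nearest-centroid
# bag-of-words classifier is used here as a lightweight stand-in.
TRAINING_DATA = {
    "topic_info": ["what is covid", "covid symptoms", "how does the virus spread"],
    "fact_check": ["is this claim true", "verify this message", "is it fake"],
    "news":       ["latest covid news", "recent updates", "today's headlines"],
}

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# One centroid (summed bag-of-words) per intent class.
CENTROIDS = {
    intent: sum((vectorize(q) for q in examples), Counter())
    for intent, examples in TRAINING_DATA.items()
}

def classify_intent(query):
    """Route a query to the module whose centroid it resembles most."""
    return max(CENTROIDS, key=lambda intent: cosine(vectorize(query), CENTROIDS[intent]))

print(classify_intent("can you verify this claim"))  # routes to fact_check
```

Each predicted class then maps to a separate module backed by a curated knowledge base or a web search, as in the paper.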
Related papers
- Deep Learning Based Amharic Chatbot for FAQs in Universities [0.0]
This paper proposes a model that answers frequently asked questions (FAQs) in the Amharic language.
The proposed program employs tokenization, stop word removal, and stemming to analyze and categorize Amharic input sentences.
The model was integrated with Facebook Messenger and deployed on a Heroku server for 24-hour accessibility.
arXiv Detail & Related papers (2024-01-26T18:37:21Z) - NewsDialogues: Towards Proactive News Grounded Conversation [72.10055780635625]
We propose a novel task, Proactive News Grounded Conversation, in which a dialogue system can proactively lead the conversation based on some key topics of the news.
To support this novel task, we collect a human-to-human Chinese dialogue dataset NewsDialogues, which includes 1K conversations with a total of 14.6K utterances.
arXiv Detail & Related papers (2023-08-12T08:33:42Z) - Open-World Object Manipulation using Pre-trained Vision-Language Models [72.87306011500084]
For robots to follow instructions from people, they must be able to connect the rich semantic information in human vocabulary to their sensory observations and actions.
We develop a simple approach, which leverages a pre-trained vision-language model to extract object-identifying information.
In a variety of experiments on a real mobile manipulator, we find that MOO generalizes zero-shot to a wide range of novel object categories and environments.
arXiv Detail & Related papers (2023-03-02T01:55:10Z) - Leveraging Large Language Models to Power Chatbots for Collecting User
Self-Reported Data [15.808841433843742]
Large language models (LLMs) provide a new way to build chatbots by accepting natural language prompts.
We explore what design factors of prompts can help steer chatbots to talk naturally and collect data reliably.
arXiv Detail & Related papers (2023-01-14T07:29:36Z) - Implementing a Chatbot Solution for Learning Management System [0.0]
One of the main problems that chatbots face today is mimicking human language.
An extreme programming methodology was chosen to integrate ChatterBot, Pyside2, web scraping, and Tampermonkey into Blackboard.
We showed the plausibility of integrating an AI bot in an educational setting.
arXiv Detail & Related papers (2022-06-27T11:04:42Z) - A Deep Learning Approach to Integrate Human-Level Understanding in a
Chatbot [0.4632366780742501]
Unlike humans, chatbots can serve multiple customers at a time, are available 24/7, and reply in a fraction of a second.
We performed sentiment analysis, emotion detection, intent classification and named-entity recognition using deep learning to develop chatbots with humanistic understanding and intelligence.
arXiv Detail & Related papers (2021-12-31T22:26:41Z) - Few-Shot Bot: Prompt-Based Learning for Dialogue Systems [58.27337673451943]
Learning to converse using only a few examples is a great challenge in conversational AI.
The current best conversational models are either good chit-chatters (e.g., BlenderBot) or goal-oriented systems (e.g., MinTL).
We propose prompt-based few-shot learning which does not require gradient-based fine-tuning but instead uses a few examples as the only source of learning.
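The core mechanic here is assembling a few example exchanges into a prompt for a frozen language model, with no gradient updates. A minimal sketch, where the example shots and the prompt layout are illustrative assumptions rather than the paper's exact format:

```python
# Prompt-based few-shot learning for dialogue: a handful of example
# exchanges are prepended to the new user turn, and the concatenation
# is sent to a frozen language model instead of fine-tuning it.
FEW_SHOT_EXAMPLES = [
    ("Hi, how are you?", "I'm doing well, thanks for asking!"),
    ("What do you like to do?", "I enjoy chatting about books and music."),
]

def build_prompt(user_turn, shots=FEW_SHOT_EXAMPLES):
    """Concatenate example dialogues with the new turn; no gradient updates."""
    lines = []
    for user, bot in shots:
        lines.append(f"User: {user}")
        lines.append(f"Bot: {bot}")
    lines.append(f"User: {user_turn}")
    lines.append("Bot:")
    return "\n".join(lines)

print(build_prompt("Any book recommendations?"))
```

The few examples are the only source of learning; the model completes the final `Bot:` line by pattern-matching against the shots.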
arXiv Detail & Related papers (2021-10-15T14:36:45Z) - Put Chatbot into Its Interlocutor's Shoes: New Framework to Learn
Chatbot Responding with Intention [55.77218465471519]
This paper proposes an innovative framework to train chatbots to possess human-like intentions.
Our framework includes a guiding robot and an interlocutor model that plays the role of a human.
We examined our framework using three experimental setups and evaluated the guiding robot with four different metrics to demonstrate its flexibility and performance advantages.
arXiv Detail & Related papers (2021-03-30T15:24:37Z) - Machine Learning Explanations to Prevent Overtrust in Fake News
Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z) - IART: Intent-aware Response Ranking with Transformers in
Information-seeking Conversation Systems [80.0781718687327]
We analyze user intent patterns in information-seeking conversations and propose an intent-aware neural response ranking model, "IART".
IART is built on top of the integration of user intent modeling and language representation learning with the Transformer architecture.
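As a rough illustration of the idea above, intent-aware reweighting of candidate response scores might look like this. It is a toy sketch: the intent classes, logits, and affinities are assumptions, and the paper's actual model integrates intent modeling with Transformer-based language representations rather than fixed scores.

```python
import math

# Candidate responses are ranked by a base relevance score reweighted
# by the predicted intent distribution of the user's utterance.

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def rank_responses(candidates, intent_logits, affinities):
    """candidates: (text, base_relevance) pairs;
    affinities: per-candidate affinity to each intent class."""
    probs = softmax(intent_logits)
    scored = []
    for (text, relevance), affinity in zip(candidates, affinities):
        intent_bonus = sum(p * a for p, a in zip(probs, affinity))
        scored.append((relevance + intent_bonus, text))
    return [text for _, text in sorted(scored, reverse=True)]

# Utterance predicted to be mostly an information-seeking question.
ranked = rank_responses(
    candidates=[("Here is the documentation link.", 0.5),
                ("Thanks for the feedback!", 0.6)],
    intent_logits=[2.0, 0.0],            # [ask_question, give_feedback]
    affinities=[[1.0, 0.0], [0.0, 1.0]],
)
print(ranked[0])  # the information-seeking answer is ranked first
```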
arXiv Detail & Related papers (2020-02-03T05:59:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.