On Safe and Usable Chatbots for Promoting Voter Participation
- URL: http://arxiv.org/abs/2212.11219v2
- Date: Wed, 28 Dec 2022 08:09:45 GMT
- Title: On Safe and Usable Chatbots for Promoting Voter Participation
- Authors: Bharath Muppasani, Vishal Pallagani, Kausik Lakkaraju, Shuge Lei,
Biplav Srivastava, Brett Robertson, Andrea Hickerson, Vignesh Narayanan
- Abstract summary: We build a system that amplifies official information while personalizing it to users' unique needs.
Our approach can be a win-win for voters, election agencies trying to fulfill their mandate, and democracy at large.
- Score: 8.442334707366173
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Chatbots, or bots for short, are multi-modal collaborative assistants that
can help people complete useful tasks. When chatbots are mentioned in
connection with elections, they often draw negative reactions due to the fear
of misinformation and hacking. Instead, in this paper, we explore how chatbots
may be used to promote voter participation in vulnerable segments of society
like senior citizens and first-time voters. In particular, we build a system
that amplifies official information while personalizing it to users' unique
needs transparently. We discuss its design, build prototypes with frequently
asked questions (FAQ) election information for two US states that are low on an
ease-of-voting scale, and report on its initial evaluation in a focus group.
Our approach can be a win-win for voters, election agencies trying to fulfill
their mandate, and democracy at large.
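The abstract does not detail how the prototypes match user questions to the official FAQ content, so the following is a minimal illustrative sketch, assuming a simple TF-IDF retrieval layer over official FAQ entries; the FAQ text, URL, threshold, and function names are hypothetical and not taken from the paper.

```python
# Minimal sketch of an FAQ-grounded election chatbot (hypothetical; the paper
# does not specify its retrieval mechanism). Answers are limited to official
# FAQ text, so the bot amplifies rather than generates election information.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical FAQ pairs as they might appear on an election agency's page.
FAQ = [
    ("How do I register to vote?",
     "You can register online, by mail, or in person at your county office."),
    ("What ID do I need to vote in person?",
     "Bring a government-issued photo ID; see the official list of accepted IDs."),
    ("Can I vote absentee?",
     "Absentee ballots are available if you meet one of the listed excuses."),
]
OFFICIAL_URL = "https://example.gov/elections"  # placeholder for the agency's site

vectorizer = TfidfVectorizer().fit([q for q, _ in FAQ])
faq_matrix = vectorizer.transform([q for q, _ in FAQ])

def answer(user_question: str, threshold: float = 0.3) -> str:
    """Return the closest official FAQ answer, or defer to the official source."""
    sims = cosine_similarity(vectorizer.transform([user_question]), faq_matrix)[0]
    best = sims.argmax()
    if sims[best] < threshold:
        # Transparency: never improvise an answer; point to the official source.
        return f"I could not find this in the official FAQ; please check {OFFICIAL_URL}."
    matched_q, official_a = FAQ[best]
    return f"{official_a} (matched official FAQ entry: '{matched_q}')"

print(answer("how can i register?"))
```

Restricting answers to retrieved FAQ text keeps every response traceable to the official source, in the spirit of amplifying official information transparently.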
Related papers
- Exploring and Mitigating Adversarial Manipulation of Voting-Based Leaderboards [93.16294577018482]
Arena, the most popular benchmark of this type, ranks models by asking users to choose the better of two responses produced by randomly selected models.
We show that an attacker can alter the leaderboard (to promote their favorite model or demote competitors) at the cost of roughly a thousand votes.
Our attack consists of two steps: first, we show how an attacker can determine which model was used to generate a given reply with more than 95% accuracy; and then, the attacker can use this information to consistently vote against a target model.
arXiv Detail & Related papers (2025-01-13T17:12:38Z)
- Sleeper Social Bots: a new generation of AI disinformation bots are already a political threat [0.0]
"Sleeper social bots" are AI-driven social bots created to spread disinformation and manipulate public opinion.
Preliminary findings suggest these bots can convincingly pass as human users, actively participate in conversations, and effectively disseminate disinformation.
The implications of our research point to the significant challenges posed by social bots in the upcoming 2024 U.S. presidential election and beyond.
arXiv Detail & Related papers (2024-08-07T19:57:10Z)
- Chatbot Arena: An Open Platform for Evaluating LLMs by Human Preference [48.99117537559644]
We introduce Arena, an open platform for evaluating Large Language Models (LLMs) based on human preferences.
Our methodology employs a pairwise comparison approach and leverages input from a diverse user base through crowdsourcing.
This paper describes the platform, analyzes the data collected so far, and explains the statistical methods used for efficient and accurate evaluation and ranking of models.
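The summary does not spell out the rating procedure, so here is a minimal sketch of one standard way to turn pairwise votes into a ranking, using a simple Elo-style update; the vote data and the K parameter are hypothetical, and the platform's own estimators may differ (e.g., Bradley-Terry-style models).

```python
# Illustrative sketch: turning pairwise "which response is better?" votes into
# model ratings with an Elo-style update. The vote data below are placeholders.
from collections import defaultdict

K = 4  # update step size (hypothetical)

def expected(r_a: float, r_b: float) -> float:
    """Expected win probability of A over B under a logistic model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def rate(votes, base: float = 1000.0) -> dict:
    """votes: iterable of (model_a, model_b, winner), winner in {'a', 'b'}."""
    ratings = defaultdict(lambda: base)
    for a, b, winner in votes:
        e_a = expected(ratings[a], ratings[b])
        s_a = 1.0 if winner == "a" else 0.0
        ratings[a] += K * (s_a - e_a)
        ratings[b] += K * ((1.0 - s_a) - (1.0 - e_a))
    return dict(ratings)

votes = [("model-x", "model-y", "a"), ("model-y", "model-z", "b"),
         ("model-x", "model-z", "a")]
print(sorted(rate(votes).items(), key=lambda kv: -kv[1]))  # leaderboard order
```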
arXiv Detail & Related papers (2024-03-07T01:22:38Z)
- VoteLab: A Modular and Adaptive Experimentation Platform for Online Collective Decision Making [0.9503786527351696]
This paper introduces VoteLab, an open-source platform for modular and adaptive design of voting experiments.
It lets users visually and interactively build reusable campaigns with a choice of different voting methods.
Voters can easily respond to subscribed voting questions on a smartphone.
arXiv Detail & Related papers (2023-07-20T14:26:21Z)
- Bots don't Vote, but They Surely Bother! A Study of Anomalous Accounts in a National Referendum [1.5609988622100526]
We present a characterization of the discussion on Twitter about the 2020 Chilean constitutional referendum.
The characterization uses a profile-oriented analysis that enables the isolation of anomalous content using machine learning.
We measure how anomalous accounts (some of which are automated bots) produce content and interact to promote (false) information.
arXiv Detail & Related papers (2022-03-08T15:02:51Z)
- Identification of Twitter Bots based on an Explainable ML Framework: the US 2020 Elections Case Study [72.61531092316092]
This paper focuses on the design of a novel system for identifying Twitter bots based on labeled Twitter data.
A supervised machine learning (ML) framework is adopted, using the Extreme Gradient Boosting (XGBoost) algorithm.
Our study also deploys Shapley Additive Explanations (SHAP) for explaining the ML model predictions.
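As a rough illustration of the pipeline just described (an XGBoost classifier explained with SHAP), here is a minimal sketch on synthetic account features; the feature names, data, and hyperparameters are placeholders, not the study's.

```python
# Sketch: XGBoost bot classifier plus SHAP feature attributions.
# Features and labels are synthetic placeholders, not the paper's dataset.
import numpy as np
import xgboost as xgb
import shap

rng = np.random.default_rng(0)
feature_names = ["tweets_per_day", "followers", "account_age_days", "url_ratio"]

# Synthetic accounts: "bots" (label 1) tweet more and have younger accounts.
n = 500
X = rng.normal(size=(n, len(feature_names)))
y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = xgb.XGBClassifier(n_estimators=100, max_depth=3)
model.fit(X, y)

# SHAP attributions: how much each feature pushes a prediction toward "bot".
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])
for name, contribution in zip(feature_names, shap_values[0]):
    print(f"{name}: {contribution:+.3f}")
```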
arXiv Detail & Related papers (2021-12-08T14:12:24Z)
- CheerBots: Chatbots toward Empathy and Emotion using Reinforcement Learning [60.348822346249854]
This study presents a framework in which several empathetic chatbots are built to understand users' implied feelings and reply empathetically over multiple dialogue turns.
We call these chatbots CheerBots. CheerBots can be retrieval-based or generative-based and are fine-tuned with deep reinforcement learning.
To respond empathetically, we develop a simulating agent, the Conceptual Human Model, which aids CheerBots during training by accounting for future changes in the user's emotional state, so as to arouse sympathy.
arXiv Detail & Related papers (2021-10-08T07:44:47Z)
- Put Chatbot into Its Interlocutor's Shoes: New Framework to Learn Chatbot Responding with Intention [55.77218465471519]
This paper proposes an innovative framework to train chatbots to possess human-like intentions.
Our framework includes a guiding robot and an interlocutor model that plays the role of a human.
We examine our framework using three experimental setups and evaluate the guiding robot with four different metrics to demonstrate its flexibility and performance advantages.
arXiv Detail & Related papers (2021-03-30T15:24:37Z)
- CASS: Towards Building a Social-Support Chatbot for Online Health Community [67.45813419121603]
The CASS architecture is based on advanced neural network algorithms.
It can handle new inputs from users and generate a variety of responses to them.
In a follow-up field experiment, CASS proved useful in supporting individual members who seek emotional support.
arXiv Detail & Related papers (2021-01-04T05:52:03Z)
- Personalized Chatbot Trustworthiness Ratings [19.537492400265577]
We envision a personalized rating methodology for chatbots that relies on separate rating modules for each issue.
The method is independent of the specific trust issues and is parametric to the aggregation procedure.
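A minimal sketch of that modular design, assuming each trust issue gets its own rating module and the caller supplies the aggregation procedure; the issues, scores, and weights below are hypothetical.

```python
# Sketch: per-issue trust ratings combined by a pluggable aggregation procedure.
# Issues, scoring logic, and weights are hypothetical placeholders.
from typing import Callable, Dict

RatingModule = Callable[[str], float]  # maps a conversation log to a [0, 1] score

def bias_module(log: str) -> float:
    return 0.8  # placeholder per-issue score

def abusive_language_module(log: str) -> float:
    return 0.9  # placeholder per-issue score

MODULES: Dict[str, RatingModule] = {
    "bias": bias_module,
    "abusive_language": abusive_language_module,
}

def rate(log: str, aggregate: Callable[[Dict[str, float]], float]) -> float:
    """Score each issue independently, then combine with the given aggregator."""
    scores = {issue: module(log) for issue, module in MODULES.items()}
    return aggregate(scores)

# Example: a user-specific weighted average reflecting what this user cares about.
weights = {"bias": 0.7, "abusive_language": 0.3}
print(rate("...conversation log...", lambda s: sum(weights[i] * v for i, v in s.items())))
```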
arXiv Detail & Related papers (2020-05-13T22:42:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.