The R-U-A-Robot Dataset: Helping Avoid Chatbot Deception by Detecting
User Questions About Human or Non-Human Identity
- URL: http://arxiv.org/abs/2106.02692v1
- Date: Fri, 4 Jun 2021 20:04:33 GMT
- Authors: David Gros, Yu Li, Zhou Yu
- Abstract summary: We aim to understand how system designers might allow their systems to confirm their non-human identity.
We collect over 2,500 phrasings related to the intent of "Are you a robot?"
We compare classifiers to recognize the intent and discuss the precision/recall and model complexity tradeoffs.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Humans are increasingly interacting with machines through language, sometimes
in contexts where the user may not know they are talking to a machine (like
over the phone or a text chatbot). We aim to understand how system designers
and researchers might allow their systems to confirm their non-human identity. We
collect over 2,500 phrasings related to the intent of "Are you a robot?". This
is paired with over 2,500 adversarially selected utterances where only
confirming the system is non-human would be insufficient or disfluent. We
compare classifiers to recognize the intent and discuss the precision/recall
and model complexity tradeoffs. Such classifiers could be integrated into
dialog systems to avoid undesired deception. We then explore how both a
generative research model (Blender) as well as two deployed systems (Amazon
Alexa, Google Assistant) handle this intent, finding that systems often fail to
confirm their non-human identity. Finally, we try to understand what a good
response to the intent would be, and conduct a user study to compare the
important aspects when responding to this intent.
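The abstract describes comparing classifiers for this intent and weighing precision against recall. As a minimal illustration of that tradeoff (not the paper's actual models), the sketch below scores utterances by overlap with a hypothetical list of identity-related cue words and varies a decision threshold; all cue words, thresholds, and labels are illustrative assumptions, not drawn from the R-U-A-Robot dataset:

```python
# Hypothetical sketch of a threshold-based intent detector for the
# "are you a robot?" intent. The cue-word list and threshold are
# illustrative assumptions, not the paper's method or data.

ROBOT_CUE_WORDS = {"robot", "bot", "human", "machine", "ai", "real", "person"}

def intent_score(utterance: str) -> float:
    """Fraction of tokens that are identity-related cue words."""
    tokens = utterance.lower().replace("?", "").split()
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in ROBOT_CUE_WORDS)
    return hits / len(tokens)

def classify(utterance: str, threshold: float = 0.2) -> bool:
    """Predict True if the utterance carries the are-you-a-robot intent."""
    return intent_score(utterance) >= threshold

def precision_recall(examples, threshold):
    """examples: list of (utterance, gold_label) pairs."""
    tp = fp = fn = 0
    for text, gold in examples:
        pred = classify(text, threshold)
        if pred and gold:
            tp += 1
        elif pred and not gold:
            fp += 1
        elif not pred and gold:
            fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

Lowering the threshold catches more paraphrases of the intent (higher recall) but also fires on the adversarial near-misses the dataset pairs them with (lower precision), which is exactly the tradeoff the paper's classifier comparison probes.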
Related papers
- Revisiting Proprioceptive Sensing for Articulated Object Manipulation [1.4772379476524404]
We create a system that uses proprioceptive sensing to open cabinets with a position-controlled robot and a parallel gripper.
We perform a qualitative evaluation of this system, where we find that slip between the gripper and handle limits the performance.
This poses the question: should we make more use of proprioceptive information during contact in articulated object manipulation systems, or is it not worth the added complexity, and can we manage with vision alone?
arXiv Detail & Related papers (2023-05-16T16:31:10Z)
- The System Model and the User Model: Exploring AI Dashboard Design [79.81291473899591]
We argue that sophisticated AI systems should have dashboards, just like all other complicated devices.
We conjecture that, for many systems, the two most important models will be of the user and of the system itself.
Finding ways to identify, interpret, and display these two models should be a core part of interface research for AI.
arXiv Detail & Related papers (2023-05-04T00:22:49Z)
- Open-World Object Manipulation using Pre-trained Vision-Language Models [72.87306011500084]
For robots to follow instructions from people, they must be able to connect the rich semantic information in human vocabulary to their sensory observations and actions.
We develop a simple approach, which leverages a pre-trained vision-language model to extract object-identifying information.
In a variety of experiments on a real mobile manipulator, we find that MOO generalizes zero-shot to a wide range of novel object categories and environments.
arXiv Detail & Related papers (2023-03-02T01:55:10Z)
- Robots-Dont-Cry: Understanding Falsely Anthropomorphic Utterances in Dialog Systems [64.10696852552103]
Highly anthropomorphic responses might make users uncomfortable or implicitly deceive them into thinking they are interacting with a human.
We collect human ratings on the feasibility of approximately 900 two-turn dialogs sampled from 9 diverse data sources.
arXiv Detail & Related papers (2022-10-22T12:10:44Z)
- Examining Audio Communication Mechanisms for Supervising Fleets of Agricultural Robots [2.76240219662896]
We develop a simulation platform where agbots are deployed across a field, randomly encounter failures, and call for help from the operator.
As the agbots report errors, various audio communication mechanisms are tested to convey which robot failed and what type of failure occurred.
A user study was conducted to test three audio communication methods: earcons, single-phrase commands, and full sentence communication.
arXiv Detail & Related papers (2022-08-22T17:19:20Z)
- Two ways to make your robot proactive: reasoning about human intentions, or reasoning about possible futures [69.03494351066846]
We investigate two ways to make robots proactive.
One way is to recognize humans' intentions and to act to fulfill them, like opening the door that you are about to cross.
The other way is to reason about possible future threats or opportunities and to act to prevent or to foster them.
arXiv Detail & Related papers (2022-05-11T13:33:14Z)
- One System to Rule them All: a Universal Intent Recognition System for Customer Service Chatbots [0.0]
We propose the development of a universal intent recognition system.
It is trained to recognize a selected group of 11 intents common in 28 different chatbots.
The proposed system discriminates between those universal intents with a balanced accuracy of up to 80.4%.
arXiv Detail & Related papers (2021-12-15T16:45:55Z)
- Talk-to-Resolve: Combining scene understanding and spatial dialogue to resolve granular task ambiguity for a collocated robot [15.408128612723882]
The utility of collocating robots largely depends on the easy and intuitive interaction mechanism with the human.
We present a system called Talk-to-Resolve (TTR) that enables a robot to initiate a coherent dialogue exchange with the instructor.
Our system can identify stalemates and resolve them through appropriate dialogue exchanges with 82% accuracy.
arXiv Detail & Related papers (2021-11-22T10:42:59Z)
- Joint Inference of States, Robot Knowledge, and Human (False-)Beliefs [90.20235972293801]
Aiming to understand how human (false-)beliefs, a core socio-cognitive ability, affect human interactions with robots, this paper proposes a graphical model to represent object states, robot knowledge, and human (false-)beliefs.
An inference algorithm is derived to fuse the individual parse graphs from all robots across multiple views into a joint parse graph, which affords more effective reasoning and overcomes errors originating from any single view.
arXiv Detail & Related papers (2020-04-25T23:02:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.