"Alexa, what do you do for fun?" Characterizing playful requests with
virtual assistants
- URL: http://arxiv.org/abs/2105.05571v1
- Date: Wed, 12 May 2021 10:48:00 GMT
- Title: "Alexa, what do you do for fun?" Characterizing playful requests with
virtual assistants
- Authors: Chen Shani, Alexander Libov, Sofia Tolmach, Liane Lewin-Eytan, Yoelle
Maarek, Dafna Shahaf
- Abstract summary: We introduce a taxonomy of playful requests rooted in theories of humor and refined by analyzing real-world traffic from Alexa.
Our conjecture is that understanding such utterances will improve user experience with virtual assistants.
- Score: 68.48043257125678
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Virtual assistants such as Amazon's Alexa, Apple's Siri, Google Home, and
Microsoft's Cortana are becoming ubiquitous in our daily lives and
successfully help users in various daily tasks, such as making phone calls or
playing music. Yet, they still struggle with playful utterances, which are not
meant to be interpreted literally. Examples include jokes or absurd requests or
questions such as, "Are you afraid of the dark?", "Who let the dogs out?", or
"Order a zillion gummy bears". Today, virtual assistants often return
irrelevant answers to such utterances, except for hard-coded ones addressed by
canned replies.
To address the challenge of automatically detecting playful utterances, we
first characterize the different types of playful human-virtual assistant
interaction. We introduce a taxonomy of playful requests rooted in theories of
humor and refined by analyzing real-world traffic from Alexa. We then focus on
one node, personification, where users refer to the virtual assistant as a
person ("What do you do for fun?"). Our conjecture is that understanding such
utterances will improve user experience with virtual assistants. We conducted a
Wizard-of-Oz user study and showed that endowing virtual assistants with the
ability to identify humorous opportunities indeed has the potential to increase
user satisfaction. We hope this work will contribute to the understanding of
the landscape of the problem and inspire novel ideas and techniques towards the
vision of giving virtual assistants a sense of humor.
Related papers
- WildVis: Open Source Visualizer for Million-Scale Chat Logs in the Wild [88.05964311416717]
We introduce WildVis, an interactive tool that enables fast, versatile, and large-scale conversation analysis.
WildVis provides search and visualization capabilities in the text and embedding spaces based on a list of criteria.
We demonstrate WildVis' utility through three case studies: facilitating misuse research, visualizing and comparing topic distributions across datasets, and characterizing user-specific conversation patterns.
arXiv Detail & Related papers (2024-09-05T17:59:15Z) - Can AI Assistants Know What They Don't Know? [79.6178700946602]
An AI assistant's refusal to answer questions it does not know is a crucial method for reducing hallucinations and making the assistant truthful.
We construct a model-specific "I don't know" (Idk) dataset for an assistant, which contains its known and unknown questions.
After alignment with Idk datasets, the assistant can refuse to answer most of its unknown questions.
arXiv Detail & Related papers (2024-01-24T07:34:55Z) - HoloAssist: an Egocentric Human Interaction Dataset for Interactive AI
Assistants in the Real World [48.90399899928823]
This work is part of a broader research effort to develop intelligent agents that can interactively guide humans through performing tasks in the physical world.
We introduce HoloAssist, a large-scale egocentric human interaction dataset.
We present key insights into how human assistants correct mistakes, intervene in the task completion procedure, and ground their instructions to the environment.
arXiv Detail & Related papers (2023-09-29T07:17:43Z) - Rewriting the Script: Adapting Text Instructions for Voice Interaction [39.54213483588498]
We study the limitations of the dominant approach voice assistants take to complex task guidance.
We propose eight ways in which voice assistants can transform written sources into forms that are readily communicated through spoken conversation.
arXiv Detail & Related papers (2023-06-16T17:43:00Z) - Virtual Mouse And Assistant: A Technological Revolution Of Artificial
Intelligence [0.0]
The purpose of this paper is to enhance the performance of the virtual assistant.
Virtual assistants can perform practically any specific smartphone or PC activity that you could complete on your own.
arXiv Detail & Related papers (2023-03-11T05:00:06Z) - Episodic Memory Question Answering [55.83870351196461]
We envision a scenario wherein the human communicates with an AI agent powering an augmented reality device by asking questions.
In order to succeed, the ego AI assistant must construct semantically rich and efficient scene memories.
We introduce a new task - Episodic Memory Question Answering (EMQA)
We show that our choice of episodic scene memory outperforms naive, off-the-shelf solutions for the task as well as a host of very competitive baselines.
arXiv Detail & Related papers (2022-05-03T17:28:43Z) - Iconary: A Pictionary-Based Game for Testing Multimodal Communication
with Drawings and Text [70.14613727284741]
Communicating with humans is challenging for AIs because it requires a shared understanding of the world, complex semantics, and at times multi-modal gestures.
We investigate these challenges in the context of Iconary, a collaborative game of drawing and guessing based on Pictionary.
We propose models to play Iconary and train them on over 55,000 games between human players.
arXiv Detail & Related papers (2021-12-01T19:41:03Z) - Crossing the Tepper Line: An Emerging Ontology for Describing the
Dynamic Sociality of Embodied AI [0.9176056742068814]
We show how embodied AI can manifest as "socially embodied AI".
We define this as the state that embodied AI "circumstantially" takes on within interactive contexts when perceived as both social and agentic by people.
arXiv Detail & Related papers (2021-03-15T00:45:44Z) - Challenges in Supporting Exploratory Search through Voice Assistants [9.861790101853863]
As people get more familiar with voice assistants, they may increase their expectations for more complex tasks.
We outline four challenges in designing voice assistants that can better support exploratory search.
arXiv Detail & Related papers (2020-03-06T01:10:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.