Challenges in Supporting Exploratory Search through Voice Assistants
- URL: http://arxiv.org/abs/2003.02986v1
- Date: Fri, 6 Mar 2020 01:10:39 GMT
- Title: Challenges in Supporting Exploratory Search through Voice Assistants
- Authors: Xiao Ma, Ariel Liu
- Abstract summary: As people get more familiar with voice assistants, they may increase their expectations for more complex tasks.
We outline four challenges in designing voice assistants that can better support exploratory search.
- Score: 9.861790101853863
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Voice assistants have been successfully adopted for simple, routine tasks,
such as asking for the weather or setting an alarm. However, as people get more
familiar with voice assistants, they may increase their expectations for more
complex tasks, such as exploratory search-- e.g., "What should I do when I
visit Paris with kids? Oh, and ideally not too expensive." Compared to simple
search tasks such as "How tall is the Eiffel Tower?", which can be answered
with a single-shot answer, the response to exploratory search is more nuanced,
especially through voice-based assistants. In this paper, we outline four
challenges in designing voice assistants that can better support exploratory
search: addressing situationally induced impairments; working with mixed-modal
interactions; designing for diverse populations; and meeting users'
expectations and gaining their trust. Addressing these challenges is important
for developing more "intelligent" voice-based personal assistants.
Related papers
- Can AI Assistants Know What They Don't Know? [79.6178700946602]
An AI assistant's refusal to answer questions it cannot answer is a crucial way to reduce hallucinations and keep the assistant truthful.
We construct a model-specific "I don't know" (Idk) dataset for an assistant, which contains its known and unknown questions.
After alignment with Idk datasets, the assistant can refuse to answer most of its unknown questions.
arXiv Detail & Related papers (2024-01-24T07:34:55Z)
- Voice-Based Smart Assistant System for Vehicles using RASA [0.0]
This paper focuses on the development of a voice-based smart assistance application for vehicles based on the RASA framework.
The smart assistant provides functionality such as navigation, making calls, fetching weather forecasts and the latest news, and playing music, all operated entirely by voice.
arXiv Detail & Related papers (2023-12-04T05:48:18Z)
- Advancing the Search Frontier with AI Agents [6.839870353268828]
Complex search tasks require more than support for rudimentary fact finding or re-finding.
The recent emergence of generative artificial intelligence (AI) has the potential to offer further assistance to searchers.
This article explores these issues and how AI agents are advancing the frontier of search system capabilities.
arXiv Detail & Related papers (2023-11-02T13:43:22Z)
- Follow-on Question Suggestion via Voice Hints for Voice Assistants [29.531005346608215]
We tackle the novel task of suggesting questions with compact and natural voice hints to allow users to ask follow-up questions.
We propose baselines and an approach using sequence-to-sequence Transformers to generate spoken hints from a list of questions.
Results show that a naive approach of concatenating suggested questions creates poor voice hints.
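The contrast between naive concatenation and a compact voice hint can be illustrated with a toy sketch. This is a hand-written heuristic for illustration only, not the sequence-to-sequence Transformer approach the paper proposes; the function names and example questions are invented here.

```python
# Illustrative contrast: naive concatenation of suggested questions vs. a
# more compact, natural-sounding voice hint (hand-written heuristic, not the
# paper's seq2seq model).

def naive_hint(questions: list[str]) -> str:
    """Naively concatenate the suggestions -- verbose and stilted when spoken."""
    return "You can also ask: " + " ".join(questions)

def compact_hint(questions: list[str]) -> str:
    """Fold the suggestions into one flowing sentence, joined with 'or'."""
    phrases = [q.rstrip("?") for q in questions]
    phrases = [p[0].lower() + p[1:] for p in phrases]  # mid-sentence casing
    return "You could also ask " + ", or ".join(phrases) + "."

questions = ["How tall is the Eiffel Tower?", "When was the Eiffel Tower built?"]
print(naive_hint(questions))
print(compact_hint(questions))
```

Spoken aloud, the second form reads as a single natural sentence rather than a list of disjoint questions, which is the quality gap the paper's results point to.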
arXiv Detail & Related papers (2023-10-25T22:22:18Z)
- Rewriting the Script: Adapting Text Instructions for Voice Interaction [39.54213483588498]
We study the limitations of the dominant approach voice assistants take to complex task guidance.
We propose eight ways in which voice assistants can transform written sources into forms that are readily communicated through spoken conversation.
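One plausible transformation of this kind is breaking a long written step into short spoken turns so the listener is not overloaded. The sketch below is a hypothetical example of that single idea, not one of the paper's eight proposed transformations; the chunking rule and word limit are assumptions made here.

```python
import re

def to_spoken_turns(step: str, max_words: int = 12) -> list[str]:
    """Break a written instruction at clause boundaries (commas, semicolons)
    into short utterances suitable for turn-by-turn spoken delivery."""
    clauses = re.split(r"(?<=[,;])\s+", step)
    turns, current = [], ""
    for clause in clauses:
        candidate = f"{current} {clause}".strip()
        if current and len(candidate.split()) > max_words:
            turns.append(current)   # flush the current turn and start a new one
            current = clause
        else:
            current = candidate
    if current:
        turns.append(current)
    return turns

step = ("Preheat the oven to 180 degrees, line a baking tray with parchment, "
        "then spread the nuts in a single layer and toast for eight minutes.")
for turn in to_spoken_turns(step):
    print(turn)
```

A real system would pause between turns and wait for a "next" cue, rather than reading the whole written step at once.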
arXiv Detail & Related papers (2023-06-16T17:43:00Z)
- Prompting the Hidden Talent of Web-Scale Speech Models for Zero-Shot Task Generalization [61.60501633397704]
We investigate the emergent abilities of the recently proposed web-scale speech model Whisper, by adapting it to unseen tasks with prompt engineering.
We design task-specific prompts, by either leveraging another large-scale model, or simply manipulating the special tokens in the default prompts.
Experiments show that our proposed prompts improve performance by 10% to 45% on the three zero-shot tasks, and even outperform SotA supervised models on some datasets.
arXiv Detail & Related papers (2023-05-18T16:32:58Z)
- Collaboration with Conversational AI Assistants for UX Evaluation: Questions and How to Ask them (Voice vs. Text) [18.884080068561843]
We conducted a Wizard-of-Oz design probe study with 20 participants who interacted with simulated AI assistants via text or voice.
We found that participants asked for five categories of information: user actions, user mental model, help from the AI assistant, product and task information, and user demographics.
The text assistant was perceived as significantly more efficient, but both were rated equally in satisfaction and trust.
arXiv Detail & Related papers (2023-03-07T03:59:14Z)
- AVLEN: Audio-Visual-Language Embodied Navigation in 3D Environments [60.98664330268192]
We present AVLEN -- an interactive agent for Audio-Visual-Language Embodied Navigation.
The goal of AVLEN is to localize an audio event by navigating the 3D visual world.
To realize these abilities, AVLEN uses a multimodal hierarchical reinforcement learning backbone.
arXiv Detail & Related papers (2022-10-14T16:35:06Z)
- Kaggle Competition: Cantonese Audio-Visual Speech Recognition for In-car Commands [48.155806720847394]
In-car smart assistants should be able to process general as well as car-related commands.
Most existing datasets cover major languages, such as English and Chinese.
We propose Cantonese Audio-Visual Speech Recognition for In-car Commands.
arXiv Detail & Related papers (2022-07-06T13:31:56Z)
- "Alexa, what do you do for fun?" Characterizing playful requests with virtual assistants [68.48043257125678]
We introduce a taxonomy of playful requests rooted in theories of humor and refined by analyzing real-world traffic from Alexa.
Our conjecture is that understanding such utterances will improve user experience with virtual assistants.
arXiv Detail & Related papers (2021-05-12T10:48:00Z)
- Brain-inspired Search Engine Assistant based on Knowledge Graph [53.89429854626489]
DeveloperBot is a brain-inspired search engine assistant built on a knowledge graph.
It constructs a multi-layer query graph by splitting a complex multi-constraint query into several ordered constraints.
It then models constraint reasoning as a subgraph search, inspired by the spreading-activation model from cognitive science.
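The core idea, splitting a multi-constraint query into ordered constraints that each narrow a candidate subgraph, can be sketched on a toy graph. This is a minimal illustration of the decomposition idea, not DeveloperBot itself: the graph, entities, and constraint encoding are all invented here.

```python
# Toy knowledge graph: entity -> set of attribute facts (hypothetical data).
graph = {
    "pandas": {"language:python", "task:dataframes", "license:open-source"},
    "polars": {"language:python", "task:dataframes", "license:open-source"},
    "dplyr":  {"language:r",      "task:dataframes", "license:open-source"},
    "matlab": {"language:matlab", "task:dataframes", "license:commercial"},
}

def answer(constraints: list[str]) -> set[str]:
    """Apply ordered constraints one by one, each shrinking the candidate set
    (a stand-in for narrowing the searched subgraph)."""
    candidates = set(graph)
    for constraint in constraints:
        candidates = {e for e in candidates if constraint in graph[e]}
    return candidates

# "open-source dataframe library for Python", decomposed into ordered constraints:
query = ["task:dataframes", "language:python", "license:open-source"]
print(sorted(answer(query)))  # ['pandas', 'polars']
```

Spreading activation would replace the hard set-intersection with graded activation scores propagated through graph edges, but the ordered-narrowing structure is the same.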
arXiv Detail & Related papers (2020-12-25T06:36:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.