Collaboration with Conversational AI Assistants for UX Evaluation:
Questions and How to Ask them (Voice vs. Text)
- URL: http://arxiv.org/abs/2303.03638v1
- Date: Tue, 7 Mar 2023 03:59:14 GMT
- Title: Collaboration with Conversational AI Assistants for UX Evaluation:
Questions and How to Ask them (Voice vs. Text)
- Authors: Emily Kuang and Ehsan Jahangirzadeh Soure and Mingming Fan and Jian
Zhao and Kristen Shinohara
- Abstract summary: We conducted a Wizard-of-Oz design probe study with 20 participants who interacted with simulated AI assistants via text or voice.
We found that participants asked for five categories of information: user actions, user mental model, help from the AI assistant, product and task information, and user demographics.
The text assistant was perceived as significantly more efficient, but both were rated equally in satisfaction and trust.
- Score: 18.884080068561843
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: AI is promising in assisting UX evaluators with analyzing usability tests,
but its judgments are typically presented as non-interactive visualizations.
Evaluators may have questions about test recordings, but have no way of asking
them. Interactive conversational assistants provide a Q&A dynamic that may
improve analysis efficiency and evaluator autonomy. To understand the full
range of analysis-related questions, we conducted a Wizard-of-Oz design probe
study with 20 participants who interacted with simulated AI assistants via text
or voice. We found that participants asked for five categories of information:
user actions, user mental model, help from the AI assistant, product and task
information, and user demographics. Those who used the text assistant asked
more questions, but the question lengths were similar. The text assistant was
perceived as significantly more efficient, but both were rated equally in
satisfaction and trust. We also provide design considerations for future
conversational AI assistants for UX evaluation.
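The five question categories could be represented as a simple coding scheme. The sketch below is purely illustrative: the example keywords and the keyword-matching heuristic are assumptions for demonstration, not the paper's actual coding procedure.

```python
# Hypothetical sketch of the five question categories reported in the study.
# The keywords below are illustrative only, not the paper's coding scheme.
CATEGORIES = {
    "user_actions": ["click", "scroll", "navigate"],
    "user_mental_model": ["think", "expect", "confus"],
    "assistant_help": ["can you", "show me", "summarize"],
    "product_task_info": ["task", "product", "feature"],
    "user_demographics": ["age", "gender", "experience"],
}

def categorize_question(question: str) -> list[str]:
    """Return every category whose keywords appear in the question."""
    q = question.lower()
    return [cat for cat, kws in CATEGORIES.items() if any(k in q for k in kws)]
```

A coding scheme like this would let analysts tag transcripts from either modality (voice or text) with the same category labels.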
Related papers
- Can AI Assistance Aid in the Grading of Handwritten Answer Sheets? [2.025468874117372]
This work introduces an AI-assisted grading pipeline.
The pipeline first uses text detection to automatically detect question regions present in a question paper PDF.
Next, it uses SOTA text detection methods to highlight important keywords present in the handwritten answer regions of scanned answer sheets to assist in the grading process.
arXiv Detail & Related papers (2024-08-23T07:00:25Z)
- Using AI-Based Coding Assistants in Practice: State of Affairs, Perceptions, and Ways Forward [9.177785129949]
We aim to better understand how specifically developers are using AI assistants.
We carried out a large-scale survey on how AI assistants are used.
arXiv Detail & Related papers (2024-06-11T23:10:43Z)
- The Ethics of Advanced AI Assistants [53.89899371095332]
This paper focuses on the opportunities and the ethical and societal risks posed by advanced AI assistants.
We define advanced AI assistants as artificial agents with natural language interfaces, whose function is to plan and execute sequences of actions on behalf of a user.
We consider the deployment of advanced assistants at a societal scale, focusing on cooperation, equity and access, misinformation, economic impact, the environment and how best to evaluate advanced AI assistants.
arXiv Detail & Related papers (2024-04-24T23:18:46Z)
- Beyond Static Evaluation: A Dynamic Approach to Assessing AI Assistants' API Invocation Capabilities [48.922660354417204]
We propose Automated Dynamic Evaluation (AutoDE) to assess an assistant's API call capability without human involvement.
In our framework, we endeavor to closely mirror genuine human conversation patterns in human-machine interactions.
arXiv Detail & Related papers (2024-03-17T07:34:12Z)
- Can AI Assistants Know What They Don't Know? [79.6178700946602]
An AI assistant's refusal to answer questions it does not know is a crucial method for reducing hallucinations and making the assistant truthful.
We construct a model-specific "I don't know" (Idk) dataset for an assistant, which contains its known and unknown questions.
After alignment with Idk datasets, the assistant can refuse to answer most of its unknown questions.
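The Idk-dataset idea can be sketched in a few lines: label a question "known" if the assistant's answer matches a reference, "unknown" otherwise. This is an illustrative sketch only; `ask_assistant` is a hypothetical stand-in for querying the model, and the exact-match labeling rule is an assumption, not the paper's method.

```python
# Illustrative sketch (not the paper's code) of building a model-specific
# "I don't know" (Idk) dataset from (question, reference-answer) pairs.
def build_idk_dataset(qa_pairs, ask_assistant):
    dataset = []
    for question, reference in qa_pairs:
        answer = ask_assistant(question)  # hypothetical model-query function
        # Label as "known" only if the answer matches the reference.
        label = "known" if answer.strip().lower() == reference.strip().lower() else "unknown"
        dataset.append({"question": question, "label": label})
    return dataset
```

Aligning the assistant on such labels would then teach it to decline the "unknown" questions.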
arXiv Detail & Related papers (2024-01-24T07:34:55Z)
- The Future of AI-Assisted Writing [0.0]
We conduct a comparative user study of such tools through an information-retrieval lens: pull and push.
Our findings show that users welcome seamless assistance of AI in their writing.
Users also enjoyed the collaboration with AI-assisted writing tools and did not feel a lack of ownership.
arXiv Detail & Related papers (2023-06-29T02:46:45Z)
- Connecting Humanities and Social Sciences: Applying Language and Speech Technology to Online Panel Surveys [2.0646127669654835]
We explore the application of language and speech technology to open-ended questions in a Dutch panel survey.
In an experimental wave respondents could choose to answer open questions via speech or keyboard.
We report the errors the ASR system produces and investigate the impact of these errors on downstream analyses.
arXiv Detail & Related papers (2023-02-21T10:52:15Z)
- QAConv: Question Answering on Informative Conversations [85.2923607672282]
We focus on informative conversations including business emails, panel discussions, and work channels.
In total, we collect 34,204 QA pairs, including span-based, free-form, and unanswerable questions.
arXiv Detail & Related papers (2021-05-14T15:53:05Z)
- Towards Data Distillation for End-to-end Spoken Conversational Question Answering [65.124088336738]
We propose a new Spoken Conversational Question Answering (SCQA) task.
SCQA aims to enable QA systems to model complex dialogue flows given speech utterances and text corpora.
Our main objective is to build a QA system to deal with conversational questions both in spoken and text forms.
arXiv Detail & Related papers (2020-10-18T05:53:39Z)
- Inquisitive Question Generation for High Level Text Comprehension [60.21497846332531]
We introduce INQUISITIVE, a dataset of 19K questions that are elicited while a person is reading through a document.
We show that readers engage in a series of pragmatic strategies to seek information.
We evaluate question generation models based on GPT-2 and show that our model is able to generate reasonable questions.
arXiv Detail & Related papers (2020-10-04T19:03:39Z)
- Questioning the AI: Informing Design Practices for Explainable AI User Experiences [33.81809180549226]
A surge of interest in explainable AI (XAI) has led to a vast collection of algorithmic work on the topic.
We seek to identify gaps between the current XAI algorithmic work and practices to create explainable AI products.
We develop an algorithm-informed XAI question bank in which user needs for explainability are represented.
arXiv Detail & Related papers (2020-01-08T12:34:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.