Conversational User Interfaces for Blind Knowledge Workers: A Case Study
- URL: http://arxiv.org/abs/2006.07519v2
- Date: Tue, 13 Oct 2020 21:06:00 GMT
- Title: Conversational User Interfaces for Blind Knowledge Workers: A Case Study
- Authors: Kyle Dent and Kalai Ramea
- Abstract summary: Modern trends in interface design for office equipment using controls on touch surfaces create greater obstacles for blind and visually impaired users.
We present a case study of our work to develop a conversational user interface for accessibility for multifunction printers.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern trends in interface design for office equipment using controls on
touch surfaces create greater obstacles for blind and visually impaired users
and contribute to an environment of dependency in work settings. We believe
that conversational user interfaces (CUIs) offer a reasonable
alternative to touchscreen interactions enabling more access and most
importantly greater independence for blind knowledge workers. We present a case
study of our work to develop a conversational user interface for accessibility
for multifunction printers. We also describe our approach to conversational
interfaces in general, which emphasizes task-based collaborative interactions
between people and intelligent agents, and we detail the specifics of the
solution we created for multifunction printers. To guide our design, we worked
with a group of blind and visually impaired individuals starting with focus
group sessions to ascertain the challenges our target users face in their
professional lives. We followed our technology development with a user study to
assess the solution and direct our future efforts. We present our findings and
conclusions from the study.
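The abstract describes task-based collaborative interactions between a blind user and an intelligent agent operating a multifunction printer, but does not publish an implementation. As a minimal illustrative sketch only (the CopyTask slots, prompts, and the listen/speak helpers are assumptions, not the authors' design), one way to model such an interaction is intent-driven slot filling with spoken confirmation:

```python
# Minimal illustrative sketch of a task-based conversational interaction with a
# multifunction printer (MFP). The CopyTask slots, prompts, and the listen/speak
# helpers are hypothetical; the paper does not publish its implementation.

from dataclasses import dataclass
from typing import Optional

@dataclass
class CopyTask:
    """Slots the agent must fill before it can run a copy job."""
    copies: Optional[int] = None
    sides: Optional[str] = None   # "one-sided" or "two-sided"
    color: Optional[str] = None   # "color" or "black and white"

    def missing_slots(self):
        return [name for name, value in vars(self).items() if value is None]

PROMPTS = {
    "copies": "How many copies would you like?",
    "sides": "Should that be one-sided or two-sided?",
    "color": "Color or black and white?",
}

def parse_slot(slot, answer):
    """Very rough keyword parsing; a real system would use an NLU component."""
    if slot == "copies":
        digits = [tok for tok in answer.split() if tok.isdigit()]
        return int(digits[0]) if digits else None
    if slot == "sides":
        return "two-sided" if "two" in answer.lower() else "one-sided"
    if slot == "color":
        return "black and white" if "black" in answer.lower() else "color"

def run_copy_dialog(listen, speak):
    """Collaboratively fill the task slots, asking only for what is still missing."""
    task = CopyTask()
    speak("Sure, let's set up a copy job.")
    while task.missing_slots():
        slot = task.missing_slots()[0]
        speak(PROMPTS[slot])
        setattr(task, slot, parse_slot(slot, listen()))
    speak(f"Making {task.copies} {task.color}, {task.sides} copies. Say 'go' to start.")
    return task

# Example run with canned answers standing in for speech recognition:
answers = iter(["3 copies, please", "two-sided", "black and white"])
run_copy_dialog(listen=lambda: next(answers), speak=print)
```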
Related papers
- Navigating the Unknown: A Chat-Based Collaborative Interface for Personalized Exploratory Tasks [35.09558253658275]
This paper introduces the Collaborative Assistant for Personalized Exploration (CARE)
CARE is a system designed to enhance personalization in exploratory tasks by combining a multi-agent LLM framework with a structured user interface.
Our findings highlight CARE's potential to transform LLM-based systems from passive information retrievers to proactive partners in personalized problem-solving and exploration.
arXiv Detail & Related papers (2024-10-31T15:30:55Z)
- Survey of User Interface Design and Interaction Techniques in Generative AI Applications [79.55963742878684]
We aim to create a compendium of different user-interaction patterns that can be used as a reference for designers and developers alike.
We also strive to lower the entry barrier for those attempting to learn more about the design of generative AI applications.
arXiv Detail & Related papers (2024-10-28T23:10:06Z)
- Constraining Participation: Affordances of Feedback Features in Interfaces to Large Language Models [49.74265453289855]
Large language models (LLMs) are now accessible to anyone with a computer, a web browser, and an internet connection via browser-based interfaces.
This paper examines the affordances of interactive feedback features in ChatGPT's interface, analysing how they shape user input and participation in iteration.
arXiv Detail & Related papers (2024-08-27T13:50:37Z)
- Supporting Experts with a Multimodal Machine-Learning-Based Tool for Human Behavior Analysis of Conversational Videos [40.30407535831779]
We developed Providence, a visual-programming-based tool based on design considerations derived from a formative study with experts.
It enables experts to combine various machine learning algorithms to capture human behavioral cues without writing code.
Our study showed favorable usability and satisfactory output, with less cognitive load imposed when accomplishing scene-search tasks over conversations.
arXiv Detail & Related papers (2024-02-17T00:27:04Z)
- Enhancing HOI Detection with Contextual Cues from Large Vision-Language Models [56.257840490146]
ConCue is a novel approach for improving visual feature extraction in HOI detection.
We develop a transformer-based feature extraction module with a multi-tower architecture that integrates contextual cues into both instance and interaction detectors.
arXiv Detail & Related papers (2023-11-26T09:11:32Z)
- UPREVE: An End-to-End Causal Discovery Benchmarking System [24.303130018154388]
We present Upload, PREprocess, Visualize, and Evaluate (UPREVE), a user-friendly web-based graphical user interface (GUI).
UPREVE allows users to run multiple algorithms simultaneously, visualize causal relationships, and evaluate the accuracy of learned causal graphs.
Our proposed solution aims to make causal discovery more accessible and user-friendly, enabling users to gain valuable insights for better decision-making.
arXiv Detail & Related papers (2023-07-25T18:30:41Z)
- First Contact: Unsupervised Human-Machine Co-Adaptation via Mutual Information Maximization [112.40598205054994]
We formalize the idea of human-machine co-adaptation as a completely unsupervised objective for optimizing interfaces.
We conduct an observational study on 540K examples of users operating various keyboard and eye gaze interfaces for typing, controlling simulated robots, and playing video games.
The results show that our mutual information scores are predictive of the ground-truth task completion metrics in a variety of domains (a minimal illustration of this scoring idea follows this list).
arXiv Detail & Related papers (2022-05-24T21:57:18Z)
- X2T: Training an X-to-Text Typing Interface with Online Learning from User Feedback [83.95599156217945]
We focus on assistive typing applications in which a user cannot operate a keyboard, but can supply other inputs.
Standard methods train a model on a fixed dataset of user inputs, then deploy a static interface that does not learn from its mistakes.
We investigate a simple idea that would enable such interfaces to improve over time, with minimal additional effort from the user.
arXiv Detail & Related papers (2022-03-04T00:07:20Z)
- Bringing Cognitive Augmentation to Web Browsing Accessibility [69.62988485669146]
We explore opportunities brought by cognitive augmentation to provide a more natural and accessible web browsing experience.
We develop a conceptual framework for supporting the conversational web browsing needs of blind and visually impaired people (BVIP).
We describe our early work and a prototype that considers structural and content features only.
arXiv Detail & Related papers (2020-12-07T14:40:52Z)
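For the "First Contact" entry above, the unsupervised objective scores an interface by the mutual information between the user's inputs and the interface's outputs. The sketch below illustrates only that scoring idea, using a simple discrete plug-in estimator on toy data; the paper's actual estimator, feature spaces, and data pipeline are not reproduced here.

```python
# Sketch of scoring an interface by the mutual information between user inputs
# and interface outputs, as in the "First Contact" entry above. The discrete
# plug-in estimator and the toy data are assumptions made for illustration.

from collections import Counter
from math import log2

def mutual_information(pairs):
    """Plug-in estimate of I(X; Y) in bits from (input, output) samples."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    mi = 0.0
    for (x, y), count in joint.items():
        p_xy = count / n
        mi += p_xy * log2(p_xy / ((px[x] / n) * (py[y] / n)))
    return mi

# Toy comparison: an interface whose outputs track the user's commands carries
# more information about them than one that ignores them, so it scores higher.
responsive = [("left", "left"), ("right", "right")] * 50
ignoring = [("left", "stay"), ("right", "stay")] * 50
print(mutual_information(responsive))  # ~1.0 bit
print(mutual_information(ignoring))    # 0.0 bits
```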
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.