Using Voice and Biofeedback to Predict User Engagement during
Requirements Interviews
- URL: http://arxiv.org/abs/2104.02410v1
- Date: Tue, 6 Apr 2021 10:34:36 GMT
- Title: Using Voice and Biofeedback to Predict User Engagement during
Requirements Interviews
- Authors: Alessio Ferrari, Thaide Huichapa, Paola Spoletini, Nicole Novielli,
Davide Fucci, Daniela Girardi
- Abstract summary: We propose to utilize biometric data, in terms of physiological and voice features, to complement interviews with information about user engagement.
We evaluate our approach by interviewing users while gathering their physiological data using an Empatica E4 wristband.
Our results show that we can predict users' engagement by training supervised machine learning algorithms on biometric data.
- Score: 11.277063517143565
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Capturing users' engagement is crucial for gathering feedback about the
features of a software product. In a market-driven context, current approaches
to collect and analyze users' feedback are based on techniques leveraging
information extracted from product reviews and social media. These approaches
are hardly applicable in bespoke software development, or in contexts in which
one needs to gather information from specific users. In such cases, companies
need to resort to face-to-face interviews to get feedback on their products. In
this paper, we propose to utilize biometric data, in terms of physiological and
voice features, to complement interviews with information about the user's
engagement with the product-relevant topics under discussion. We evaluate our approach
by interviewing users while gathering their physiological data (i.e.,
biofeedback) using an Empatica E4 wristband, and capturing their voice through
the default audio-recorder of a common laptop. Our results show that we can
predict users' engagement by training supervised machine learning algorithms on
biometric data, and that voice features alone can be sufficiently effective.
The performance of the prediction algorithms is maximised when pre-processing
the training data with the synthetic minority oversampling technique (SMOTE).
The results of our work suggest that biofeedback and voice analysis can be used
to facilitate the prioritization of requirements aimed at product improvement,
and to steer the interview based on users' engagement. Furthermore, the usage
of voice features can be particularly helpful for emotion-aware requirements
elicitation in remote communication, either performed by human analysts or
voice-based chatbots.
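
As a concrete illustration of the pipeline the abstract describes, the following is a minimal sketch of training a supervised classifier on biometric features with SMOTE applied to the training data. All specifics are assumptions for illustration: the feature names, the synthetic stand-in data, the random-forest model, and the scikit-learn/imbalanced-learn libraries are not taken from the paper.

# Minimal sketch (Python): engagement prediction from biometric features
# with SMOTE oversampling of the minority class. Assumed setup, not the
# authors' released code.
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical per-segment features: physiological channels from a wristband
# (e.g., electrodermal activity, blood volume pulse, heart rate, skin
# temperature) concatenated with voice features (e.g., pitch and energy
# statistics). Random stand-in data is used here.
n_segments, n_features = 200, 8
X = rng.normal(size=(n_segments, n_features))
# Imbalanced labels: 1 = engaged (minority class), 0 = not engaged.
y = (rng.random(n_segments) < 0.2).astype(int)

# SMOTE sits inside the pipeline so synthetic minority samples are generated
# only from the training folds of each split, never from held-out data.
clf = Pipeline([
    ("scale", StandardScaler()),
    ("smote", SMOTE(random_state=0)),
    ("model", RandomForestClassifier(n_estimators=200, random_state=0)),
])

scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
print(f"Mean F1 across folds: {scores.mean():.2f}")

Applying SMOTE inside the cross-validation pipeline, rather than oversampling the whole dataset up front, keeps synthetic copies of test-fold samples out of training; the abstract's observation that voice features alone can be effective would correspond to simply restricting the columns of X.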
Related papers
- Predictive Speech Recognition and End-of-Utterance Detection Towards Spoken Dialog Systems [55.99999020778169]
We study a function that can predict the forthcoming words and estimate the time remaining until the end of an utterance.
We develop a cross-attention-based algorithm that incorporates both acoustic and linguistic information.
Results demonstrate the proposed model's ability to predict upcoming words and estimate future EOU events up to 300ms prior to the actual EOU.
arXiv Detail & Related papers (2024-09-30T06:29:58Z)
- InsightPulse: An IoT-based System for User Experience Interview Analysis [1.7533975800877244]
This paper introduces InsightPulse, an Internet of Things (IoT)-based hardware and software system designed to streamline and enhance the UX interview process through speech analysis and Artificial Intelligence.
InsightPulse provides real-time support during user interviews by automatically identifying and highlighting key discussion points, proactively suggesting follow-up questions, and generating thematic summaries.
The system features a robust backend analytics dashboard that simplifies the post-interview review process, thus facilitating the quick extraction of actionable insights and enhancing overall UX research efficiency.
arXiv Detail & Related papers (2024-09-23T21:39:34Z)
- Constraining Participation: Affordances of Feedback Features in Interfaces to Large Language Models [49.74265453289855]
Large language models (LLMs) are now accessible to anyone with a computer, a web browser, and an internet connection via browser-based interfaces.
This paper examines the affordances of interactive feedback features in ChatGPT's interface, analysing how they shape user input and participation in iteration.
arXiv Detail & Related papers (2024-08-27T13:50:37Z)
- UltraFeedback: Boosting Language Models with Scaled AI Feedback [99.4633351133207]
We present UltraFeedback, a large-scale, high-quality, and diversified AI feedback dataset.
Our work validates the effectiveness of scaled AI feedback data in constructing strong open-source chat language models.
arXiv Detail & Related papers (2023-10-02T17:40:01Z)
- Talk the Walk: Synthetic Data Generation for Conversational Music Recommendation [62.019437228000776]
We present TalkWalk, which generates realistic high-quality conversational data by leveraging encoded expertise in widely available item collections.
We generate over one million diverse conversations in a human-collected dataset.
arXiv Detail & Related papers (2023-01-27T01:54:16Z)
- Dehumanizing Voice Technology: Phonetic & Experiential Consequences of Restricted Human-Machine Interaction [0.0]
We show that requests lead to an increase in phonetic convergence and lower phonetic latency, and ultimately a more natural task experience for consumers.
We provide evidence that altering the required input to initiate a conversation with smart objects provokes systematic changes both in terms of consumers' subjective experience and objective phonetic changes in the human voice.
arXiv Detail & Related papers (2021-11-02T22:49:25Z)
- Building a Noisy Audio Dataset to Evaluate Machine Learning Approaches for Automatic Speech Recognition Systems [0.0]
This work presents the process of building a dataset of noisy audio recordings, specifically recordings degraded by interference.
We also present initial results of a classifier that uses such data for evaluation, indicating the benefits of using this dataset in the recognizer's training process.
arXiv Detail & Related papers (2021-10-04T13:08:53Z)
- Experiences with the Introduction of AI-based Tools for Moderation Automation of Voice-based Participatory Media Forums [0.5243067689245634]
We introduce AI tools to filter out blank or noisy audios, use speech recognition to transcribe the voice messages in text, and use natural language processing techniques to extract metadata from the audio transcripts.
We present our findings in terms of the time and cost-savings made through the introduction of these tools, and describe the feedback of the moderators towards the acceptability of AI-based automation in their workflow.
Our work forms a case study in the use of AI for the automation of several routine tasks, and can be especially relevant for other researchers and practitioners involved with the use of voice-based technologies in developing regions of the world.
arXiv Detail & Related papers (2021-08-09T17:50:33Z)
- An Overview of Deep-Learning-Based Audio-Visual Speech Enhancement and Separation [57.68765353264689]
Speech enhancement and speech separation are two related tasks.
Traditionally, these tasks have been tackled using signal processing and machine learning techniques.
More recently, deep learning has been exploited to achieve strong performance.
arXiv Detail & Related papers (2020-08-21T17:24:09Z)
- Leveraging Multimodal Behavioral Analytics for Automated Job Interview Performance Assessment and Feedback [0.5872014229110213]
Behavioral cues play a significant part in human communication and cognitive perception.
We propose a multimodal analytical framework that analyzes the candidate in an interview scenario.
We use these multimodal data sources to construct a composite representation, which is used for training machine learning classifiers to predict the class labels.
arXiv Detail & Related papers (2020-06-14T14:20:42Z)
- IART: Intent-aware Response Ranking with Transformers in Information-seeking Conversation Systems [80.0781718687327]
We analyze user intent patterns in information-seeking conversations and propose an intent-aware neural response ranking model, "IART".
IART is built on top of the integration of user intent modeling and language representation learning with the Transformer architecture.
arXiv Detail & Related papers (2020-02-03T05:59:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.