Multi-Modal Emotion Recognition for Enhanced Requirements Engineering: A
Novel Approach
- URL: http://arxiv.org/abs/2306.01492v1
- Date: Fri, 2 Jun 2023 12:37:51 GMT
- Title: Multi-Modal Emotion Recognition for Enhanced Requirements Engineering: A
Novel Approach
- Authors: Ben Cheng, Chetan Arora, Xiao Liu, Thuong Hoang, Yi Wang, John Grundy
- Abstract summary: This paper introduces a multi-modal emotion recognition platform (MEmoRE) to enhance the requirements engineering process.
MEmoRE leverages state-of-the-art emotion recognition techniques, integrating facial expression, vocal intonation, and textual sentiment analysis.
We aim to pave the way for more empathetic, effective, and successful software development processes.
- Score: 12.906871276817775
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Requirements engineering (RE) plays a crucial role in developing software
systems by bridging the gap between stakeholders' needs and system
specifications. However, effective communication and elicitation of stakeholder
requirements can be challenging, as traditional RE methods often overlook
emotional cues. This paper introduces a multi-modal emotion recognition
platform (MEmoRE) to enhance the requirements engineering process by capturing
and analyzing the emotional cues of stakeholders in real-time. MEmoRE leverages
state-of-the-art emotion recognition techniques, integrating facial expression,
vocal intonation, and textual sentiment analysis to comprehensively understand
stakeholder emotions. This multi-modal approach ensures the accurate and timely
detection of emotional cues, enabling requirements engineers to tailor their
elicitation strategies and improve overall communication with stakeholders. We
further intend to employ our platform for later RE stages, such as requirements
reviews and usability testing. By integrating multi-modal emotion recognition
into requirements engineering, we aim to pave the way for more empathetic,
effective, and successful software development processes. We performed a
preliminary evaluation of our platform. This paper reports on the platform
design, preliminary evaluation, and future development plan as an ongoing
project.
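The abstract does not detail how MEmoRE combines its three modalities. As a rough illustration only, a weighted late fusion of per-modality emotion probability distributions might look like the following sketch; the emotion labels, fusion weights, and scores are all hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical emotion inventory for a requirements-elicitation session.
EMOTIONS = ["anger", "confusion", "neutral", "satisfaction"]

def fuse_emotions(facial, vocal, textual, weights=(0.4, 0.3, 0.3)):
    """Weighted late fusion of per-modality emotion probability vectors."""
    stacked = np.stack([facial, vocal, textual])   # shape (3, n_emotions)
    w = np.asarray(weights)[:, None]               # shape (3, 1)
    fused = (w * stacked).sum(axis=0)
    return fused / fused.sum()                     # renormalize to a distribution

# Toy per-modality predictions (illustrative values only).
facial  = np.array([0.10, 0.60, 0.20, 0.10])  # e.g. furrowed brow -> confusion
vocal   = np.array([0.05, 0.50, 0.35, 0.10])
textual = np.array([0.05, 0.40, 0.45, 0.10])

fused = fuse_emotions(facial, vocal, textual)
dominant = EMOTIONS[int(np.argmax(fused))]
print(dominant)  # -> "confusion": a cue to slow down and re-explain
```

A detected dominant emotion such as "confusion" could then prompt the requirements engineer to rephrase a question or revisit a specification item.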
Related papers
- ECR-Chain: Advancing Generative Language Models to Better Emotion-Cause Reasoners through Reasoning Chains [61.50113532215864]
Causal Emotion Entailment (CEE) aims to identify the causal utterances in a conversation that stimulate the emotions expressed in a target utterance.
Current works in CEE mainly focus on modeling semantic and emotional interactions in conversations.
We introduce a step-by-step reasoning method, Emotion-Cause Reasoning Chain (ECR-Chain), to infer the stimulus from the target emotional expressions in conversations.
arXiv Detail & Related papers (2024-05-17T15:45:08Z)
- Building Emotional Support Chatbots in the Era of LLMs [64.06811786616471]
We introduce an innovative methodology that synthesizes human insights with the computational prowess of Large Language Models (LLMs).
By utilizing the in-context learning potential of ChatGPT, we generate an ExTensible Emotional Support dialogue dataset, named ExTES.
Following this, we deploy advanced tuning techniques on the LLaMA model, examining the impact of diverse training strategies, ultimately yielding an LLM meticulously optimized for emotional support interactions.
arXiv Detail & Related papers (2023-08-17T10:49:18Z)
- Emotions in Requirements Engineering: A Systematic Mapping Study [2.534053759586253]
The purpose of requirements engineering (RE) is to make sure that the expectations and needs of the stakeholders of a software system are met.
Emotional needs can be captured as emotional requirements that represent how the end user should feel when using the system.
This study is motivated by the need to explore and map the literature on emotional requirements.
arXiv Detail & Related papers (2023-05-25T14:24:36Z)
- EmotionIC: Emotional Inertia and Contagion-Driven Dependency Modeling for Emotion Recognition in Conversation [34.24557248359872]
We propose an emotional inertia and contagion-driven dependency modeling approach (EmotionIC) for the ERC task.
Our EmotionIC consists of three main components: Identity Masked Multi-Head Attention (IMMHA), Dialogue-based Gated Recurrent Unit (DiaGRU), and Skip-chain Conditional Random Field (SkipCRF).
Experimental results show that our method can significantly outperform the state-of-the-art models on four benchmark datasets.
arXiv Detail & Related papers (2023-03-20T13:58:35Z)
- Improving Multi-turn Emotional Support Dialogue Generation with Lookahead Strategy Planning [81.79431311952656]
We propose MultiESC, a novel system for providing emotional support.
For strategy planning, we propose lookaheads to estimate the future user feedback after using particular strategies.
For user state modeling, MultiESC focuses on capturing users' subtle emotional expressions and understanding their emotion causes.
arXiv Detail & Related papers (2022-10-09T12:23:47Z)
- A Unified Framework for Emotion Identification and Generation in Dialogues [5.102770724328495]
We propose a multi-task framework that jointly identifies the emotion of a given dialogue and generates response in accordance to the identified emotion.
We employ a BERT based network for creating an empathetic system and use a mixed objective function that trains the end-to-end network with both the classification and generation loss.
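As a rough sketch of such a mixed objective (the weighting factor, loss form, and toy distributions below are hypothetical illustrations, not taken from the paper), a classification loss on the dialogue emotion can be combined with a per-token generation loss:

```python
import numpy as np

def cross_entropy(probs, target_idx):
    """Negative log-likelihood of the target index under a probability vector."""
    return -np.log(probs[target_idx])

def mixed_objective(emotion_probs, emotion_label, token_probs, token_ids, lam=0.5):
    """Mixed objective: emotion classification loss + lam * mean generation loss.

    The additive weighting with `lam` is an illustrative assumption.
    """
    cls_loss = cross_entropy(emotion_probs, emotion_label)
    gen_loss = np.mean([cross_entropy(p, t) for p, t in zip(token_probs, token_ids)])
    return cls_loss + lam * gen_loss

# Toy predictions: one emotion distribution, two generation steps over a
# two-token vocabulary (values are illustrative only).
emotion_probs = np.array([0.7, 0.2, 0.1])
token_probs = [np.array([0.9, 0.1]), np.array([0.2, 0.8])]
loss = mixed_objective(emotion_probs, emotion_label=0,
                       token_probs=token_probs, token_ids=[0, 1])
```

Training the shared network against this single scalar lets gradients from both tasks shape the same representation, which is the usual motivation for such joint objectives.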
arXiv Detail & Related papers (2022-05-31T02:58:49Z)
- Multimodal Emotion Recognition using Transfer Learning from Speaker Recognition and BERT-based models [53.31917090073727]
We propose a neural network-based emotion recognition framework that uses a late fusion of transfer-learned and fine-tuned models from speech and text modalities.
We evaluate the effectiveness of our proposed multimodal approach on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) dataset.
arXiv Detail & Related papers (2022-02-16T00:23:42Z)
- Emotion Recognition from Multiple Modalities: Fundamentals and Methodologies [106.62835060095532]
We discuss several key aspects of multi-modal emotion recognition (MER).
We begin with a brief introduction on widely used emotion representation models and affective modalities.
We then summarize existing emotion annotation strategies and corresponding computational tasks.
Finally, we outline several real-world applications and discuss some future directions.
arXiv Detail & Related papers (2021-08-18T21:55:20Z)
- Reinforcement Learning for Emotional Text-to-Speech Synthesis with Improved Emotion Discriminability [82.39099867188547]
Emotional text-to-speech synthesis (ETTS) has seen much progress in recent years.
We propose a new interactive training paradigm for ETTS, denoted as i-ETTS.
We formulate an iterative training strategy with reinforcement learning to ensure the quality of i-ETTS optimization.
arXiv Detail & Related papers (2021-04-03T13:52:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.