Lie-Sensor: A Live Emotion Verifier or a Licensor for Chat Applications using Emotional Intelligence
- URL: http://arxiv.org/abs/2102.11318v1
- Date: Thu, 11 Feb 2021 02:47:30 GMT
- Title: Lie-Sensor: A Live Emotion Verifier or a Licensor for Chat Applications using Emotional Intelligence
- Authors: Falguni Patel, NirmalKumar Patel, Santosh Kumar Bharti
- Abstract summary: Live emotion analysis and verification nullify deceit directed at complainants on live chat.
The main concept behind this emotionally intelligent verifier is to license or decline message accountability.
For emotion detection, we deployed a Convolutional Neural Network (CNN) using a miniXception model.
For text prediction, we selected a Support Vector Machine (SVM) natural language processing probability classifier.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Veracity is an essential key in the research and development of
innovative products. Live emotion analysis and verification nullify deceit
directed at complainants on live chat, corroborate messages at both ends of
messaging apps, and promote honest conversation between users. The main concept
behind this emotionally intelligent verifier is to license or decline message
accountability by comparing the varied emotions of chat-app users recognized
through facial expressions and text prediction. In this paper, the proposed
emotion-intelligent live detector acts as an honest arbiter that sorts facial
emotions into four labels: Happiness, Sadness, Surprise, and Hate. It then
separately predicts a label for each message through text classification.
Finally, it compares both labels and declares the message either a fraud or
bona fide. For emotion detection, we deployed a Convolutional Neural Network
(CNN) using a miniXception model; for text prediction, we selected a Support
Vector Machine (SVM) natural language processing probability classifier, as it
achieved the best accuracy on the training dataset among SVM, Random Forest,
Naive Bayes, and Logistic Regression classifiers.
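The arbiter step described in the abstract can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: the label set follows the paper (Happiness, Sadness, Surprise, Hate), while the function name `verify_message` and the exact agreement rule are assumptions about how the license/decline decision could be implemented.

```python
# Hypothetical sketch of the "honest arbiter": compare the emotion label
# predicted from the sender's face (CNN branch) with the label predicted
# from the message text (SVM branch) and license or decline the message.
# Labels follow the paper; everything else is illustrative.

LABELS = {"Happiness", "Sadness", "Surprise", "Hate"}

def verify_message(facial_label: str, text_label: str) -> str:
    """Return 'bona fide' when both modalities agree, else 'fraud'."""
    if facial_label not in LABELS or text_label not in LABELS:
        raise ValueError("unknown emotion label")
    return "bona fide" if facial_label == text_label else "fraud"

# A smiling user sending a happy message is licensed; a smiling user
# sending a hateful message is flagged as a mismatch.
print(verify_message("Happiness", "Happiness"))  # bona fide
print(verify_message("Happiness", "Hate"))       # fraud
```

In practice the comparison would run on the argmax labels of the two classifiers; a softer variant could compare the full probability distributions instead of the top labels.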
Related papers
- ADEPT: RL-Aligned Agentic Decoding of Emotion via Evidence Probing Tools -- From Consensus Learning to Ambiguity-Driven Emotion Reasoning [67.22219034602514]
We introduce ADEPT (Agentic Decoding of Emotion via Evidence Probing Tools), a framework that reframes emotion recognition as a multi-turn inquiry process. ADEPT transforms an SLLM into an agent that maintains an evolving candidate emotion set and adaptively invokes dedicated semantic and acoustic probing tools. We show that ADEPT improves primary emotion accuracy in most settings while substantially improving minor emotion characterization.
arXiv Detail & Related papers (2026-02-13T08:33:37Z) - Emotion Transfer with Enhanced Prototype for Unseen Emotion Recognition in Conversation [64.70874527264543]
We introduce the Unseen Emotion Recognition in Conversation (UERC) task for the first time. We propose ProEmoTrans, a prototype-based emotion transfer framework. ProEmoTrans shows promise but still faces key challenges.
arXiv Detail & Related papers (2025-08-27T03:16:16Z) - Empaths at SemEval-2025 Task 11: Retrieval-Augmented Approach to Perceived Emotions Prediction [83.88591755871734]
EmoRAG is a system designed to detect perceived emotions in text for SemEval-2025 Task 11, Subtask A: Multi-label Emotion Detection. We focus on predicting the perceived emotions of the speaker from a given text snippet, labeling it with emotions such as joy, sadness, fear, anger, surprise, and disgust.
arXiv Detail & Related papers (2025-06-04T19:41:24Z) - Improving Speech-based Emotion Recognition with Contextual Utterance Analysis and LLMs [2.8728982844941178]
Speech Emotion Recognition (SER) focuses on identifying emotional states from spoken language.
We propose a novel approach that first refines all available transcriptions to ensure data reliability.
We then segment each complete conversation into smaller dialogues and use these dialogues as context to predict the emotion of the target utterance within the dialogue.
arXiv Detail & Related papers (2024-10-27T04:23:34Z) - Towards Empathetic Conversational Recommender Systems [77.53167131692]
We propose an empathetic conversational recommender (ECR) framework.
ECR contains two main modules: emotion-aware item recommendation and emotion-aligned response generation.
Our experiments on the ReDial dataset validate the efficacy of our framework in enhancing recommendation accuracy and improving user satisfaction.
arXiv Detail & Related papers (2024-08-30T15:43:07Z) - Utilizing Speech Emotion Recognition and Recommender Systems for Negative Emotion Handling in Therapy Chatbots [0.0]
This paper proposes an approach to enhance therapy chatbots with auditory perception, enabling them to understand users' feelings and provide human-like empathy.
The proposed method incorporates speech emotion recognition (SER) techniques using CNN models and the ShEMO dataset.
To provide a more immersive and empathetic user experience, a text-to-speech model called GlowTTS is integrated.
arXiv Detail & Related papers (2023-11-18T16:35:55Z) - Language Models (Mostly) Do Not Consider Emotion Triggers When Predicting Emotion [87.18073195745914]
We investigate how well human-annotated emotion triggers correlate with features deemed salient in their prediction of emotions.
Using EmoTrigger, we evaluate the ability of large language models to identify emotion triggers.
Our analysis reveals that emotion triggers are largely not considered salient features by emotion prediction models; instead, there is an intricate interplay between various features in the task of emotion detection.
arXiv Detail & Related papers (2023-11-16T06:20:13Z) - WEARS: Wearable Emotion AI with Real-time Sensor data [0.8740570557632509]
We propose a system to predict user emotion using smartwatch sensors.
We design a framework to collect ground truth in real-time utilizing a mix of English and regional language-based videos.
We also conducted an ablation study to understand the impact of features, including heart rate, accelerometer, and gyroscope sensor data, on mood.
arXiv Detail & Related papers (2023-08-22T11:03:00Z) - Accurate Emotion Strength Assessment for Seen and Unseen Speech Based on Data-Driven Deep Learning [70.30713251031052]
We propose a data-driven deep learning model, i.e. StrengthNet, to improve the generalization of emotion strength assessment for seen and unseen speech.
Experiments show that the predicted emotion strength of the proposed StrengthNet is highly correlated with ground truth scores for both seen and unseen speech.
arXiv Detail & Related papers (2022-06-15T01:25:32Z) - Emotion-Aware Transformer Encoder for Empathetic Dialogue Generation [6.557082555839738]
We propose an emotion-aware transformer encoder for capturing the emotional quotient in the user utterance.
An emotion detector module determines the affective state of the user in the initial phase.
A novel transformer encoder is proposed that adds and normalizes the word embedding with emotion embedding.
arXiv Detail & Related papers (2022-04-24T17:05:36Z) - Chat-Capsule: A Hierarchical Capsule for Dialog-level Emotion Analysis [70.98130990040228]
We propose a Context-based Hierarchical Attention Capsule (Chat-Capsule) model, which models both utterance-level and dialog-level emotions and their interrelations.
On a dialog dataset collected from customer support of an e-commerce platform, our model is also able to predict user satisfaction and emotion curve category.
arXiv Detail & Related papers (2022-03-23T08:04:30Z) - Multimodal Emotion Recognition using Transfer Learning from Speaker Recognition and BERT-based models [53.31917090073727]
We propose a neural network-based emotion recognition framework that uses a late fusion of transfer-learned and fine-tuned models from speech and text modalities.
We evaluate the effectiveness of our proposed multimodal approach on the interactive emotional dyadic motion capture dataset.
arXiv Detail & Related papers (2022-02-16T00:23:42Z) - AdCOFE: Advanced Contextual Feature Extraction in Conversations for emotion classification [0.29360071145551075]
The proposed model of Advanced Contextual Feature Extraction (AdCOFE) addresses these issues.
Experiments on the Emotion recognition in conversations dataset show that AdCOFE is beneficial in capturing emotions in conversations.
arXiv Detail & Related papers (2021-04-09T17:58:19Z) - Learning Emotional-Blinded Face Representations [77.7653702071127]
We propose two face representations that are blind to the facial expressions associated with emotional responses.
This work is motivated by new international regulations for personal data protection.
arXiv Detail & Related papers (2020-09-18T09:24:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.