FaceChat: An Emotion-Aware Face-to-face Dialogue Framework
- URL: http://arxiv.org/abs/2303.07316v1
- Date: Wed, 8 Mar 2023 20:45:37 GMT
- Title: FaceChat: An Emotion-Aware Face-to-face Dialogue Framework
- Authors: Deema Alnuhait, Qingyang Wu, Zhou Yu
- Abstract summary: FaceChat is a web-based dialogue framework that enables emotionally-sensitive and face-to-face conversations.
The system has a wide range of potential applications, including counseling, emotional support, and personalized customer service.
- Score: 58.67608580694849
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While current dialogue systems like ChatGPT have made significant
advancements in text-based interactions, they often overlook the potential of
other modalities in enhancing the overall user experience. We present FaceChat,
a web-based dialogue framework that enables emotionally-sensitive and
face-to-face conversations. By seamlessly integrating cutting-edge technologies
in natural language processing, computer vision, and speech processing,
FaceChat delivers a highly immersive and engaging user experience. The FaceChat
framework has a wide range of potential applications, including counseling,
emotional support, and personalized customer service. The system is designed to
be simple and flexible as a platform for future researchers to advance the
field of multimodal dialogue systems. The code is publicly available at
https://github.com/qywu/FaceChat.
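The abstract describes a per-turn pipeline that fuses vision, speech, and language components. Below is a minimal sketch of what such a turn loop can look like; the stub functions are illustrative assumptions standing in for real models, not the actual FaceChat implementation (see the repository above for that).

```python
# Minimal sketch of one emotion-aware dialogue turn. All components are
# stubs standing in for real vision, speech, and language models; the
# actual FaceChat implementation is in the repository linked above.
from dataclasses import dataclass, field

@dataclass
class DialogueState:
    history: list = field(default_factory=list)  # (speaker, text) pairs

def detect_emotion(frame) -> str:
    """Stub facial-emotion classifier (computer vision component)."""
    return "neutral"

def transcribe(audio) -> str:
    """Stub speech recognizer (speech processing component)."""
    return "Hello, how are you?"

def generate_reply(history, user_text, emotion) -> str:
    """Stub response generator (NLP component); a real system would
    prompt a language model with the history and detected emotion."""
    return f"(to a {emotion} user) I'm doing well, thank you for asking!"

def dialogue_turn(state: DialogueState, frame, audio) -> str:
    emotion = detect_emotion(frame)    # what the user's face shows
    user_text = transcribe(audio)      # what the user said
    reply = generate_reply(state.history, user_text, emotion)
    state.history += [("user", user_text), ("system", reply)]
    return reply  # downstream: text-to-speech and a talking-face renderer

state = DialogueState()
print(dialogue_turn(state, frame=None, audio=None))
```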
Related papers
- AVIN-Chat: An Audio-Visual Interactive Chatbot System with Emotional State Tuning [9.989693906734535]
AVIN-Chat allows users to have face-to-face conversations with 3D avatars in real time.
The proposed AVIN-Chat speaks and expresses emotion in accordance with the user's emotional state.
arXiv Detail & Related papers (2024-08-15T22:45:53Z)
- Let's Go Real Talk: Spoken Dialogue Model for Face-to-Face Conversation [55.043492250775294]
We introduce a novel Face-to-Face spoken dialogue model.
It processes audio-visual speech from user input and generates audio-visual speech as the response.
We also introduce MultiDialog, the first large-scale multimodal spoken dialogue corpus.
arXiv Detail & Related papers (2024-06-12T04:48:36Z)
- SAPIEN: Affective Virtual Agents Powered by Large Language Models [2.423280064224919]
We introduce SAPIEN, a platform for high-fidelity virtual agents driven by large language models.
The platform allows users to customize their virtual agent's personality, background, and conversation premise.
After the virtual meeting, the user can choose to get the conversation analyzed and receive actionable feedback on their communication skills.
arXiv Detail & Related papers (2023-08-06T05:13:16Z)
- InternGPT: Solving Vision-Centric Tasks by Interacting with ChatGPT Beyond Language [82.92236977726655]
We present an interactive visual framework named InternGPT, or iGPT for short.
The name InternGPT stands for interaction, nonverbal, and chatbots.
arXiv Detail & Related papers (2023-05-09T17:58:34Z)
- ChatLLM Network: More brains, More intelligence [42.65167827451101]
We propose the ChatLLM network, which allows multiple dialogue-based language models to interact, provide feedback, and think together.
We show that the network attains significant improvements in problem solving, with observable progress for each member (a toy sketch of the interaction pattern follows this entry).
arXiv Detail & Related papers (2023-04-24T08:29:14Z)
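As a toy illustration of the network-of-models pattern described above: several members answer independently and a majority vote aggregates them. This is a sketch only; the paper's members exchange feedback over multiple rounds, collapsed here into a single vote, and `member_answer` is a hypothetical stand-in for a real model call.

```python
# Toy sketch: several "members" answer independently, then a simple
# majority vote aggregates them. The real ChatLLM Network adds rounds
# of interaction and feedback between members, omitted here.
from collections import Counter

def member_answer(member_id: int, question: str) -> str:
    """Hypothetical stand-in for one dialogue-based language model."""
    return "42" if member_id % 2 == 0 else "43"

def network_answer(question: str, n_members: int = 3) -> str:
    answers = [member_answer(i, question) for i in range(n_members)]
    return Counter(answers).most_common(1)[0][0]  # majority vote

print(network_answer("What is 6 * 7?"))  # -> "42"
```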
- Emotionally Enhanced Talking Face Generation [52.07451348895041]
We build a talking face generation framework conditioned on a categorical emotion to generate videos with appropriate expressions.
We show that our model can adapt to arbitrary identities, emotions, and languages.
Our framework provides a user-friendly web interface with a real-time experience for emotional talking face generation.
arXiv Detail & Related papers (2023-03-21T02:33:27Z)
- Improving Multi-turn Emotional Support Dialogue Generation with Lookahead Strategy Planning [81.79431311952656]
We propose MultiESC, a novel system for providing emotional support.
For strategy planning, we propose lookahead heuristics to estimate the future user feedback after using particular strategies.
For user state modeling, MultiESC focuses on capturing users' subtle emotional expressions and understanding their emotion causes (a minimal sketch of the lookahead idea follows this entry).
arXiv Detail & Related papers (2022-10-09T12:23:47Z)
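A minimal sketch of the lookahead idea from the MultiESC entry above: score each candidate support strategy by the user feedback it is expected to produce, and pick the best. The `estimate_future_feedback` function and its hard-coded scores are hypothetical stand-ins for the learned predictor the paper trains.

```python
# Toy one-step lookahead over support strategies. The scores are
# hard-coded stand-ins; MultiESC learns to predict future user feedback.
STRATEGIES = ["question", "reflection", "suggestion", "reassurance"]

def estimate_future_feedback(history: list, strategy: str) -> float:
    """Hypothetical stub: predict how much the user's state improves
    if the system responds with `strategy` next."""
    toy_scores = {"question": 0.3, "reflection": 0.6,
                  "suggestion": 0.4, "reassurance": 0.7}
    return toy_scores[strategy]

def plan_strategy(history: list) -> str:
    # Pick the strategy with the best estimated future feedback.
    return max(STRATEGIES, key=lambda s: estimate_future_feedback(history, s))

print(plan_strategy([]))  # -> "reassurance"
```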
- A Unified Framework for Emotion Identification and Generation in Dialogues [5.102770724328495]
We propose a multi-task framework that jointly identifies the emotion of a given dialogue and generates a response in accordance with the identified emotion.
We employ a BERT-based network for creating an empathetic system and use a mixed objective function that trains the end-to-end network with both the classification and generation losses (a toy version of this mixed loss is sketched below).
arXiv Detail & Related papers (2022-05-31T02:58:49Z)
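A toy PyTorch version of such a mixed objective: a token-level generation loss plus a weighted emotion-classification loss. The weighting `alpha` and the tensor shapes are illustrative assumptions, not the paper's settings.

```python
# Toy mixed objective: token-level generation loss plus a weighted
# emotion-classification loss. Shapes and weighting are illustrative only.
import torch
import torch.nn.functional as F

def mixed_loss(cls_logits, emotion_labels, gen_logits, response_tokens,
               alpha: float = 0.5):
    l_cls = F.cross_entropy(cls_logits, emotion_labels)  # emotion id
    l_gen = F.cross_entropy(gen_logits.flatten(0, 1),    # generation
                            response_tokens.flatten())
    return l_gen + alpha * l_cls

# Toy batch: 2 dialogues, 4 emotion classes, 5 response tokens, vocab 100.
cls_logits = torch.randn(2, 4)
emotions = torch.tensor([1, 3])
gen_logits = torch.randn(2, 5, 100)
tokens = torch.randint(0, 100, (2, 5))
print(mixed_loss(cls_logits, emotions, gen_logits, tokens).item())
```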
- Chat-Capsule: A Hierarchical Capsule for Dialog-level Emotion Analysis [70.98130990040228]
We propose a Context-based Hierarchical Attention Capsule (Chat-Capsule) model, which models both utterance-level and dialog-level emotions and their interrelations.
On a dialog dataset collected from the customer support of an e-commerce platform, our model can also predict user satisfaction and the emotion curve category (a minimal sketch of the two-level hierarchy follows this entry).
arXiv Detail & Related papers (2022-03-23T08:04:30Z)
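To illustrate the two-level structure in the Chat-Capsule entry above, here is a minimal sketch that predicts one emotion per utterance and one for the whole dialog. Mean pooling deliberately replaces the paper's capsule and attention machinery, so this shows only the hierarchy, not the model.

```python
# Minimal two-level (utterance -> dialog) emotion model. Mean pooling
# replaces Chat-Capsule's capsule/attention machinery; only the
# hierarchical structure is illustrated here.
import torch
import torch.nn as nn

class TwoLevelEmotionModel(nn.Module):
    def __init__(self, vocab_size=100, dim=32, n_emotions=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.utt_head = nn.Linear(dim, n_emotions)  # utterance-level emotion
        self.dlg_head = nn.Linear(dim, n_emotions)  # dialog-level emotion

    def forward(self, dialog):
        # dialog: list of 1-D LongTensors of token ids, one per utterance
        utt_vecs = torch.stack([self.embed(u).mean(0) for u in dialog])
        utt_logits = self.utt_head(utt_vecs)          # one row per utterance
        dlg_logits = self.dlg_head(utt_vecs.mean(0))  # pooled over dialog
        return utt_logits, dlg_logits

model = TwoLevelEmotionModel()
dialog = [torch.randint(0, 100, (7,)), torch.randint(0, 100, (5,))]
utt, dlg = model(dialog)
print(utt.shape, dlg.shape)  # torch.Size([2, 4]) torch.Size([4])
```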
This list is automatically generated from the titles and abstracts of the papers in this site.