DialogueLLM: Context and Emotion Knowledge-Tuned Large Language Models
for Emotion Recognition in Conversations
- URL: http://arxiv.org/abs/2310.11374v4
- Date: Wed, 17 Jan 2024 02:44:56 GMT
- Title: DialogueLLM: Context and Emotion Knowledge-Tuned Large Language Models
for Emotion Recognition in Conversations
- Authors: Yazhou Zhang, Mengyao Wang, Youxi Wu, Prayag Tiwari, Qiuchi Li, Benyou
Wang, Jing Qin
- Abstract summary: Large language models (LLMs) have shown extraordinary efficacy across numerous downstream natural language processing (NLP) tasks.
We propose DialogueLLM, a context and emotion knowledge tuned LLM that is obtained by fine-tuning LLaMA models.
We offer a comprehensive evaluation of our proposed model on three benchmark emotion recognition in conversations (ERC) datasets.
- Score: 28.15933355881604
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) and their variants have shown extraordinary
efficacy across numerous downstream natural language processing (NLP) tasks,
which has presented a new vision for the development of NLP. Despite their
remarkable performance in natural language generation (NLG), LLMs lack a
distinct focus on the emotion understanding domain. As a result, using LLMs for
emotion recognition may lead to suboptimal and inadequate precision. Another
limitation of LLMs is that they are typically trained without leveraging
multi-modal information. To overcome these limitations, we propose DialogueLLM,
a context and emotion knowledge tuned LLM that is obtained by fine-tuning LLaMA
models with 13,638 multi-modal (i.e., texts and videos) emotional dialogues.
The visual information is treated as supplementary knowledge for
constructing high-quality instructions. We offer a comprehensive evaluation of our
proposed model on three benchmark emotion recognition in conversations (ERC)
datasets and compare the results against the SOTA baselines and other SOTA
LLMs. Additionally, DialogueLLM-7B can be easily trained using LoRA on a 40GB
A100 GPU in 5 hours, facilitating reproducibility for other researchers.
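The abstract describes fine-tuning a LLaMA model with LoRA on instruction data built from emotion-labelled dialogues. The following is a minimal sketch of that kind of setup using Hugging Face transformers and peft, not the authors' released code: the base checkpoint, data file (erc_instructions.json), prompt template, and hyperparameters are illustrative assumptions.

```python
# Sketch: LoRA fine-tuning of a LLaMA-style model on emotion-labelled dialogue
# instructions. Paths, prompt format, and hyperparameters are assumptions, not the
# paper's actual configuration.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments, Trainer
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Llama-2-7b-hf"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.float16)

# LoRA adapters on the attention projections; rank/alpha are typical values, not the paper's.
lora_cfg = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                      target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)

def build_example(example):
    # Each instruction pairs the dialogue context (which, per the paper, can carry
    # supplementary knowledge derived from the video modality) with the target emotion.
    prompt = (f"### Context:\n{example['context']}\n"
              f"### Utterance:\n{example['utterance']}\n"
              f"### Emotion: {example['emotion']}")
    tokens = tokenizer(prompt, truncation=True, max_length=512, padding="max_length")
    tokens["labels"] = tokens["input_ids"].copy()  # causal-LM loss over the full prompt
    return tokens

dataset = load_dataset("json", data_files="erc_instructions.json")["train"]  # hypothetical file
dataset = dataset.map(build_example, remove_columns=dataset.column_names)

args = TrainingArguments(output_dir="dialogue-llm-lora",
                         per_device_train_batch_size=4,
                         gradient_accumulation_steps=4,
                         num_train_epochs=3,
                         learning_rate=2e-4,
                         fp16=True,
                         logging_steps=50)

Trainer(model=model, args=args, train_dataset=dataset).train()
```

With adapters only on the attention projections, the trainable parameter count stays small, which is consistent with the abstract's claim that the 7B variant can be LoRA-tuned on a single 40GB A100.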