UniEmo: Unifying Emotional Understanding and Generation with Learnable Expert Queries
- URL: http://arxiv.org/abs/2507.23372v1
- Date: Thu, 31 Jul 2025 09:39:27 GMT
- Title: UniEmo: Unifying Emotional Understanding and Generation with Learnable Expert Queries
- Authors: Yijie Zhu, Lingsen Zhang, Zitong Yu, Rui Shao, Tao Tan, Liqiang Nie
- Abstract summary: We propose a unified framework that seamlessly integrates emotional understanding and generation. We show that UniEmo significantly outperforms state-of-the-art methods in both emotional understanding and generation tasks.
- Score: 61.5273479616832
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Emotional understanding and generation are often treated as separate tasks, yet they are inherently complementary and can mutually enhance each other. In this paper, we propose UniEmo, a unified framework that seamlessly integrates these two tasks. The key challenge lies in the abstract nature of emotions, which necessitates extracting visual representations beneficial for both tasks. To address this, we propose a hierarchical emotional understanding chain with learnable expert queries that progressively extracts multi-scale emotional features, thereby serving as a foundational step for unification. Simultaneously, we fuse these expert queries and emotional representations to guide the diffusion model in generating emotion-evoking images. To enhance the diversity and fidelity of the generated emotional images, we further introduce an emotional correlation coefficient and an emotional condition loss into the fusion process. This step facilitates fusion and alignment for emotional generation guided by the understanding. In turn, we demonstrate that joint training allows the generation component to provide implicit feedback to the understanding part. Furthermore, we propose a novel data filtering algorithm to select high-quality and diverse emotional images generated by the well-trained model, which explicitly feed back into the understanding part. Together, these generation-driven dual feedback processes enhance the model's understanding capacity. Extensive experiments show that UniEmo significantly outperforms state-of-the-art methods in both emotional understanding and generation tasks. The code for the proposed method is available at https://github.com/JiuTian-VL/UniEmo.
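The abstract describes learnable expert queries that pool emotional features from images, with the pooled result reused both for understanding and as conditioning for a diffusion model. Below is a minimal, hypothetical sketch of that general pattern (learnable query tokens cross-attending to visual features); the module names, dimensions, and single-scale cross-attention are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class ExpertQueryExtractor(nn.Module):
    """Hypothetical sketch: learnable query tokens cross-attend to image
    features to pool emotion-relevant representations."""
    def __init__(self, num_queries=32, dim=768, num_heads=8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.LayerNorm(dim), nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim)
        )

    def forward(self, visual_feats):                        # visual_feats: (B, N, dim)
        b = visual_feats.size(0)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)     # (B, num_queries, dim)
        attended, _ = self.cross_attn(q, visual_feats, visual_feats)
        return attended + self.ffn(attended)                # pooled emotional features

# The same pooled features could feed an understanding head and serve as a
# conditioning context for a diffusion model (assumed usage, for illustration).
extractor = ExpertQueryExtractor()
patch_tokens = torch.randn(2, 196, 768)                     # e.g., ViT patch tokens
expert_feats = extractor(patch_tokens)                      # (2, 32, 768)
emotion_logits = nn.Linear(768, 8)(expert_feats.mean(dim=1))   # understanding branch
diffusion_condition = expert_feats                          # generation branch (cross-attention context)
```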
Related papers
- CoEmoGen: Towards Semantically-Coherent and Scalable Emotional Image Content Generation [3.5418954219513625]
Emotional Image Content Generation (EICG) aims to generate semantically clear and emotionally faithful images based on given emotion categories. We propose CoEmoGen, a novel pipeline notable for its semantic coherence and high scalability. To intuitively showcase scalability, we curate EmoArt, a large-scale dataset of emotionally evocative artistic images.
arXiv Detail & Related papers (2025-08-05T15:04:34Z)
- Emotion-Qwen: Training Hybrid Experts for Unified Emotion and General Vision-Language Understanding [24.884935271771624]
We present Emotion-Qwen, a tailored framework designed to enhance both emotion understanding and general vision-language reasoning. Emotion-Qwen incorporates a sophisticated hybrid expert module based on the Mixture of Experts (MoE) paradigm, which dynamically routes inputs to balance emotion-specific and general-purpose processing (a generic sketch of this routing pattern appears after the related-papers list). We construct the Video Emotion Reasoning (VER) dataset, comprising more than 40K bilingual video clips with fine-grained descriptive annotations, to further enrich Emotion-Qwen's emotional reasoning capability.
arXiv Detail & Related papers (2025-05-10T16:15:26Z)
- Disentangle Identity, Cooperate Emotion: Correlation-Aware Emotional Talking Portrait Generation [63.94836524433559]
DICE-Talk is a framework that disentangles identity from emotion and leverages cooperation among emotions with similar characteristics. First, we develop a disentangled emotion embedder that jointly models audio-visual emotional cues through cross-modal attention. Second, we introduce a correlation-enhanced emotion conditioning module with learnable Emotion Banks. Third, we design an emotion discrimination objective that enforces affective consistency during the diffusion process.
arXiv Detail & Related papers (2025-04-25T05:28:21Z)
- An Audio-Visual Fusion Emotion Generation Model Based on Neuroanatomical Alignment [15.98131469205444]
We introduce a novel framework named Audio-Visual Fusion for Brain-like Emotion Learning (AVF-BEL). In contrast to conventional brain-inspired emotion learning methods, this approach improves the audio-visual emotion fusion and generation model. The experimental results indicate a significant improvement in the similarity achieved by the audio-visual fusion emotion learning and generation model.
arXiv Detail & Related papers (2025-02-21T14:26:58Z)
- EmoLLM: Multimodal Emotional Understanding Meets Large Language Models [61.179731667080326]
Multi-modal large language models (MLLMs) have achieved remarkable performance on objective multimodal perception tasks.
But their ability to interpret subjective, emotionally nuanced multimodal content remains largely unexplored.
EmoLLM is a novel model for multimodal emotional understanding that incorporates two core techniques.
arXiv Detail & Related papers (2024-06-24T08:33:02Z)
- Enhancing Emotional Generation Capability of Large Language Models via Emotional Chain-of-Thought [50.13429055093534]
Large Language Models (LLMs) have shown remarkable performance in various emotion recognition tasks.
We propose the Emotional Chain-of-Thought (ECoT) to enhance the performance of LLMs on various emotional generation tasks.
arXiv Detail & Related papers (2024-01-12T16:42:10Z)
- Emotion Rendering for Conversational Speech Synthesis with Heterogeneous Graph-Based Context Modeling [50.99252242917458]
Conversational Speech Synthesis (CSS) aims to accurately express an utterance with the appropriate prosody and emotional inflection within a conversational setting.
To address the issue of data scarcity, we meticulously create emotional labels in terms of category and intensity.
Our model outperforms the baseline models in understanding and rendering emotions.
arXiv Detail & Related papers (2023-12-19T08:47:50Z)
- Stimuli-Aware Visual Emotion Analysis [75.68305830514007]
We propose a stimuli-aware visual emotion analysis (VEA) method consisting of three stages, namely stimuli selection, feature extraction and emotion prediction.
To the best of our knowledge, this is the first work to introduce a stimuli selection process into VEA in an end-to-end network.
Experiments demonstrate that the proposed method consistently outperforms the state-of-the-art approaches on four public visual emotion datasets.
arXiv Detail & Related papers (2021-09-04T08:14:52Z)
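The Emotion-Qwen entry above mentions a Mixture-of-Experts module that routes inputs between emotion-specific and general-purpose processing. The following is a generic, hypothetical soft-routing sketch of that idea; the two-expert layout, gating network, and all names are assumptions for illustration, not Emotion-Qwen's implementation.

```python
import torch
import torch.nn as nn

class TwoExpertRouter(nn.Module):
    """Illustrative MoE-style gate that softly mixes an emotion-specific expert
    and a general-purpose expert per token (not Emotion-Qwen's actual design)."""
    def __init__(self, dim=768):
        super().__init__()
        self.gate = nn.Linear(dim, 2)   # routing scores for [emotion, general]
        self.emotion_expert = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
        self.general_expert = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, x):                                # x: (B, N, dim)
        weights = torch.softmax(self.gate(x), dim=-1)    # (B, N, 2)
        return (weights[..., 0:1] * self.emotion_expert(x)
                + weights[..., 1:2] * self.general_expert(x))

router = TwoExpertRouter()
tokens = torch.randn(2, 16, 768)
routed = router(tokens)                                  # (2, 16, 768)
```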
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.