ViGoEmotions: A Benchmark Dataset For Fine-grained Emotion Detection on Vietnamese Texts
- URL: http://arxiv.org/abs/2602.08371v1
- Date: Mon, 09 Feb 2026 08:10:40 GMT
- Title: ViGoEmotions: A Benchmark Dataset For Fine-grained Emotion Detection on Vietnamese Texts
- Authors: Hung Quang Tran, Nam Tien Pham, Son T. Luu, Kiet Van Nguyen
- Abstract summary: This study introduces ViGoEmotions -- a Vietnamese emotion corpus comprising 20,664 social media comments. To evaluate the quality of the dataset and its impact on emotion classification, eight pre-trained Transformer-based models were benchmarked.
- Score: 5.670093510042766
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Emotion classification plays a significant role in emotion prediction and harmful content detection. Recent advancements in NLP, particularly through large language models (LLMs), have greatly improved outcomes in this field. This study introduces ViGoEmotions -- a Vietnamese emotion corpus comprising 20,664 social media comments in which each comment is classified into 27 distinct, fine-grained emotions. To evaluate the quality of the dataset and its impact on emotion classification, eight pre-trained Transformer-based models were benchmarked under three preprocessing strategies: preserving original emojis with rule-based normalization, converting emojis into textual descriptions, and applying ViSoLex, a model-based lexical normalization system. Results show that converting emojis into text often improves the performance of several BERT-based baselines, while preserving emojis yields the best results for ViSoBERT and CafeBERT. In contrast, removing emojis generally leads to lower performance. ViSoBERT achieved the highest Macro F1-score of 61.50% and Weighted F1-score of 63.26%. Strong performance was also observed from CafeBERT and PhoBERT. These findings highlight that while the proposed corpus can support diverse architectures effectively, preprocessing strategies and annotation quality remain key factors influencing downstream performance.
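The three emoji-preprocessing strategies from the abstract can be sketched as below. This is a minimal illustration, not the paper's pipeline: the two-entry lexicon is purely hypothetical, and the actual work relies on rule-based normalization and the ViSoLex system rather than a hand-built dictionary.

```python
import re

# Toy emoji lexicon for illustration only; the paper's rule-based
# normalization and ViSoLex handle a far richer emoji/lexical space.
EMOJI_LEXICON = {
    "😂": "face with tears of joy",
    "❤️": "red heart",
}
EMOJI_PATTERN = re.compile("|".join(map(re.escape, EMOJI_LEXICON)))

def keep_emojis(text: str) -> str:
    """Strategy 1: preserve emojis (rule-based normalization omitted here)."""
    return text

def emojis_to_text(text: str) -> str:
    """Strategy 2: replace each emoji with its textual description."""
    out = EMOJI_PATTERN.sub(lambda m: f" {EMOJI_LEXICON[m.group(0)]} ", text)
    return " ".join(out.split())

def remove_emojis(text: str) -> str:
    """Strategy 3 (the ablation that hurt performance): delete emojis."""
    return " ".join(EMOJI_PATTERN.sub(" ", text).split())

print(emojis_to_text("hay quá 😂"))  # hay quá face with tears of joy
print(remove_emojis("hay quá 😂"))   # hay quá
```

The reported result pattern (emoji-to-text helping several BERT baselines, emoji preservation helping ViSoBERT and CafeBERT) suggests the best choice among these transforms depends on whether the model's pretraining vocabulary already covers emojis.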
Related papers
- Understanding Textual Emotion Through Emoji Prediction [0.0]
This project explores emoji prediction from short text sequences using four deep learning architectures. BERT achieves the highest overall performance due to its pre-training advantage. CNN demonstrates superior efficacy on rare emoji classes.
arXiv Detail & Related papers (2025-08-13T22:17:00Z) - When Words Smile: Generating Diverse Emotional Facial Expressions from Text [77.1867389815291]
We introduce an end-to-end text-to-expression model that explicitly focuses on emotional dynamics. Our model learns expressive facial variations in a continuous latent space and generates expressions that are diverse, fluid, and emotionally coherent.
arXiv Detail & Related papers (2024-12-03T15:39:05Z) - ASCIIEval: Benchmarking Models' Visual Perception in Text Strings via ASCII Art [83.95594027644124]
We frame the problem as a recognition task, and construct a novel benchmark, ASCIIEval. It covers over 3K samples with an elaborate categorization tree, along with a training set for further enhancement. Given textual input, language models show their visual perception ability on ASCII art concepts. For image inputs, we reveal that open-source MLLMs suffer from a trade-off between fine-grained text recognition and collective visual perception.
arXiv Detail & Related papers (2024-10-02T16:46:01Z) - Semantics Preserving Emoji Recommendation with Large Language Models [47.94761630160614]
Existing emoji recommendation methods are primarily evaluated based on their ability to match the exact emoji a user chooses in the original text.
We propose a new semantics preserving evaluation framework for emoji recommendation, which measures a model's ability to recommend emojis that maintain the semantic consistency with the user's text.
arXiv Detail & Related papers (2024-09-16T22:27:46Z) - GenAI-Bench: Evaluating and Improving Compositional Text-to-Visual Generation [103.3465421081531]
VQAScore is a metric measuring the likelihood that a VQA model views an image as accurately depicting the prompt.
Ranking by VQAScore is 2x to 3x more effective than other scoring methods like PickScore, HPSv2, and ImageReward.
We release a new GenAI-Rank benchmark with over 40,000 human ratings to evaluate scoring metrics on ranking images generated from the same prompt.
arXiv Detail & Related papers (2024-06-19T18:00:07Z) - EmojiLM: Modeling the New Emoji Language [44.23076273155259]
We develop a text-emoji parallel corpus, Text2Emoji, from a large language model.
Based on the parallel corpus, we distill a sequence-to-sequence model, EmojiLM, which is specialized in the text-emoji bidirectional translation.
Our proposed model outperforms strong baselines and the parallel corpus benefits emoji-related downstream tasks.
arXiv Detail & Related papers (2023-11-03T07:06:51Z) - Emoji Prediction in Tweets using BERT [0.0]
We propose a transformer-based approach for emoji prediction using BERT, a widely-used pre-trained language model.
We fine-tuned BERT on a large corpus of text (tweets) containing both text and emojis to predict the most appropriate emoji for a given text.
Our experimental results demonstrate that our approach outperforms several state-of-the-art models in predicting emojis with an accuracy of over 75 percent.
arXiv Detail & Related papers (2023-07-05T06:38:52Z) - A New Generation of Perspective API: Efficient Multilingual Character-level Transformers [66.9176610388952]
We present the fundamentals behind the next version of the Perspective API from Google Jigsaw.
At the heart of the approach is a single multilingual token-free Charformer model.
We demonstrate that by forgoing static vocabularies, we gain flexibility across a variety of settings.
arXiv Detail & Related papers (2022-02-22T20:55:31Z) - Emoji-aware Co-attention Network with EmoGraph2vec Model for Sentiment Analysis [9.447106020795292]
We propose a method to learn emoji representations called EmoGraph2vec and design an emoji-aware co-attention network.
Our model designs a co-attention mechanism to incorporate the text and emojis, and integrates a squeeze-and-excitation block into a convolutional neural network.
Experimental results show that the proposed model can outperform several baselines for sentiment analysis on benchmark datasets.
arXiv Detail & Related papers (2021-10-27T08:01:10Z) - Emoji Prediction: Extensions and Benchmarking [30.642840676899734]
The emoji prediction task aims at predicting the proper set of emojis associated with a piece of text.
We extend the existing setting of the emoji prediction task to include a richer set of emojis and to allow multi-label classification.
We propose novel models for multi-class and multi-label emoji prediction based on Transformer networks.
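The multi-class versus multi-label framing contrasted above can be sketched with the two target encodings involved. The four-emoji vocabulary here is hypothetical and serves only to show the encoding difference; the paper's extended emoji set is much richer.

```python
# Hypothetical four-emoji vocabulary, used only to illustrate the two
# target encodings; it is not the label set from the paper.
EMOJI_VOCAB = ["😂", "❤️", "🔥", "😭"]

def multi_class_target(emoji: str) -> int:
    """Multi-class: exactly one emoji per text -> a single class index."""
    return EMOJI_VOCAB.index(emoji)

def multi_label_target(emojis: set) -> list:
    """Multi-label: a text may carry several emojis -> a multi-hot vector."""
    return [1 if e in emojis else 0 for e in EMOJI_VOCAB]

print(multi_class_target("❤️"))         # 1
print(multi_label_target({"😂", "🔥"}))  # [1, 0, 1, 0]
```

The multi-hot vector is what allows a Transformer classifier head with per-label sigmoid outputs to predict several emojis at once, instead of a single softmax class.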
arXiv Detail & Related papers (2020-07-14T22:41:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.