Understanding Textual Emotion Through Emoji Prediction
- URL: http://arxiv.org/abs/2508.10222v1
- Date: Wed, 13 Aug 2025 22:17:00 GMT
- Title: Understanding Textual Emotion Through Emoji Prediction
- Authors: Ethan Gordon, Nishank Kuppa, Rigved Tummala, Sriram Anasuri
- Abstract summary: This project explores emoji prediction from short text sequences using four deep learning architectures. BERT achieves the highest overall performance due to its pre-training advantage. CNN demonstrates superior efficacy on rare emoji classes.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This project explores emoji prediction from short text sequences using four deep learning architectures: a feed-forward network, CNN, transformer, and BERT. Using the TweetEval dataset, we address class imbalance through focal loss and regularization techniques. Results show BERT achieves the highest overall performance due to its pre-training advantage, while CNN demonstrates superior efficacy on rare emoji classes. This research shows the importance of architecture selection and hyperparameter tuning for sentiment-aware emoji prediction, contributing to improved human-computer interaction.
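The abstract's remedy for class imbalance, focal loss, down-weights examples the model already classifies confidently so that hard (often rare-class) emojis contribute more to the gradient. A minimal NumPy sketch of the standard multi-class form, not the paper's implementation; the function name and the optional `alpha` class-weighting are illustrative:

```python
import numpy as np

def focal_loss(probs, targets, gamma=2.0, alpha=None):
    """Multi-class focal loss.
    probs: (N, C) predicted class probabilities; targets: (N,) integer labels.
    gamma=0 recovers plain cross-entropy; larger gamma focuses on hard examples."""
    n = probs.shape[0]
    p_t = probs[np.arange(n), targets]            # probability of the true class
    loss = -((1.0 - p_t) ** gamma) * np.log(p_t)  # (1 - p_t)^gamma down-weights easy examples
    if alpha is not None:                         # optional per-class weights for rare classes
        loss = np.asarray(alpha)[targets] * loss
    return loss.mean()
```

With `gamma=2`, a well-classified example (`p_t = 0.9`) is scaled by `0.01` while a hard one (`p_t = 0.2`) is scaled by `0.64`, which is exactly the reweighting that helps rare emoji classes.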
Related papers
- ViGoEmotions: A Benchmark Dataset For Fine-grained Emotion Detection on Vietnamese Texts [5.670093510042766]
This study introduces ViGoEmotions -- a Vietnamese emotion corpus comprising 20,664 social media comments. To assess the quality of the dataset and its impact on emotion classification, eight pre-trained Transformer-based models were evaluated.
arXiv Detail & Related papers (2026-02-09T08:10:40Z)
- The Prosody of Emojis [73.70220975424597]
This study examines how emojis influence prosodic realisation in speech and how listeners interpret prosodic cues to recover emoji meanings. Unlike previous work, we directly link prosody and emoji by analysing actual human speech data, collected through structured but open-ended production and perception tasks. Results show that speakers adapt their prosody based on emoji cues, listeners can often identify the intended emoji from prosodic variation alone, and greater semantic differences between emojis correspond to increased prosodic divergence.
arXiv Detail & Related papers (2025-08-01T11:24:12Z)
- Unleashing the Power of Emojis in Texts via Self-supervised Graph Pre-Training [22.452853652070413]
We unleash the power of emojis in social media data mining.
We propose a graph pre-training framework for joint modeling of text and emojis.
arXiv Detail & Related papers (2024-09-22T18:29:10Z)
- Semantics Preserving Emoji Recommendation with Large Language Models [47.94761630160614]
Existing emoji recommendation methods are primarily evaluated based on their ability to match the exact emoji a user chooses in the original text.
We propose a new semantics preserving evaluation framework for emoji recommendation, which measures a model's ability to recommend emojis that maintain the semantic consistency with the user's text.
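The evaluation idea in the entry above, judging a recommended emoji by whether the augmented message still means the same thing, is often approximated with embedding similarity. A toy sketch under the assumption that some sentence embedder produces the vectors; the embedder itself is not shown and the function name is illustrative, not the paper's metric:

```python
import numpy as np

def semantic_consistency(text_vec, augmented_vec):
    """Cosine similarity between the embedding of the original text and the
    embedding of the text with the recommended emoji appended. A score near 1.0
    suggests the emoji preserved the message's meaning."""
    num = float(np.dot(text_vec, augmented_vec))
    denom = float(np.linalg.norm(text_vec) * np.linalg.norm(augmented_vec))
    return num / denom
```

Unlike exact-match accuracy, this kind of score gives credit to any emoji that keeps the message semantically consistent, not just the one the user originally typed.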
arXiv Detail & Related papers (2024-09-16T22:27:46Z)
- Emoji Prediction in Tweets using BERT [0.0]
We propose a transformer-based approach for emoji prediction using BERT, a widely-used pre-trained language model.
We fine-tuned BERT on a large corpus of text (tweets) containing both text and emojis to predict the most appropriate emoji for a given text.
Our experimental results demonstrate that our approach outperforms several state-of-the-art models in predicting emojis with an accuracy of over 75 percent.
arXiv Detail & Related papers (2023-07-05T06:38:52Z)
- Cross-modality Data Augmentation for End-to-End Sign Language Translation [66.46877279084083]
End-to-end sign language translation (SLT) aims to convert sign language videos into spoken language texts directly without intermediate representations.
It has been a challenging task due to the modality gap between sign videos and texts and the scarcity of labeled data.
We propose a novel Cross-modality Data Augmentation (XmDA) framework to transfer the powerful gloss-to-text translation capabilities to end-to-end sign language translation.
arXiv Detail & Related papers (2023-05-18T16:34:18Z)
- Multimodal Emotion Recognition using Transfer Learning from Speaker Recognition and BERT-based models [53.31917090073727]
We propose a neural network-based emotion recognition framework that uses a late fusion of transfer-learned and fine-tuned models from speech and text modalities.
We evaluate the effectiveness of our proposed multimodal approach on the interactive emotional dyadic motion capture dataset.
arXiv Detail & Related papers (2022-02-16T00:23:42Z)
- Emojich -- zero-shot emoji generation using Russian language: a technical report [52.77024349608834]
"Emojich" is a text-to-image neural network that generates emojis using captions in Russian language as a condition.
We aim to keep the generalization ability of a pretrained big model ruDALL-E Malevich (XL) 1.3B parameters at the fine-tuning stage.
arXiv Detail & Related papers (2021-12-04T23:37:32Z)
- Emoji-aware Co-attention Network with EmoGraph2vec Model for Sentiment Analysis [9.447106020795292]
We propose a method to learn emoji representations called EmoGraph2vec and design an emoji-aware co-attention network.
Our model designs a co-attention mechanism to incorporate the text and emojis, and integrates a squeeze-and-excitation block into a convolutional neural network.
Experimental results show that the proposed model can outperform several baselines for sentiment analysis on benchmark datasets.
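The squeeze-and-excitation block mentioned in the entry above recalibrates channel responses: global-average-pool each channel, pass the result through a small bottleneck MLP, and gate each channel with a sigmoid. A minimal NumPy sketch; the weight shapes and names are illustrative, not the paper's code:

```python
import numpy as np

def squeeze_excite(x, w1, b1, w2, b2):
    """Squeeze-and-Excitation channel recalibration.
    x: (C, L) feature map (C channels over sequence length L).
    w1: (C, C//r) and w2: (C//r, C) form the bottleneck MLP with reduction r."""
    z = x.mean(axis=1)                         # squeeze: global average pool -> (C,)
    s = np.maximum(z @ w1 + b1, 0.0)           # excitation: FC + ReLU
    s = 1.0 / (1.0 + np.exp(-(s @ w2 + b2)))   # FC + sigmoid -> per-channel gates in (0, 1)
    return x * s[:, None]                      # scale each channel by its gate
```

Because each gate lies in (0, 1), the block can only attenuate channels, letting the network emphasize informative feature maps relative to less useful ones.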
arXiv Detail & Related papers (2021-10-27T08:01:10Z)
- Semantic Journeys: Quantifying Change in Emoji Meaning from 2012-2018 [66.28665205489845]
We offer the first longitudinal study of how emoji semantics changes over time, applying techniques from computational linguistics to six years of Twitter data.
We identify five patterns in emoji semantic development and find evidence that the less abstract an emoji is, the more likely it is to undergo semantic change.
To aid future work on emoji and semantics, we make our data publicly available along with a web-based interface that anyone can use to explore semantic change in emoji.
arXiv Detail & Related papers (2021-05-03T13:35:10Z)
- Be More with Less: Hypergraph Attention Networks for Inductive Text Classification [56.98218530073927]
Graph neural networks (GNNs) have received increasing attention in the research community and demonstrated promising results on text classification.
Despite this success, their performance can be largely jeopardized in practice because they are unable to capture high-order interactions between words.
We propose a principled model -- hypergraph attention networks (HyperGAT) -- which obtains more expressive power at lower computational cost for text representation learning.
arXiv Detail & Related papers (2020-11-01T00:21:59Z)
- Emoji Prediction: Extensions and Benchmarking [30.642840676899734]
The emoji prediction task aims at predicting the proper set of emojis associated with a piece of text.
We extend the existing setting of the emoji prediction task to include a richer set of emojis and to allow multi-label classification.
We propose novel models for multi-class and multi-label emoji prediction based on Transformer networks.
arXiv Detail & Related papers (2020-07-14T22:41:20Z)
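Extending emoji prediction to multi-label classification, as the entry above describes, typically means scoring each emoji independently with a sigmoid and thresholding at decode time. A small sketch of such a decoding rule; the threshold and the top-1 fallback are assumptions for illustration, not the paper's method:

```python
import numpy as np

def predict_emojis(logits, threshold=0.5, min_k=1):
    """Multi-label decoding: one sigmoid per emoji, keep every emoji whose
    probability clears the threshold, falling back to the top-scoring
    emoji when none do (so a prediction is always returned)."""
    probs = 1.0 / (1.0 + np.exp(-logits))      # independent per-emoji probabilities
    picked = np.where(probs >= threshold)[0]
    if picked.size < min_k:
        picked = np.argsort(probs)[::-1][:min_k]
    return sorted(picked.tolist())
```

This differs from the single-label setting, where a softmax forces exactly one emoji per text; here a message can legitimately carry several emojis or, with a stricter threshold, be decoded conservatively.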
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.