Emoji Retrieval from Gibberish or Garbled Social Media Text: A Novel Methodology and A Case Study
- URL: http://arxiv.org/abs/2412.18046v1
- Date: Mon, 23 Dec 2024 23:44:13 GMT
- Title: Emoji Retrieval from Gibberish or Garbled Social Media Text: A Novel Methodology and A Case Study
- Authors: Shuqi Cui, Nirmalya Thakur, Audrey Poon
- Abstract summary: Emojis are widely used across social media platforms but are often lost in noisy or garbled text.
This paper proposes a three-step reverse-engineering methodology to retrieve emojis from garbled text in social media posts.
- Score: 0.0
- Abstract: Emojis are widely used across social media platforms but are often lost in noisy or garbled text, posing challenges for data analysis and machine learning. Conventional preprocessing approaches recommend removing such text, risking the loss of emojis and their contextual meaning. This paper proposes a three-step reverse-engineering methodology to retrieve emojis from garbled text in social media posts. The methodology also identifies reasons for the generation of such text during social media data mining. To evaluate its effectiveness, the approach was applied to 509,248 Tweets about the Mpox outbreak, a dataset referenced in about 30 prior works that failed to retrieve emojis from garbled text. Our method retrieved 157,748 emojis from 76,914 Tweets. Improvements in text readability and coherence were demonstrated through metrics such as Flesch Reading Ease, Flesch-Kincaid Grade Level, Coleman-Liau Index, Automated Readability Index, Dale-Chall Readability Score, Text Standard, and Reading Time. Additionally, the frequency of individual emojis and their patterns of usage in these Tweets were analyzed, and the results are presented.
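The authors' three-step pipeline is not reproduced on this page, but a common source of such garbled text is mojibake: UTF-8-encoded emojis that were decoded with a single-byte codec such as Windows-1252 or Latin-1. The sketch below is a minimal illustration under that assumption, not the authors' released code; the helper names repair_mojibake and extract_emojis are hypothetical, and the emoji ranges in the regular expression are approximate.

```python
import re

# Approximate emoji ranges (matches individual code points, not full grapheme clusters).
EMOJI_PATTERN = re.compile(
    "["
    "\U0001F1E6-\U0001F1FF"   # regional indicators (flags)
    "\U0001F300-\U0001FAFF"   # pictographs, emoticons, transport, supplemental symbols
    "\u2600-\u27BF"           # miscellaneous symbols and dingbats
    "]+"
)

def repair_mojibake(text: str) -> str:
    """Try to reverse a UTF-8 string that was mis-decoded with a single-byte codec."""
    for codec in ("cp1252", "latin-1"):
        try:
            candidate = text.encode(codec).decode("utf-8")
        except (UnicodeEncodeError, UnicodeDecodeError):
            continue
        # Keep the repair only if it actually surfaces emoji characters.
        if EMOJI_PATTERN.search(candidate):
            return candidate
    return text

def extract_emojis(text: str) -> list[str]:
    """Return the individual emoji code points found in a (repaired) string."""
    return [ch for run in EMOJI_PATTERN.findall(text) for ch in run]

# Simulated example: a tweet whose UTF-8 bytes were read back as Latin-1.
original = "Stay safe everyone 😷🙏"
garbled = original.encode("utf-8").decode("latin-1")   # produces "ð..."-style gibberish
repaired = repair_mojibake(garbled)
print(repaired)                  # "Stay safe everyone 😷🙏" if the repair succeeds
print(extract_emojis(repaired))  # ["😷", "🙏"]
```

The readability metrics cited above (Flesch Reading Ease, Flesch-Kincaid Grade Level, Coleman-Liau Index, Automated Readability Index, Dale-Chall Readability Score, Text Standard, and Reading Time) correspond to functions provided by the Python textstat package, which is one way such before-and-after comparisons could be computed.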
Related papers
- Unleashing the Power of Emojis in Texts via Self-supervised Graph Pre-Training [22.452853652070413]
We unleash the power of emojis in social media data mining.
We propose a graph pre-training framework for joint text and emoji modeling.
arXiv Detail & Related papers (2024-09-22T18:29:10Z)
- Semantics Preserving Emoji Recommendation with Large Language Models [47.94761630160614]
Existing emoji recommendation methods are primarily evaluated based on their ability to match the exact emoji a user chooses in the original text.
We propose a new semantics-preserving evaluation framework for emoji recommendation, which measures a model's ability to recommend emojis that maintain semantic consistency with the user's text.
arXiv Detail & Related papers (2024-09-16T22:27:46Z)
- EmojiLM: Modeling the New Emoji Language [44.23076273155259]
We develop a text-emoji parallel corpus, Text2Emoji, from a large language model.
Based on the parallel corpus, we distill a sequence-to-sequence model, EmojiLM, specialized in bidirectional text-emoji translation.
Our proposed model outperforms strong baselines and the parallel corpus benefits emoji-related downstream tasks.
arXiv Detail & Related papers (2023-11-03T07:06:51Z)
- Emoji Prediction in Tweets using BERT [0.0]
We propose a transformer-based approach for emoji prediction using BERT, a widely-used pre-trained language model.
We fine-tuned BERT on a large corpus of tweets containing both text and emojis to predict the most appropriate emoji for a given text.
Our experimental results demonstrate that our approach outperforms several state-of-the-art models in predicting emojis with an accuracy of over 75 percent.
arXiv Detail & Related papers (2023-07-05T06:38:52Z)
- Emojich -- zero-shot emoji generation using Russian language: a technical report [52.77024349608834]
"Emojich" is a text-to-image neural network that generates emojis using captions in Russian language as a condition.
We aim to keep the generalization ability of a pretrained big model ruDALL-E Malevich (XL) 1.3B parameters at the fine-tuning stage.
arXiv Detail & Related papers (2021-12-04T23:37:32Z)
- Emoji-aware Co-attention Network with EmoGraph2vec Model for Sentiment Analysis [9.447106020795292]
We propose a method to learn emoji representations called EmoGraph2vec and design an emoji-aware co-attention network.
Our model uses a co-attention mechanism to incorporate text and emojis, and integrates a squeeze-and-excitation block into a convolutional neural network.
Experimental results show that the proposed model can outperform several baselines for sentiment analysis on benchmark datasets.
arXiv Detail & Related papers (2021-10-27T08:01:10Z)
- Semantic Journeys: Quantifying Change in Emoji Meaning from 2012-2018 [66.28665205489845]
We offer the first longitudinal study of how emoji semantics changes over time, applying techniques from computational linguistics to six years of Twitter data.
We identify five patterns in emoji semantic development and find evidence that the less abstract an emoji is, the more likely it is to undergo semantic change.
To aid future work on emoji and semantics, we make our data publicly available along with a web-based interface that anyone can use to explore semantic change in emoji.
arXiv Detail & Related papers (2021-05-03T13:35:10Z)
- A `Sourceful' Twist: Emoji Prediction Based on Sentiment, Hashtags and Application Source [1.6818451361240172]
We showcase the importance of using Twitter features to help the model understand the sentiment involved and hence to predict the most suitable emoji for the text.
Our data analysis and neural network performance evaluations show that using hashtags and application sources as features encodes additional information and is effective for emoji prediction.
arXiv Detail & Related papers (2021-03-14T03:05:04Z)
- Assessing Emoji Use in Modern Text Processing Tools [35.79765461713127]
Emojis have become ubiquitous in digital communication, due to their visual appeal as well as their ability to vividly convey human emotion.
The growing prominence of emojis in social media and other instant messaging also leads to an increased need for systems and tools to operate on text containing emojis.
In this study, we assess this support using test sets of tweets containing emojis, on which we run a series of experiments investigating how well prominent NLP and text processing tools handle them.
arXiv Detail & Related papers (2021-01-02T11:38:05Z)
- Forensic Authorship Analysis of Microblogging Texts Using N-Grams and Stylometric Features [63.48764893706088]
This work aims at identifying authors of tweet messages, which are limited to 280 characters.
For our experiments, we use a self-captured database of 40 users, with 120 to 200 tweets per user.
Results using this small set are promising, with the different features providing a classification accuracy between 92% and 98.5%.
arXiv Detail & Related papers (2020-03-24T19:32:11Z)
- TextScanner: Reading Characters in Order for Robust Scene Text Recognition [60.04267660533966]
TextScanner is an alternative approach for scene text recognition.
It generates pixel-wise, multi-channel segmentation maps for character class, position and order.
It also adopts an RNN for context modeling and performs parallel prediction of character position and class.
arXiv Detail & Related papers (2019-12-28T07:52:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.