PolyHope: Two-Level Hope Speech Detection from Tweets
- URL: http://arxiv.org/abs/2210.14136v2
- Date: Thu, 3 Nov 2022 19:54:01 GMT
- Title: PolyHope: Two-Level Hope Speech Detection from Tweets
- Authors: Fazlourrahman Balouchzahi and Grigori Sidorov and Alexander Gelbukh
- Abstract summary: Despite its importance, hope has rarely been studied as a social media analysis task.
This paper presents a hope speech dataset that classifies each tweet first into "Hope" and "Not Hope", then into fine-grained hope categories.
English tweets in the first half of 2022 were collected to build this dataset.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hope is characterized as openness of spirit toward the future, a desire,
expectation, and wish for something to happen or to be true that remarkably
affects a person's state of mind, emotions, behaviors, and decisions. Hope is
usually associated with concepts of desired expectations and
possibility/probability concerning the future. Despite its importance, hope has
rarely been studied as a social media analysis task. This paper presents a hope
speech dataset that classifies each tweet first into "Hope" and "Not Hope",
then into three fine-grained hope categories: "Generalized Hope", "Realistic
Hope", and "Unrealistic Hope" (along with "Not Hope"). English tweets in the
first half of 2022 were collected to build this dataset. Furthermore, we
describe our annotation process and guidelines in detail and discuss the
challenges of classifying hope and the limitations of the existing hope speech
detection corpora. In addition, we report several baselines based on
different learning approaches, such as traditional machine learning, deep
learning, and transformers, to benchmark our dataset. We evaluated our
baselines using weighted-averaged and macro-averaged F1-scores. Observations
show that a strict process for annotator selection and detailed annotation
guidelines enhanced the dataset's quality. This strict annotation process
resulted in promising performance for simple machine learning classifiers with
only bi-grams; however, binary and multiclass hope speech detection results
reveal that contextual embedding models achieve higher performance on this
dataset.
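As an illustration of the kind of simple baseline the abstract describes (this is a sketch, not the authors' code), a bigram Naive Bayes classifier for binary hope speech detection can be evaluated with the same macro-averaged and weighted-averaged F1 metrics. The training examples below are invented for demonstration.

```python
# Illustrative sketch (not the authors' implementation): a bigram
# Naive Bayes baseline for binary hope speech detection, scored with
# macro- and weighted-averaged F1, the metrics used in the paper.
from collections import Counter, defaultdict
import math

def bigrams(text):
    toks = text.lower().split()
    return list(zip(toks, toks[1:]))

def train_nb(samples):
    """samples: list of (text, label). Returns Naive Bayes parameters."""
    class_counts = Counter()
    feat_counts = defaultdict(Counter)
    vocab = set()
    for text, label in samples:
        class_counts[label] += 1
        for bg in bigrams(text):
            feat_counts[label][bg] += 1
            vocab.add(bg)
    return class_counts, feat_counts, vocab

def predict(model, text):
    class_counts, feat_counts, vocab = model
    total = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for label, c in class_counts.items():
        lp = math.log(c / total)  # class prior
        denom = sum(feat_counts[label].values()) + len(vocab)
        for bg in bigrams(text):
            # Laplace (add-one) smoothing for unseen bigrams
            lp += math.log((feat_counts[label][bg] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

def f1_scores(y_true, y_pred):
    """Return (macro_f1, weighted_f1) over the labels in y_true."""
    labels = sorted(set(y_true))
    f1s, supports = [], []
    for lab in labels:
        tp = sum(t == lab and p == lab for t, p in zip(y_true, y_pred))
        fp = sum(t != lab and p == lab for t, p in zip(y_true, y_pred))
        fn = sum(t == lab and p != lab for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
        supports.append(y_true.count(lab))
    macro = sum(f1s) / len(f1s)
    weighted = sum(f * s for f, s in zip(f1s, supports)) / sum(supports)
    return macro, weighted

# Tiny invented examples, purely for illustration.
train = [
    ("i really hope things get better", "Hope"),
    ("we hope for a brighter future", "Hope"),
    ("this traffic is terrible today", "Not Hope"),
    ("nothing interesting happened today", "Not Hope"),
]
model = train_nb(train)
test_x = ["hope for a better future", "terrible traffic again today"]
test_y = ["Hope", "Not Hope"]
preds = [predict(model, t) for t in test_x]
macro, weighted = f1_scores(test_y, preds)
```

Weighted-averaged F1 weights each class's F1 by its support, while macro-averaged F1 treats all classes equally; reporting both, as the paper does, exposes performance on minority classes such as "Unrealistic Hope".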
Related papers
- Beyond Negativity: Re-Analysis and Follow-Up Experiments on Hope Speech Detection (2023-05-10)
  Hope speech refers to comments, posts, and other social media messages that offer support, reassurance, suggestions, inspiration, and insight. The study aims to find efficient yet comparable or superior methods for hope speech detection.
- Hope Speech Detection on Social Media Platforms (2022-11-14)
  This paper discusses various machine learning approaches to identify a sentence as Hope Speech, Non-Hope Speech, or Neutral. The dataset used in the study contains English YouTube comments.
- Hope Speech detection in under-resourced Kannada language (2021-08-10)
  The authors propose KanHope, an English-Kannada hope speech dataset of 6,176 user-generated, code-mixed Kannada comments scraped from YouTube. They introduce DC-BERT4HOPE, a dual-channel model that uses the English translation of KanHope for additional training to improve hope speech detection.
- What Is Considered Complete for Visual Recognition? (2021-05-28)
  The authors advocate a new type of pre-training task named learning-by-compression, in which computational models are optimized to represent visual data using compact features; semantic annotations, when available, serve as weak supervision.
- Learning to Anticipate Egocentric Actions by Imagination (2021-01-13)
  This work studies egocentric action anticipation, predicting a future action seconds before it is performed in egocentric videos. The method significantly outperforms previous methods on both the seen and unseen test sets of the EPIC Kitchens Action Anticipation Challenge.
- What is More Likely to Happen Next? Video-and-Language Future Event Prediction (2020-10-15)
  Given a video with aligned dialogue, people can often infer what is likely to happen next. This work explores whether AI models can learn to make such multimodal commonsense next-event predictions, collecting a new dataset, Video-and-Language Event Prediction, with 28,726 future event prediction examples.
- Deep Sequence Learning for Video Anticipation: From Discrete and Deterministic to Continuous and Stochastic (2020-10-09)
  Video anticipation is the task of predicting one or multiple future representations given limited, partial observation. This thesis makes several contributions to the video anticipation literature.
- Predicting MOOCs Dropout Using Only Two Easily Obtainable Features from the First Week's Activities (2020-08-12)
  Several features are considered to contribute to learner attrition or lack of interest, which may lead to disengagement or total dropout. The study aims to predict dropout early on, from the first week, by comparing several machine learning approaches.
- Improved Speech Representations with Multi-Target Autoregressive Predictive Coding (2020-04-11)
  This work extends the hypothesis that hidden states able to accurately predict future frames are a useful representation for many downstream tasks, proposing an auxiliary objective that serves as a regularizer to improve generalization of the future frame prediction task.
- Spatiotemporal Relationship Reasoning for Pedestrian Intent Prediction (2020-02-20)
  Reasoning over visual data is a desirable capability for robotics and vision-based applications. This paper presents a graph-based framework to uncover relationships among objects in a scene for reasoning about pedestrian intent, defined as the future action of crossing or not crossing the street, a crucial piece of information for autonomous vehicles.
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information and is not responsible for any consequences.