Contrastive News and Social Media Linking using BERT for Articles and
Tweets across Dual Platforms
- URL: http://arxiv.org/abs/2312.07599v1
- Date: Mon, 11 Dec 2023 13:38:16 GMT
- Title: Contrastive News and Social Media Linking using BERT for Articles and
Tweets across Dual Platforms
- Authors: Jan Piotrowski, Marek Wachnicki, Mateusz Perlik, Jakub Podolak,
Grzegorz Rucki, Michał Brzozowski, Paweł Olejnik, Julian Kozłowski,
Tomasz Nocoń, Jakub Kozieł, Stanisław Giziński and Piotr Sankowski
- Abstract summary: This paper introduces a contrastive learning approach for training a representation space where linked articles and tweets exhibit proximity.
We present our contrastive learning approach, CATBERT (Contrastive Articles Tweets BERT), leveraging pre-trained BERT models.
Our findings indicate that CATBERT demonstrates superior performance in associating tweets with relevant news articles.
- Score: 1.5409664608353888
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: X (formerly Twitter) has evolved into a contemporary agora, offering a
platform for individuals to express opinions and viewpoints on current events.
The majority of the topics discussed on Twitter are directly related to ongoing
events, making it an important source for monitoring public discourse. However,
linking tweets to specific news presents a significant challenge due to their
concise and informal nature. Previous approaches, including topic models,
graph-based models, and supervised classifiers, have fallen short in
effectively capturing the unique characteristics of tweets and articles.
Inspired by the success of the CLIP model in computer vision, which employs
contrastive learning to model similarities between images and captions, this
paper introduces a contrastive learning approach for training a representation
space where linked articles and tweets exhibit proximity. We present our
contrastive learning approach, CATBERT (Contrastive Articles Tweets BERT),
leveraging pre-trained BERT models. The model is trained and tested on a
dataset containing manually labeled English and Polish tweets and articles
related to the Russian-Ukrainian war. We evaluate CATBERT's performance against
traditional approaches such as LDA, and a novel method based on OpenAI
embeddings that has not previously been applied to this task. Our findings
indicate that CATBERT demonstrates superior performance in associating tweets
with relevant news articles. Furthermore, we evaluate how well the models
identify the main topic -- represented by an article -- of a whole cascade of
tweets. For this new task, we report each model's performance as a function of
cascade size.
Related papers
- Hashing it Out: Predicting Unhealthy Conversations on Twitter [0.17175853976270528]
We show that an Attention-based BERT architecture, pre-trained on a large Twitter corpus, is efficient and effective in making such predictions.
This work lays the foundation for a practical tool to encourage better interactions on one of the most ubiquitous social media platforms.
arXiv Detail & Related papers (2023-11-17T15:49:11Z) - ManiTweet: A New Benchmark for Identifying Manipulation of News on Social Media [74.93847489218008]
We present a novel task, identifying manipulation of news on social media, which aims to detect manipulation in social media posts and identify manipulated or inserted information.
To study this task, we have proposed a data collection schema and curated a dataset called ManiTweet, consisting of 3.6K pairs of tweets and corresponding articles.
Our analysis demonstrates that this task is highly challenging, with large language models (LLMs) yielding unsatisfactory performance.
arXiv Detail & Related papers (2023-05-23T16:40:07Z) - CaMEL: Mean Teacher Learning for Image Captioning [47.9708610052655]
We present CaMEL, a novel Transformer-based architecture for image captioning.
Our proposed approach leverages the interaction of two interconnected language models that learn from each other during the training phase.
Experimentally, we assess the effectiveness of the proposed solution on the COCO dataset and in conjunction with different visual feature extractors.
arXiv Detail & Related papers (2022-02-21T19:04:46Z) - Dynamic Language Models for Continuously Evolving Content [19.42658043326054]
In recent years, pre-trained language models like BERT have greatly improved the state of the art for content understanding tasks.
In this paper, we aim to study how these language models can be adapted to better handle continuously evolving web content.
arXiv Detail & Related papers (2021-06-11T10:33:50Z) - Sentiment analysis in tweets: an assessment study from classical to
modern text representation models [59.107260266206445]
Short texts published on Twitter have earned significant attention as a rich source of information.
Their inherent characteristics, such as their informal and noisy linguistic style, remain challenging for many natural language processing (NLP) tasks.
This study provides an assessment of existing language models for identifying the sentiment expressed in tweets, using a rich collection of 22 datasets.
arXiv Detail & Related papers (2021-05-29T21:05:28Z) - Pre-Training BERT on Arabic Tweets: Practical Considerations [11.087099497830552]
We pretrained 5 BERT models that differ in the size of their training sets, mixture of formal and informal Arabic, and linguistic preprocessing.
All are intended to support Arabic dialects and social media.
The new models achieve state-of-the-art results on several downstream tasks.
arXiv Detail & Related papers (2021-02-21T20:51:33Z) - Neuro-Symbolic Representations for Video Captioning: A Case for
Leveraging Inductive Biases for Vision and Language [148.0843278195794]
We propose a new model architecture for learning multi-modal neuro-symbolic representations for video captioning.
Our approach uses a dictionary learning-based method of learning relations between videos and their paired text descriptions.
arXiv Detail & Related papers (2020-11-18T20:21:19Z) - TweetBERT: A Pretrained Language Representation Model for Twitter Text
Analysis [0.0]
We introduce two TweetBERT models, which are domain-specific language representation models, pre-trained on millions of tweets.
We show that the TweetBERT models significantly outperform the traditional BERT models in Twitter text mining tasks by more than 7% on each Twitter dataset.
arXiv Detail & Related papers (2020-10-17T00:45:02Z) - InfoBERT: Improving Robustness of Language Models from An Information
Theoretic Perspective [84.78604733927887]
Large-scale language models such as BERT have achieved state-of-the-art performance across a wide range of NLP tasks.
Recent studies show that such BERT-based models are vulnerable to textual adversarial attacks.
We propose InfoBERT, a novel learning framework for robust fine-tuning of pre-trained language models.
arXiv Detail & Related papers (2020-10-05T20:49:26Z) - LTIatCMU at SemEval-2020 Task 11: Incorporating Multi-Level Features for
Multi-Granular Propaganda Span Identification [70.1903083747775]
This paper describes our submission for the task of Propaganda Span Identification in news articles.
We introduce a BERT-BiLSTM based span-level propaganda classification model that identifies which token spans within a sentence are indicative of propaganda (a rough sketch of this kind of architecture follows this list).
arXiv Detail & Related papers (2020-08-11T16:14:47Z)
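As referenced in the LTIatCMU entry above, a BERT-BiLSTM span tagger can be sketched roughly as follows; the layer sizes and the binary per-token label scheme (propaganda vs. not) are assumptions for illustration, not the authors' exact system.

```python
import torch.nn as nn
from transformers import AutoModel


class BertBiLSTMTagger(nn.Module):
    """BERT encoder followed by a BiLSTM and a per-token classifier."""

    def __init__(self, model_name="bert-base-cased", lstm_hidden=256, num_labels=2):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        self.lstm = nn.LSTM(self.bert.config.hidden_size, lstm_hidden,
                            batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * lstm_hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        # Contextual token embeddings from BERT, refined by the BiLSTM,
        # then scored per token as inside/outside a propaganda span.
        hidden = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        lstm_out, _ = self.lstm(hidden)
        return self.classifier(lstm_out)  # (batch, seq_len, num_labels)
```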