Using the profile of publishers to predict barriers across news articles
- URL: http://arxiv.org/abs/2301.05535v1
- Date: Fri, 13 Jan 2023 13:32:42 GMT
- Title: Using the profile of publishers to predict barriers across news articles
- Authors: Abdul Sittar, Dunja Mladenic
- Abstract summary: We present an approach to barrier detection in news spreading by utilizing Wikipedia-concepts and metadata associated with each barrier.
We believe that our approach can provide useful insights that pave the way for the future development of a system for predicting information spreading barriers over the news.
- Score: 0.685316573653194
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Detection of news propagation barriers, being economical, cultural,
political, time zonal, or geographical, is still an open research issue. We
present an approach to barrier detection in news spreading by utilizing
Wikipedia-concepts and metadata associated with each barrier. Solving this
problem not only conveys information about the coverage of an event but also
shows whether the event has been able to cross a specific barrier.
Experimental results on the IPoNews dataset (a dataset for information
spreading over the news) reveal that simple classification models can detect
barriers with high accuracy. We believe that our approach can provide useful
insights that pave the way for the future development of a system for
predicting information spreading barriers over the news.
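The abstract describes representing each article by its Wikipedia concepts and barrier metadata, then applying simple classifiers. As an illustrative sketch only (not the authors' implementation), a Laplace-smoothed Naive Bayes classifier over hypothetical concept annotations, predicting per barrier whether an article crossed it, could look like this; the concept strings and labels below are invented toy data:

```python
from collections import Counter
import math

def train_nb(docs):
    """Train a multinomial Naive Bayes model.
    docs: list of (concepts, label) pairs, where concepts is a list of
    Wikipedia-concept strings extracted from an article and label says
    whether the article crossed the barrier under study."""
    label_counts = Counter(label for _, label in docs)
    concept_counts = {lab: Counter() for lab in label_counts}
    vocab = set()
    for concepts, lab in docs:
        concept_counts[lab].update(concepts)
        vocab.update(concepts)
    return label_counts, concept_counts, vocab

def predict(model, concepts):
    """Return the most probable label under Laplace-smoothed Naive Bayes."""
    label_counts, concept_counts, vocab = model
    total = sum(label_counts.values())
    best_label, best_logp = None, -math.inf
    for lab, n in label_counts.items():
        logp = math.log(n / total)  # log class prior
        denom = sum(concept_counts[lab].values()) + len(vocab)
        for c in concepts:
            # Add-one smoothing so unseen concepts don't zero out the score
            logp += math.log((concept_counts[lab][c] + 1) / denom)
        if logp > best_logp:
            best_label, best_logp = lab, logp
    return best_label

# Toy, invented training data: one classifier per barrier type would be
# trained in a real system, with concepts produced by a wikification tool.
toy = [
    (["Olympic_Games", "Japan", "Sport"], "crossed"),
    (["Local_election", "Municipality"], "blocked"),
    (["FIFA_World_Cup", "Football"], "crossed"),
    (["Village_festival", "Folklore"], "blocked"),
]
model = train_nb(toy)
```

On this toy data, `predict(model, ["Olympic_Games", "Football"])` returns `"crossed"`. A real pipeline would extract concepts with a wikifier, add publisher metadata as extra features, and train one such classifier per barrier (economic, cultural, political, time-zonal, geographical).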
Related papers
- Prompt-and-Align: Prompt-Based Social Alignment for Few-Shot Fake News Detection [50.07850264495737]
"Prompt-and-Align" (P&A) is a novel prompt-based paradigm for few-shot fake news detection.
We show that P&A sets a new state of the art for few-shot fake news detection by significant margins.
arXiv Detail & Related papers (2023-09-28T13:19:43Z)
- Unsupervised Domain-agnostic Fake News Detection using Multi-modal Weak Signals [19.22829945777267]
This work proposes an effective framework for unsupervised fake news detection, which first embeds the knowledge available in four modalities in news records.
Also, we propose a novel technique to construct news datasets minimizing the latent biases in existing news datasets.
We trained the proposed unsupervised framework using LUND-COVID to exploit the potential of large datasets.
arXiv Detail & Related papers (2023-05-18T23:49:31Z)
- Classification of news spreading barriers [3.0036519884678894]
We propose an approach to barrier classification where we infer the semantics of news articles through Wikipedia concepts.
We collect news articles and annotate them for different kinds of barriers using the metadata of news publishers.
We utilize the Wikipedia concepts along with the body text of news articles as features to infer the news-spreading barriers.
arXiv Detail & Related papers (2023-04-10T20:13:54Z)
- A Bayesian Framework for Information-Theoretic Probing [51.98576673620385]
We argue that probing should be seen as approximating mutual information.
This leads to the rather unintuitive conclusion that representations encode exactly the same information about a target task as the original sentences.
This paper proposes a new framework to measure what we term Bayesian mutual information.
arXiv Detail & Related papers (2021-09-08T18:08:36Z)
- Event-Related Bias Removal for Real-time Disaster Events [67.2965372987723]
Social media has become an important tool to share information about crisis events such as natural disasters and mass attacks.
Detecting actionable posts that contain useful information requires rapid analysis of a huge volume of data in real time.
We train an adversarial neural model to remove latent event-specific biases and improve the performance on tweet importance classification.
arXiv Detail & Related papers (2020-11-02T02:03:07Z)
- Detecting Cross-Modal Inconsistency to Defend Against Neural Fake News [57.9843300852526]
We introduce the more realistic and challenging task of defending against machine-generated news that also includes images and captions.
To identify the possible weaknesses that adversaries can exploit, we create a NeuralNews dataset composed of 4 different types of generated articles.
In addition to the valuable insights gleaned from our user study experiments, we provide a relatively effective approach based on detecting visual-semantic inconsistencies.
arXiv Detail & Related papers (2020-09-16T14:13:15Z)
- Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
- Leveraging Multi-Source Weak Social Supervision for Early Detection of Fake News [67.53424807783414]
Social media has greatly enabled people to participate in online activities at an unprecedented rate.
This unrestricted access also exacerbates the spread of misinformation and fake news online, which can cause confusion and chaos unless detected early.
We jointly leverage the limited amount of clean data along with weak signals from social engagements to train deep neural networks in a meta-learning framework to estimate the quality of different weak instances.
Experiments on real-world datasets demonstrate that the proposed framework outperforms state-of-the-art baselines for early detection of fake news without using any user engagements at prediction time.
arXiv Detail & Related papers (2020-04-03T18:26:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.