A Structured Analysis of Journalistic Evaluations for News Source
Reliability
- URL: http://arxiv.org/abs/2205.02736v1
- Date: Thu, 5 May 2022 16:16:03 GMT
- Title: A Structured Analysis of Journalistic Evaluations for News Source
Reliability
- Authors: Manuel Pratelli, Marinella Petrocchi
- Abstract summary: We evaluate two procedures for assessing the risk of online media exposing their readers to mis/disinformation.
Our analysis shows a good degree of agreement, which in our opinion has twofold value.
- Score: 0.456877715768796
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In today's era of information disorder, many organizations are moving to
verify the veracity of news published on the web and social media. In
particular, some agencies are exploring the world of online media and, through
a largely manual process, ranking the credibility and transparency of news
sources around the world. In this paper, we evaluate two procedures for
assessing the risk of online media exposing their readers to mis/disinformation.
The procedures are those defined by NewsGuard and The Global Disinformation
Index, two well-known organizations that combat mis/disinformation via practices
of good journalism. Specifically, considering a fixed set of media outlets, we
examine how many of them were rated equally by the two procedures, and which
aspects led to disagreement in the assessment. Our analysis shows a good degree
of agreement, which in our opinion has twofold value: it strengthens confidence
in the correctness of the procedures and lays the groundwork for their
automation.
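The core of the analysis above is measuring how often two independent rating procedures agree on the same outlets. As a rough illustration, the sketch below computes raw percent agreement and Cohen's kappa over binary risk verdicts; the outlet names and labels are hypothetical placeholders, and kappa is a standard chance-corrected measure rather than necessarily the one used in the paper.

```python
# A minimal sketch of an agreement check between two rating procedures, assuming
# each procedure is reduced to a binary "low risk" / "high risk" verdict per outlet.
# Outlet names and labels are hypothetical placeholders, not data from the paper
# or from NewsGuard / The Global Disinformation Index.

from collections import Counter

def percent_agreement(a, b):
    """Share of shared outlets on which the two procedures give the same verdict."""
    shared = set(a) & set(b)
    return sum(a[o] == b[o] for o in shared) / len(shared)

def cohens_kappa(a, b):
    """Chance-corrected agreement (Cohen's kappa) over the shared outlets."""
    shared = sorted(set(a) & set(b))
    n = len(shared)
    po = sum(a[o] == b[o] for o in shared) / n                      # observed agreement
    ca, cb = Counter(a[o] for o in shared), Counter(b[o] for o in shared)
    pe = sum((ca[k] / n) * (cb[k] / n) for k in set(ca) | set(cb))  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical verdicts from two procedures over the same fixed set of outlets.
proc_1 = {"outlet-a": "low", "outlet-b": "high", "outlet-c": "low", "outlet-d": "low"}
proc_2 = {"outlet-a": "low", "outlet-b": "high", "outlet-c": "high", "outlet-d": "low"}

print(percent_agreement(proc_1, proc_2))  # 0.75
print(cohens_kappa(proc_1, proc_2))       # 0.5
```

On this toy data the two procedures agree on 3 of 4 outlets (75%), which drops to a kappa of 0.5 once chance agreement is discounted.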
Related papers
- Mapping the Media Landscape: Predicting Factual Reporting and Political Bias Through Web Interactions [0.7249731529275342]
We propose an extension to a recently presented news media reliability estimation method.
We assess the classification performance of four reinforcement learning strategies on a large news media hyperlink graph.
Our experiments, targeting two challenging bias descriptors, factual reporting and political bias, showed a significant performance improvement at the source media level.
arXiv Detail & Related papers (2024-10-23T08:18:26Z)
- ManiTweet: A New Benchmark for Identifying Manipulation of News on Social Media [74.93847489218008]
We present a novel task, identifying manipulation of news on social media, which aims to detect manipulation in social media posts and identify manipulated or inserted information.
To study this task, we have proposed a data collection schema and curated a dataset called ManiTweet, consisting of 3.6K pairs of tweets and corresponding articles.
Our analysis demonstrates that this task is highly challenging, with large language models (LLMs) yielding unsatisfactory performance.
arXiv Detail & Related papers (2023-05-23T16:40:07Z)
- Towards Corpus-Scale Discovery of Selection Biases in News Coverage: Comparing What Sources Say About Entities as a Start [65.28355014154549]
This paper investigates the challenges of building scalable NLP systems for discovering patterns of media selection biases directly from news content in massive-scale news corpora.
We show the capabilities of the framework through a case study on NELA-2020, a corpus of 1.8M news articles in English from 519 news sources worldwide.
arXiv Detail & Related papers (2023-04-06T23:36:45Z)
- Unveiling the Hidden Agenda: Biases in News Reporting and Consumption [59.55900146668931]
We build a six-year dataset on the Italian vaccine debate and adopt a Bayesian latent space model to identify narrative and selection biases.
We found a nonlinear relationship between biases and engagement, with higher engagement for extreme positions.
Analysis of news consumption on Twitter reveals common audiences among news outlets with similar ideological positions.
arXiv Detail & Related papers (2023-01-14T18:58:42Z)
- How to Effectively Identify and Communicate Person-Targeting Media Bias in Daily News Consumption? [8.586057042714698]
We present an in-progress system for news recommendation that is the first to automate the manual procedure of content analysis.
Our recommender detects and reveals substantial frames that are actually present in individual news articles.
Our study shows that recommending news articles that differently frame an event significantly improves respondents' awareness of bias.
arXiv Detail & Related papers (2021-10-18T10:13:23Z)
- Stance Detection with BERT Embeddings for Credibility Analysis of Information on Social Media [1.7616042687330642]
We propose a model for detecting fake news using stance as one of the features along with the content of the article.
Our work interprets article content through automatic feature extraction and by assessing the relevance of individual text pieces.
The experiment conducted on the real-world dataset indicates that our model outperforms the previous work and enables fake news detection with an accuracy of 95.32%.
arXiv Detail & Related papers (2021-05-21T10:46:43Z)
- Misinfo Belief Frames: A Case Study on Covid & Climate News [49.979419711713795]
We propose a formalism for understanding how readers perceive the reliability of news and the impact of misinformation.
We introduce the Misinfo Belief Frames (MBF) corpus, a dataset of 66k inferences over 23.5k headlines.
Our results using large-scale language modeling to predict misinformation frames show that machine-generated inferences can influence readers' trust in news headlines.
arXiv Detail & Related papers (2021-04-18T09:50:11Z)
- A Survey on Predicting the Factuality and the Bias of News Media [29.032850263311342]
"The state of the art on media profiling for factuality and bias"
"Political bias detection, which in the Western political landscape is about predicting left-center-right bias"
"Recent advances in using different information sources and modalities"
arXiv Detail & Related papers (2021-03-16T11:11:54Z)
- Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
- Political audience diversity and news reliability in algorithmic ranking [54.23273310155137]
We propose using the political diversity of a website's audience as a quality signal.
Using news source reliability ratings from domain experts and web browsing data from a diverse sample of 6,890 U.S. citizens, we first show that websites with more extreme and less politically diverse audiences have lower journalistic standards.
arXiv Detail & Related papers (2020-07-16T02:13:55Z)
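As a rough illustration of the audience-diversity signal described in the last entry above, the sketch below scores each site by the extremity and spread of its visitors' partisanship and sets those scores next to an expert reliability rating. The domains, visitor partisanship scores, and ratings are hypothetical placeholders, not data from the paper.

```python
# A minimal sketch of using audience political diversity as a quality signal,
# assuming visitor partisanship is encoded on a -1 (left) to +1 (right) scale.
# All domains, visitor scores, and expert ratings below are hypothetical.

from statistics import mean, pstdev

# Hypothetical browsing data: partisanship scores of visitors seen on each domain.
visitors = {
    "example-news.com":  [-0.8, -0.2, 0.1, 0.4, 0.7],   # mixed audience
    "partisan-blog.net": [0.8, 0.9, 0.95, 0.85, 0.9],   # homogeneous, extreme audience
}

# Hypothetical expert reliability ratings on a 0-100 scale.
reliability = {"example-news.com": 82.0, "partisan-blog.net": 31.0}

for domain, scores in visitors.items():
    extremity = abs(mean(scores))  # how far the average visitor sits from the center
    diversity = pstdev(scores)     # spread of the audience's partisanship
    print(f"{domain}: extremity={extremity:.2f}, "
          f"diversity={diversity:.2f}, expert rating={reliability[domain]}")
```

Consistent with the correlation the paper reports, a ranking algorithm could use such scores to down-weight domains whose audiences are both extreme and politically homogeneous.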