PROVENANCE: An Intermediary-Free Solution for Digital Content
Verification
- URL: http://arxiv.org/abs/2111.08791v1
- Date: Tue, 16 Nov 2021 21:42:23 GMT
- Title: PROVENANCE: An Intermediary-Free Solution for Digital Content
Verification
- Authors: Bilal Yousuf, M. Atif Qureshi, Brendan Spillane, Gary Munnelly, Oisin
Carroll, Matthew Runswick, Kirsty Park, Eileen Culloty, Owen Conlan and Jane
Suiter
- Abstract summary: Provenance warns users when the content they are looking at may be misinformation or disinformation.
It is also designed to improve media literacy among its users.
Unlike similar plugins, which require human experts to provide evaluations, Provenance's state-of-the-art technology does not require human input.
- Score: 3.82273842587301
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The threat posed by misinformation and disinformation is one of the defining
challenges of the 21st century. Provenance is designed to help combat this
threat by warning users when the content they are looking at may be
misinformation or disinformation. It is also designed to improve media literacy
among its users and ultimately reduce susceptibility to the threat among
vulnerable groups within society. The Provenance browser plugin checks the
content that users see on the Internet and social media and provides warnings
in their browser or social media feed. Unlike similar plugins, which require
human experts to provide evaluations and can only issue simple binary
warnings, Provenance's state-of-the-art technology requires no human input;
it analyses seven aspects of the content users see and provides warnings
where necessary.
Related papers
- A Pathway Towards Responsible AI Generated Content [68.13835802977125]
We focus on 8 main concerns that may hinder the healthy development and deployment of AIGC in practice.
These concerns include risks from (1) privacy; (2) bias, toxicity, misinformation; (3) intellectual property (IP); (4) robustness; (5) open source and explanation; (6) technology abuse; (7) consent, credit, and compensation; (8) environment.
arXiv Detail & Related papers (2023-03-02T14:58:40Z)
- Deep Breath: A Machine Learning Browser Extension to Tackle Online Misinformation [0.0]
This paper proposes a novel system for detecting, processing, and warning users about misleading content online.
By training a machine learning model on an existing dataset of 32,000 clickbait news article headlines, the model predicts how sensationalist a headline is.
It interfaces with a web browser extension which constructs a unique content warning notification based on existing design principles.
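The headline-scoring idea described above can be sketched in a few lines. This is a minimal, dependency-free illustration only: a tiny hand-made toy corpus and a naive word-ratio score stand in for the paper's 32,000-headline dataset and its actual model, and all headlines below are hypothetical examples.

```python
# Toy sketch of scoring how "sensationalist" a headline is, using a naive
# Bayes-style word-frequency ratio with add-one smoothing. Illustrative only;
# not the Deep Breath paper's model or data.
from collections import Counter

clickbait = [
    "you won't believe what happened next",
    "10 shocking secrets doctors don't want you to know",
    "this one weird trick will change your life",
]
neutral = [
    "government publishes annual budget report",
    "city council approves new transit schedule",
    "central bank holds interest rates steady",
]

def word_counts(headlines):
    c = Counter()
    for h in headlines:
        c.update(h.lower().split())
    return c

cb_counts, nt_counts = word_counts(clickbait), word_counts(neutral)
cb_total, nt_total = sum(cb_counts.values()), sum(nt_counts.values())

def sensationalism(headline):
    """Return a 0..1 score: share of probability mass on the clickbait side,
    with add-one smoothing so unseen words stay roughly neutral."""
    p_cb = p_nt = 1.0
    for w in headline.lower().split():
        p_cb *= (cb_counts[w] + 1) / (cb_total + 2)
        p_nt *= (nt_counts[w] + 1) / (nt_total + 2)
    return p_cb / (p_cb + p_nt)

score = sensationalism("you won't believe these shocking secrets")
```

A graded score like this, rather than a binary label, is what lets the browser extension construct a content warning proportional to how sensationalist a headline appears.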
arXiv Detail & Related papers (2023-01-09T12:43:58Z)
- Fighting Malicious Media Data: A Survey on Tampering Detection and Deepfake Detection [115.83992775004043]
Recent advances in deep learning, particularly deep generative models, open the doors for producing perceptually convincing images and videos at a low cost.
This paper provides a comprehensive review of the current media tampering detection approaches, and discusses the challenges and trends in this field for future research.
arXiv Detail & Related papers (2022-12-12T02:54:08Z)
- Mitigating Covertly Unsafe Text within Natural Language Systems [55.26364166702625]
Uncontrolled systems may generate recommendations that lead to injury or life-threatening consequences.
In this paper, we distinguish types of text that can lead to physical harm and establish one particularly underexplored category: covertly unsafe text.
arXiv Detail & Related papers (2022-10-17T17:59:49Z)
- Cybersecurity Misinformation Detection on Social Media: Case Studies on Phishing Reports and Zoom's Threats [1.2387676601792899]
We propose novel approaches for detecting misinformation about cybersecurity and privacy threats on social media.
We developed a framework for detecting inaccurate phishing claims on Twitter.
We also proposed another framework for detecting misinformation related to Zoom's security and privacy threats on multiple platforms.
arXiv Detail & Related papers (2021-10-23T20:45:24Z)
- Characterizing User Susceptibility to COVID-19 Misinformation on Twitter [40.0762273487125]
This study attempts to answer who constitutes the population vulnerable to online misinformation during the pandemic.
We distinguish different types of users, ranging from social bots to humans with various levels of engagement with COVID-related misinformation.
We then identify users' online features and situational predictors that correlate with their susceptibility to COVID-19 misinformation.
arXiv Detail & Related papers (2021-09-20T13:31:15Z)
- Interpretable Propaganda Detection in News Articles [30.192497301608164]
We propose to detect and to show the use of deception techniques as a way to offer interpretability.
Our interpretable features can be easily combined with pre-trained language models, yielding state-of-the-art results.
arXiv Detail & Related papers (2021-08-29T09:57:01Z)
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab leads users to engage with both types of content, with a slight preference for questionable content, which may reflect a dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
- Adapting Security Warnings to Counter Online Disinformation [6.592035021489205]
We adapt methods and results from the information security warning literature to design effective disinformation warnings.
We found that users routinely ignore contextual warnings, but users notice interstitial warnings.
We found that a warning's design could effectively inform users or convey a risk of harm.
arXiv Detail & Related papers (2020-08-25T01:10:57Z)
- Leveraging Multi-Source Weak Social Supervision for Early Detection of Fake News [67.53424807783414]
Social media has greatly enabled people to participate in online activities at an unprecedented rate.
This unrestricted access also exacerbates the spread of misinformation and fake news online, which can cause confusion and chaos unless detected early and mitigated.
We jointly leverage the limited amount of clean data along with weak signals from social engagements to train deep neural networks in a meta-learning framework to estimate the quality of different weak instances.
Experiments on real-world datasets demonstrate that the proposed framework outperforms state-of-the-art baselines for early detection of fake news without using any user engagements at prediction time.
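The core idea of down-weighting unreliable weakly labeled instances can be sketched very roughly. The paper trains deep neural networks in a meta-learning framework; the toy code below substitutes a linear least-squares model and an agreement-based quality weight, so it illustrates only the general principle, not the paper's method. All data, labels, and noise rates here are synthetic and hypothetical.

```python
# Toy sketch: use a small clean set to estimate the quality of noisy weak
# labels, then refit with quality-weighted instances. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ground truth: label = 1 if x + y > 0
def true_label(X):
    return (X.sum(axis=1) > 0).astype(int)

# Small clean set (expert-verified labels in the paper's setting)
X_clean = rng.normal(size=(50, 2))
y_clean = true_label(X_clean)

# Large weak set: labels from "social signals", correct only ~70% of the time
X_weak = rng.normal(size=(500, 2))
flip = rng.random(500) < 0.3
y_weak = np.where(flip, 1 - true_label(X_weak), true_label(X_weak))

# Fit a simple linear scorer on the clean set (targets mapped to +/-1)
w, *_ = np.linalg.lstsq(X_clean, 2 * y_clean - 1, rcond=None)

# Quality weight per weak instance: sigmoid of the agreement margin between
# the clean-data scorer and the weak label
margin = (X_weak @ w) * (2 * y_weak - 1)
quality = 1.0 / (1.0 + np.exp(-margin))

# Refit on clean + quality-weighted weak instances (weighted least squares)
sw = np.sqrt(np.concatenate([np.ones(len(X_clean)), quality]))
X_all = np.vstack([X_clean, X_weak])
y_all = np.concatenate([2 * y_clean - 1, 2 * y_weak - 1])
w_final, *_ = np.linalg.lstsq(X_all * sw[:, None], y_all * sw, rcond=None)
```

In this sketch the flipped (incorrect) weak labels receive systematically lower quality weights, which is the effect the meta-learning framework achieves in a learned, end-to-end fashion.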
arXiv Detail & Related papers (2020-04-03T18:26:33Z)
- Mining Disinformation and Fake News: Concepts, Methods, and Recent Advancements [55.33496599723126]
Disinformation, including fake news, has become a global phenomenon due to its explosive growth.
Despite recent progress in detecting disinformation and fake news, the task remains non-trivial due to its complexity, diversity, multi-modality, and the costs of fact-checking and annotation.
arXiv Detail & Related papers (2020-01-02T21:01:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated list (including all information) and is not responsible for any consequences.