Towards a New Science of Disinformation
- URL: http://arxiv.org/abs/2204.01489v1
- Date: Thu, 17 Mar 2022 19:10:29 GMT
- Title: Towards a New Science of Disinformation
- Authors: Claudio S. Pinhanez, German H. Flores, Marisa A. Vasconcelos, Mu Qiao,
Nick Linck, Rogério de Paula, Yuya J. Ong
- Abstract summary: We discuss how to best address the dangerous impact that deep learning-generated fake audio, photographs, and videos (a.k.a. deepfakes) may have on personal and societal life.
We propose that a new Science of Disinformation is needed, one which creates a theoretical framework for both the communication and the consumption of false content.
- Score: 3.7305343461339664
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: How can we best address the dangerous impact that deep learning-generated
fake audio, photographs, and videos (a.k.a. deepfakes) may have on personal
and societal life? We foresee that the availability of cheap deepfake
technology will create a second wave of disinformation where people will
receive specific, personalized disinformation through different channels,
making the current approaches to fight disinformation obsolete. We argue that
fake media has to be treated as an emerging cybersecurity problem, and that we
have to shift from combating its spread to a prevention-and-cure framework in
which users have readily available ways to verify, challenge, and argue against the veracity of
each piece of media they are exposed to. To create the technologies behind this
framework, we propose that a new Science of Disinformation is needed, one which
creates a theoretical framework both for the processes of communication and
consumption of false content. Key scientific and technological challenges
facing this research agenda are listed and discussed in light of
state-of-the-art technologies for fake media generation and detection, argument
finding and construction, and how to effectively engage users in the prevention
and cure processes.
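The prevention-and-cure framework sketched above presumes that users can run detection tools over each piece of media they receive. As a purely illustrative sketch (not taken from this paper), the snippet below shows what a minimal learning-based deepfake image scorer could look like, assuming PyTorch and torchvision are available; the model, weights, threshold, and file name are placeholders, not a validated detector.

```python
# Minimal, illustrative sketch of a learning-based deepfake image scorer.
# Assumptions (not from the paper): PyTorch + torchvision are installed, and a
# binary real-vs-fake classifier would be trained on labeled data beforehand.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Backbone with a single-logit head: scores above 0.5 after the sigmoid
# would be read as "likely fake" by this illustrative setup.
model = models.resnet18(weights=None)          # placeholder, untrained
model.fc = nn.Linear(model.fc.in_features, 1)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def fake_score(image_path: str) -> float:
    """Return a [0, 1] score; higher = more likely manipulated (illustrative only)."""
    img = Image.open(image_path).convert("RGB")
    x = preprocess(img).unsqueeze(0)           # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logit = model(x)
    return torch.sigmoid(logit).item()

if __name__ == "__main__":
    # Hypothetical file name; a real verification tool would also surface
    # evidence the user can inspect (artifacts, provenance), not just a score.
    print(f"fake score: {fake_score('suspect_photo.jpg'):.3f}")
```

In the framework the paper argues for, such a score would be one input among several that help a user verify, challenge, or contest a piece of media, rather than an automatic verdict.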
Related papers
- Deepfake Media Forensics: State of the Art and Challenges Ahead [51.33414186878676]
AI-generated synthetic media, also called Deepfakes, have affected many domains, from entertainment to cybersecurity.
Deepfake detection has become a vital area of research, focusing on identifying subtle inconsistencies and artifacts with machine learning techniques.
This paper reviews the primary algorithms that address these challenges, examining their advantages, limitations, and future prospects.
arXiv Detail & Related papers (2024-08-01T08:57:47Z)
- Finding Fake News Websites in the Wild [0.0860395700487494]
We propose a novel methodology for identifying websites responsible for creating and disseminating misinformation content.
We validate our approach on Twitter by examining various execution modes and contexts.
arXiv Detail & Related papers (2024-07-09T18:00:12Z)
- Deepfake Generation and Detection: A Benchmark and Survey [134.19054491600832]
Deepfake is a technology dedicated to creating highly realistic facial images and videos under specific conditions.
This survey comprehensively reviews the latest developments in deepfake generation and detection.
We focus on researching four representative deepfake fields: face swapping, face reenactment, talking face generation, and facial attribute editing.
arXiv Detail & Related papers (2024-03-26T17:12:34Z)
- The Age of Synthetic Realities: Challenges and Opportunities [85.058932103181]
We highlight the crucial need for the development of forensic techniques capable of identifying harmful synthetic creations and distinguishing them from reality.
Our focus extends to various forms of media, such as images, videos, audio, and text, as we examine how synthetic realities are crafted and explore approaches to detecting these malicious creations.
This study is of paramount importance due to the rapid progress of AI generative techniques and their impact on the fundamental principles of Forensic Science.
arXiv Detail & Related papers (2023-06-09T15:55:10Z)
- Fighting Malicious Media Data: A Survey on Tampering Detection and Deepfake Detection [115.83992775004043]
Recent advances in deep learning, particularly in deep generative models, open the door to producing perceptually convincing images and videos at low cost.
This paper provides a comprehensive review of the current media tampering detection approaches, and discusses the challenges and trends in this field for future research.
arXiv Detail & Related papers (2022-12-12T02:54:08Z)
- Using Deep Learning to Detecting Deepfakes [0.0]
Deepfakes are videos or images in which one person's face is replaced with a computer-generated face, often that of a more recognizable public figure.
To combat this online threat, researchers have developed models that are designed to detect deepfakes.
This study looks at various deepfake detection models that use deep learning algorithms to combat this looming threat.
arXiv Detail & Related papers (2022-07-27T17:05:16Z)
- Deepfakes Generation and Detection: State-of-the-art, open challenges, countermeasures, and way forward [2.15242029196761]
Deepfakes can be generated to disseminate disinformation, revenge porn, financial fraud, and hoaxes, and to disrupt government functioning.
To date, no attempt has been made to review approaches for the detection and generation of both audio and video deepfakes.
This paper provides a comprehensive review and detailed analysis of existing tools and machine learning (ML) based approaches for deepfake generation.
arXiv Detail & Related papers (2021-02-25T18:26:50Z)
- An Agenda for Disinformation Research [3.083055913556838]
Disinformation erodes trust in the socio-political institutions that are the fundamental fabric of democracy.
The distribution of false, misleading, or inaccurate information with the intent to deceive is an existential threat to the United States.
New tools and approaches must be developed to leverage these affordances to understand and address this growing challenge.
arXiv Detail & Related papers (2020-12-15T19:32:36Z)
- Detecting Cross-Modal Inconsistency to Defend Against Neural Fake News [57.9843300852526]
We introduce the more realistic and challenging task of defending against machine-generated news that also includes images and captions.
To identify the possible weaknesses that adversaries can exploit, we create the NeuralNews dataset, composed of four different types of generated articles.
In addition to the valuable insights gleaned from our user study experiments, we provide a relatively effective approach based on detecting visual-semantic inconsistencies (a rough sketch of such a cross-modal consistency check appears after this list).
arXiv Detail & Related papers (2020-09-16T14:13:15Z)
- Mining Disinformation and Fake News: Concepts, Methods, and Recent Advancements [55.33496599723126]
Disinformation, including fake news, has become a global phenomenon due to its explosive growth.
Despite recent progress in detecting disinformation and fake news, the task remains non-trivial due to its complexity, diversity, multi-modality, and the costs of fact-checking or annotation.
arXiv Detail & Related papers (2020-01-02T21:01:02Z)
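The cross-modal inconsistency idea summarized in the Detecting Cross-Modal Inconsistency to Defend Against Neural Fake News entry above can be illustrated, very roughly, by scoring how well an image matches its caption with a pretrained vision-language model. The sketch below uses the Hugging Face transformers CLIP interface as an assumed off-the-shelf stand-in; it is not the approach from that paper, the checkpoint name is only an example, and the file name and any cutoff are hypothetical.

```python
# Rough sketch of a visual-semantic consistency check for an image/caption pair.
# Assumptions (not from the cited paper): Hugging Face `transformers` with a
# pretrained CLIP checkpoint is available; any decision threshold is arbitrary.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

def image_caption_similarity(image_path: str, caption: str) -> float:
    """Cosine similarity between CLIP image and text embeddings (range -1 to 1)."""
    image = Image.open(image_path).convert("RGB")
    inputs = processor(text=[caption], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    img_emb = outputs.image_embeds / outputs.image_embeds.norm(dim=-1, keepdim=True)
    txt_emb = outputs.text_embeds / outputs.text_embeds.norm(dim=-1, keepdim=True)
    return float((img_emb @ txt_emb.T).item())

if __name__ == "__main__":
    # Hypothetical inputs; a low similarity only suggests the caption may not
    # describe the image and would warrant human review, not automatic rejection.
    sim = image_caption_similarity("article_photo.jpg",
                                   "Flood waters cover Main Street")
    print(f"visual-semantic similarity: {sim:.3f} (low values warrant scrutiny)")
```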