Understanding Health Misinformation Transmission: An Interpretable Deep
Learning Approach to Manage Infodemics
- URL: http://arxiv.org/abs/2101.01076v1
- Date: Mon, 21 Dec 2020 15:49:19 GMT
- Title: Understanding Health Misinformation Transmission: An Interpretable Deep
Learning Approach to Manage Infodemics
- Authors: Jiaheng Xie, Yidong Chai, Xiao Liu
- Abstract summary: This study proposes a novel interpretable deep learning approach, Generative Adversarial Network based Piecewise Wide and Attention Deep Learning (GAN-PiWAD) to predict health misinformation transmission in social media.
We select features according to social exchange theory and evaluate GAN-PiWAD on 4,445 misinformation videos.
Our findings provide direct implications for social media platforms and policymakers to design proactive interventions to identify misinformation, control transmissions, and manage infodemics.
- Score: 6.08461198240039
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Health misinformation on social media devastates physical and mental health,
invalidates health gains, and potentially costs lives. Understanding how health
misinformation is transmitted is an urgent goal for researchers, social media
platforms, health sectors, and policymakers to mitigate those ramifications.
Deep learning methods have been deployed to predict the spread of
misinformation. While achieving state-of-the-art predictive performance,
deep learning methods lack interpretability due to their black-box nature.
To address this gap, this study proposes a novel interpretable deep learning
approach, Generative Adversarial Network based Piecewise Wide and Attention
Deep Learning (GAN-PiWAD), to predict health misinformation transmission in
social media. Improving upon state-of-the-art interpretable methods, GAN-PiWAD
captures the interactions among multi-modal data, offers unbiased estimation of
the total effect of each feature, and models the dynamic total effect of each
feature when its value varies. We select features according to social exchange
theory and evaluate GAN-PiWAD on 4,445 misinformation videos. The proposed
approach outperformed strong benchmarks. Interpretation of GAN-PiWAD indicates
video description, negative video content, and channel credibility are key
features that drive viral transmission of misinformation. This study
contributes to information systems (IS) research with a novel interpretable
deep learning method that generalizes to the study of other human decision
factors. Our findings provide
direct implications for social media platforms and policymakers to design
proactive interventions to identify misinformation, control transmissions, and
manage infodemics.
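The abstract gives no implementation details, but the core interpretability idea of a wide-plus-attention model can be illustrated with a minimal NumPy sketch: a wide (linear) part captures each feature's direct effect, while attention weights over the features keep the deep part's contribution attributable per feature. Feature names and weights below are invented for illustration; the actual GAN-PiWAD architecture, including its GAN component and piecewise effect modeling, is substantially more involved.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical standardized features for one video, grouped by theme
# (names are illustrative, not the paper's actual feature set).
features = {
    "description_len": 0.8,
    "negative_sentiment": 0.6,
    "channel_credibility": -0.4,
}
x = np.array(list(features.values()))

# Wide part: a direct linear effect per raw feature.
w_wide = np.array([0.5, 0.9, -0.7])

# Deep part: one hidden layer followed by attention over features,
# so each feature's deep contribution remains attributable.
W_h = rng.normal(size=(3, 3))
h = np.tanh(W_h @ x)
attn = softmax(h)          # attention weights, sum to 1
deep_score = attn @ x      # attention-weighted feature score

# Predicted transmission score combines both parts; attn is the
# interpretable per-feature importance signal.
score = w_wide @ x + deep_score
print(dict(zip(features, attn.round(3))), round(float(score), 3))
```

In this toy setup, inspecting `attn` plays the role that interpretation of GAN-PiWAD plays in the paper: identifying which features (e.g., video description, negative content, channel credibility) dominate the prediction.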
Related papers
- Intervention strategies for misinformation sharing on social media: A bibliometric analysis [1.8020166013859684]
Inaccurate shared information causes confusion, can adversely affect mental health, and can lead to misinformed decision-making.
This study explores the typology of intervention strategies for addressing misinformation sharing on social media.
It identifies four important clusters: cognition-based, automated-based, information-based, and hybrid-based.
arXiv Detail & Related papers (2024-09-26T08:38:15Z)
- NeuroExplainer: Fine-Grained Attention Decoding to Uncover Cortical Development Patterns of Preterm Infants [73.85768093666582]
We propose an explainable geometric deep network dubbed NeuroExplainer.
NeuroExplainer is used to uncover altered infant cortical development patterns associated with preterm birth.
arXiv Detail & Related papers (2023-01-01T12:48:12Z)
- On Curating Responsible and Representative Healthcare Video Recommendations for Patient Education and Health Literacy: An Augmented Intelligence Approach [5.545277272908999]
One in three U.S. adults use the Internet to diagnose or learn about a health concern.
Health literacy divides can be exacerbated by algorithmic recommendations.
arXiv Detail & Related papers (2022-07-13T01:54:59Z)
- Case Study on Detecting COVID-19 Health-Related Misinformation in Social Media [7.194177427819438]
This paper presents a mechanism to detect COVID-19 health-related misinformation in social media.
We defined misinformation themes and associated keywords, which were incorporated into the detection mechanism using applied machine learning techniques.
Our method shows promising results, classifying health-related misinformation versus true information with up to 78% accuracy.
arXiv Detail & Related papers (2021-06-12T16:26:04Z)
- Defending Democracy: Using Deep Learning to Identify and Prevent Misinformation [0.0]
This study classifies and visualizes the spread of misinformation on a social media network using publicly available Twitter data.
The study further demonstrates the suitability of BERT for providing a scalable model for false information detection.
arXiv Detail & Related papers (2021-06-03T16:34:54Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- Proactive Pseudo-Intervention: Causally Informed Contrastive Learning For Interpretable Vision Models [103.64435911083432]
We present a novel contrastive learning strategy called Proactive Pseudo-Intervention (PPI).
PPI leverages proactive interventions to guard against image features with no causal relevance.
We also devise a novel causally informed salience mapping module to identify key image pixels to intervene, and show it greatly facilitates model interpretability.
arXiv Detail & Related papers (2020-12-06T20:30:26Z)
- MET: Multimodal Perception of Engagement for Telehealth [52.54282887530756]
We present MET, a learning-based algorithm for perceiving a human's level of engagement from videos.
We release a new dataset, MEDICA, for mental health patient engagement detection.
arXiv Detail & Related papers (2020-11-17T15:18:38Z)
- Assessing the Severity of Health States based on Social Media Posts [62.52087340582502]
We propose a multiview learning framework that models both the textual content as well as contextual-information to assess the severity of the user's health state.
The diverse NLU views demonstrate the framework's effectiveness on both tasks, as well as on individual diseases, in assessing a user's health.
arXiv Detail & Related papers (2020-09-21T03:45:14Z)
- Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
- Independent Component Analysis for Trustworthy Cyberspace during High Impact Events: An Application to Covid-19 [4.629100947762816]
Social media has become an important communication channel during high impact events, such as the COVID-19 pandemic.
As misinformation in social media can rapidly spread and create social unrest, curtailing its spread during such events is a significant data challenge.
We propose a data-driven solution that is based on the ICA model, such that knowledge discovery and detection of misinformation are achieved jointly.
arXiv Detail & Related papers (2020-06-01T21:48:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.