Explaining Website Reliability by Visualizing Hyperlink Connectivity
- URL: http://arxiv.org/abs/2210.00160v1
- Date: Sat, 1 Oct 2022 01:39:08 GMT
- Title: Explaining Website Reliability by Visualizing Hyperlink Connectivity
- Authors: Seongmin Lee, Sadia Afroz, Haekyu Park, Zijie J. Wang, Omar Shaikh,
Vibhor Sehgal, Ankit Peshin, Duen Horng Chau
- Abstract summary: MisVis is a web-based interactive visualization tool that helps users assess a website's reliability.
MisVis visualizes the hyperlink connectivity of the website and summarizes key characteristics of the Twitter accounts that mention the site.
A large-scale user study with 139 participants demonstrates that MisVis helps users assess and understand false information on the web.
- Score: 18.233714306827736
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As information on the Internet continues to grow exponentially,
understanding and assessing the reliability of a website is becoming
increasingly important. Misinformation has far-ranging repercussions, from
sowing mistrust in media to undermining democratic elections. While some
research investigates how to alert people to misinformation on the web, much
less research has been conducted on explaining how websites engage in spreading
false information. To fill the research gap, we present MisVis, a web-based
interactive visualization tool that helps users assess a website's reliability
by understanding how it engages in spreading false information on the World
Wide Web. MisVis visualizes the hyperlink connectivity of the website and
summarizes key characteristics of the Twitter accounts that mention the site. A
large-scale user study with 139 participants demonstrates that MisVis helps
users assess and understand false information on the web and that node-link
diagrams can be used to communicate with non-experts. MisVis is
available at the public demo link: https://poloclub.github.io/MisVis.
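The hyperlink-connectivity idea behind MisVis can be illustrated with a minimal sketch. This is not the MisVis implementation; the site names, the adjacency representation, and the in-degree signal below are all illustrative assumptions:

```python
from collections import defaultdict


def build_link_graph(pages: dict[str, list[str]]) -> dict[str, set[str]]:
    """Build a directed hyperlink graph: site -> set of sites it links to.

    Self-links and duplicate links are dropped, since connectivity
    analysis cares about which distinct sites reference each other.
    """
    graph: dict[str, set[str]] = defaultdict(set)
    for src, links in pages.items():
        for dst in links:
            if dst != src:
                graph[src].add(dst)
    return dict(graph)


def in_degree(graph: dict[str, set[str]]) -> dict[str, int]:
    """Count incoming links per site -- a simple connectivity signal
    one might inspect in a node-link diagram."""
    counts: dict[str, int] = defaultdict(int)
    for src, dsts in graph.items():
        counts[src] += 0  # ensure sources with no inbound links appear
        for dst in dsts:
            counts[dst] += 1
    return dict(counts)
```

A graph like this can then be rendered as a node-link diagram, which is the form of communication the user study evaluated with non-experts.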
Related papers
- Finding Fake News Websites in the Wild [0.0860395700487494]
We propose a novel methodology for identifying websites responsible for creating and disseminating misinformation content.
We validate our approach on Twitter by examining various execution modes and contexts.
arXiv Detail & Related papers (2024-07-09T18:00:12Z)
- News and Misinformation Consumption in Europe: A Longitudinal Cross-Country Perspective [49.1574468325115]
This study investigated information consumption in four European countries.
It analyzed three years of Twitter activity from news outlet accounts in France, Germany, Italy, and the UK.
Results indicate that reliable sources dominate the information landscape, although unreliable content is still present across all countries.
arXiv Detail & Related papers (2023-11-09T16:22:10Z)
- MIDDAG: Where Does Our News Go? Investigating Information Diffusion via Community-Level Information Pathways [114.42360191723469]
We present MIDDAG, an intuitive, interactive system that visualizes the information propagation paths on social media triggered by COVID-19-related news articles.
We construct communities among users and develop the propagation forecasting capability, enabling tracing and understanding of how information is disseminated at a higher level.
arXiv Detail & Related papers (2023-10-04T02:08:11Z)
- Online search is more likely to lead students to validate true news than to refute false ones [0.32207415805366035]
This work focuses on understanding how young people perceive and deal with false information.
Our results suggest that online search is more likely to lead students to validate true news than to refute false ones.
This work provides a principled understanding of how young people perceive and distinguish true and false pieces of information.
arXiv Detail & Related papers (2023-03-23T09:43:32Z)
- Adherence to Misinformation on Social Media Through Socio-Cognitive and Group-Based Processes [79.79659145328856]
We argue that when misinformation proliferates, this happens because the social media environment enables adherence to misinformation.
We make the case that polarization and misinformation adherence are closely tied.
arXiv Detail & Related papers (2022-06-30T12:34:24Z)
- Personalized multi-faceted trust modeling to determine trust links in social media and its potential for misinformation management [61.88858330222619]
We present an approach for predicting trust links between peers in social media.
We propose a data-driven multi-faceted trust modeling which incorporates many distinct features for a comprehensive analysis.
Illustrated in a trust-aware item recommendation task, we evaluate the proposed framework in the context of a large Yelp dataset.
arXiv Detail & Related papers (2021-11-11T19:40:51Z)
- Characterizing User Susceptibility to COVID-19 Misinformation on Twitter [40.0762273487125]
This study attempts to answer who constitutes the population vulnerable to online misinformation during the pandemic.
We distinguish different types of users, ranging from social bots to humans with various levels of engagement with COVID-related misinformation.
We then identify users' online features and situational predictors that correlate with their susceptibility to COVID-19 misinformation.
arXiv Detail & Related papers (2021-09-20T13:31:15Z)
- Defending Democracy: Using Deep Learning to Identify and Prevent Misinformation [0.0]
This study classifies and visualizes the spread of misinformation on a social media network using publicly available Twitter data.
The study further demonstrates the suitability of BERT for providing a scalable model for false information detection.
arXiv Detail & Related papers (2021-06-03T16:34:54Z)
- Explainable Patterns: Going from Findings to Insights to Support Data Analytics Democratization [60.18814584837969]
We present Explainable Patterns (ExPatt), a new framework to support lay users in exploring and creating data stories.
ExPatt automatically generates plausible explanations for observed or selected findings using an external (textual) source of information.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
- Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-02-28T18:40:54Z)
- Identifying Disinformation Websites Using Infrastructure Features [11.180267856391362]
We explore a new direction for automated detection of disinformation websites: infrastructure features.
Our hypothesis is that while disinformation websites may be perceptually similar to authentic news websites, there may also be significant non-perceptual differences in the domain registrations, TLS/SSL certificates, and web hosting configurations.
arXiv Detail & Related papers (2020-02-28T18:40:54Z)
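The infrastructure-feature direction can be sketched as a simple feature-extraction step over pre-fetched metadata. The field names, certificate issuers, and thresholds below are hypothetical placeholders, not the features actually used in the paper:

```python
def infrastructure_features(meta: dict) -> dict:
    """Map raw infrastructure metadata to candidate detection features.

    Hypothetical sketch: the input keys (domain_age_days, cert_issuer,
    sites_on_same_ip, whois_privacy) and the thresholds are assumptions
    for illustration only.
    """
    return {
        # newly registered domains are a common disinformation signal
        "young_domain": meta.get("domain_age_days", 0) < 180,
        # free/automated certificates vs. organization-validated ones
        "free_cert": meta.get("cert_issuer") in {"Let's Encrypt", "cPanel, Inc."},
        # many unrelated sites sharing one IP suggests cheap bulk hosting
        "shared_hosting": meta.get("sites_on_same_ip", 1) > 10,
        # WHOIS privacy hides the registrant's identity
        "whois_privacy": bool(meta.get("whois_privacy", False)),
    }
```

A vector like this could feed any standard classifier; the point of the paper's hypothesis is that these non-perceptual signals differ between authentic and disinformation sites even when the pages look alike.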
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.