Visual Selective Attention System to Intervene User Attention in Sharing
COVID-19 Misinformation
- URL: http://arxiv.org/abs/2110.13489v2
- Date: Tue, 9 Nov 2021 23:01:50 GMT
- Title: Visual Selective Attention System to Intervene User Attention in Sharing
COVID-19 Misinformation
- Authors: Zaid Amin, Nazlena Mohamad Ali, Alan F. Smeaton
- Abstract summary: This study aims to intervene in the user's attention with a visual selective attention approach.
The results are expected to be the basis for developing social media applications to combat the negative impact of the infodemic COVID-19 misinformation.
- Score: 2.7393821783237184
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Information sharing on social media must be accompanied by attentive
behavior so that, in a distorted digital environment, users are not rushed or
distracted when deciding to share information. The spread of misinformation,
especially misinformation related to COVID-19, can divide society and amplify
the harmful effects of falsehood. It can also cause feelings of fear, health
anxiety, and confusion about the treatment of COVID-19. Although much research
has focused on understanding human judgment from a psychological standpoint,
few studies have addressed the essential question of how technology can
intervene in users' attention during the screening phase of information
sharing. This research aims to intervene in the user's attention with a visual
selective attention approach. This study uses a quantitative method through
Studies 1 and 2, with pre- and post-intervention experiments. In Study 1, we
intervened in user decisions and attention by presenting ten items of
information and misinformation as stimuli through the Visual Selective
Attention System (VSAS) tool. In Study 2, we identified associations in users'
tendencies when evaluating information using the Implicit Association Test
(IAT). The results showed that users' attention and decision behavior improved
significantly after using the VSAS. The IAT results show a change in users'
associations: after the intervention with the VSAS, users tended not to share
misinformation about COVID-19. The results are expected to form the basis for
developing social media applications to combat the negative impact of the
COVID-19 misinformation infodemic.
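The abstract does not detail how the IAT in Study 2 was scored. As a hedged illustration only, IAT data of this kind is commonly scored with the improved D-score (the mean response-latency difference between the incompatible and compatible blocks, divided by the pooled standard deviation of all trials). The sketch below assumes that standard procedure; the function name and the sample latencies are hypothetical, not taken from the paper.

```python
# Minimal sketch of a standard IAT D-score, NOT the paper's exact procedure:
# D = (mean latency, incompatible block - mean latency, compatible block)
#     / pooled standard deviation over all trials from both blocks.
from statistics import mean, stdev


def iat_d_score(compatible_ms, incompatible_ms):
    """Return a D-score from two lists of response latencies in milliseconds.

    Positive values indicate slower responses in the incompatible block,
    i.e. a stronger association with the compatible pairing.
    """
    pooled_sd = stdev(compatible_ms + incompatible_ms)  # SD across both blocks
    return (mean(incompatible_ms) - mean(compatible_ms)) / pooled_sd
```

A shift toward not sharing misinformation after the VSAS intervention would appear as a change in such a score between the pre- and post-intervention IAT sessions.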
Related papers
- MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z) - Modeling User Preferences via Brain-Computer Interfacing [54.3727087164445]
We use Brain-Computer Interfacing technology to infer users' preferences, their attentional correlates towards visual content, and their associations with affective experience.
We link these to relevant applications, such as information retrieval, personalized steering of generative models, and crowdsourcing population estimates of affective experiences.
arXiv Detail & Related papers (2024-05-15T20:41:46Z) - Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach [61.04606493712002]
Susceptibility to misinformation describes the degree of belief in unverifiable claims, which is not directly observable.
Existing susceptibility studies heavily rely on self-reported beliefs.
We propose a computational approach to model users' latent susceptibility levels.
arXiv Detail & Related papers (2023-11-16T07:22:56Z) - Understanding the Humans Behind Online Misinformation: An Observational
Study Through the Lens of the COVID-19 Pandemic [12.873747057824833]
We conduct a large-scale observational study analyzing over 32 million COVID-19 tweets and 16 million historical timeline tweets.
We focus on understanding the behavior and psychology of users disseminating misinformation during COVID-19, and its relationship with their historical inclination towards sharing misinformation in non-COVID domains before the pandemic.
arXiv Detail & Related papers (2023-10-12T16:42:53Z) - A Comprehensive Picture of Factors Affecting User Willingness to Use
Mobile Health Applications [62.60524178293434]
The aim of this paper is to investigate the factors that influence user acceptance of mHealth apps.
Users' digital literacy has the strongest impact on their willingness to use mHealth apps, followed by their online habit of sharing personal information.
Users' demographic background, such as their country of residence, age, ethnicity, and education, has a significant moderating effect.
arXiv Detail & Related papers (2023-05-10T08:11:21Z) - Adherence to Misinformation on Social Media Through Socio-Cognitive and
Group-Based Processes [79.79659145328856]
We argue that when misinformation proliferates, this happens because the social media environment enables adherence to misinformation.
We make the case that polarization and misinformation adherence are closely tied.
arXiv Detail & Related papers (2022-06-30T12:34:24Z) - The Effects of Interactive AI Design on User Behavior: An Eye-tracking
Study of Fact-checking COVID-19 Claims [12.00747200817161]
We conducted a lab-based eye-tracking study to investigate how the interactivity of an AI-powered fact-checking system affects user interactions.
We found that the ability to interactively manipulate the AI system's prediction parameters affected users' dwell times and eye fixations on areas of interest (AOIs), but not their mental workload.
arXiv Detail & Related papers (2022-02-17T21:08:57Z) - Characterizing User Susceptibility to COVID-19 Misinformation on Twitter [40.0762273487125]
This study attempts to answer who constitutes the population vulnerable to online misinformation during the pandemic.
We distinguish different types of users, ranging from social bots to humans with various levels of engagement with COVID-related misinformation.
We then identify users' online features and situational predictors that correlate with their susceptibility to COVID-19 misinformation.
arXiv Detail & Related papers (2021-09-20T13:31:15Z) - Categorising Fine-to-Coarse Grained Misinformation: An Empirical Study
of COVID-19 Infodemic [6.137022734902771]
We introduce a fine-grained annotated misinformation tweets dataset including social behaviours annotation.
The dataset not only allows social behaviour analysis but is also suitable for both evidence-based and non-evidence-based misinformation classification tasks.
arXiv Detail & Related papers (2021-06-22T12:17:53Z) - Case Study on Detecting COVID-19 Health-Related Misinformation in Social
Media [7.194177427819438]
This paper presents a mechanism to detect COVID-19 health-related misinformation in social media.
We defined misinformation themes and associated keywords, which were incorporated into the misinformation detection mechanism using applied machine learning techniques.
Our method shows promising results, with up to 78% accuracy in classifying health-related misinformation versus true information.
arXiv Detail & Related papers (2021-06-12T16:26:04Z) - Assessing the Severity of Health States based on Social Media Posts [62.52087340582502]
We propose a multiview learning framework that models both the textual content as well as contextual-information to assess the severity of the user's health state.
The diverse NLU views demonstrate the framework's effectiveness on both tasks, as well as on individual diseases, in assessing a user's health state.
arXiv Detail & Related papers (2020-09-21T03:45:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.