A Comparative Study of How People With and Without ADHD Recognise and Avoid Dark Patterns on Social Media
- URL: http://arxiv.org/abs/2503.05263v1
- Date: Fri, 07 Mar 2025 09:23:45 GMT
- Title: A Comparative Study of How People With and Without ADHD Recognise and Avoid Dark Patterns on Social Media
- Authors: Thomas Mildner, Daniel Fidel, Evropi Stefanidi, Pawel W. Wozniak, Rainer Malaka, Jasmin Niess
- Abstract summary: We investigate whether people with ADHD recognise and avoid dark patterns on social networking sites. We find that individuals with ADHD were able to avoid specific dark patterns more often. Our results advance previous work by understanding dark patterns in a realistic environment and offer insights into their effect on vulnerable populations.
- Score: 40.22828751850003
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dark patterns are deceptive strategies that recent work in human-computer interaction (HCI) has captured throughout digital domains, including social networking sites (SNSs). While research has shown that people generally have difficulty recognising dark patterns, few studies consider vulnerable populations and their experience in this regard, including people with attention deficit hyperactivity disorder (ADHD), who may be especially susceptible to attention-grabbing tricks. Based on an interactive web study with 135 participants, we investigate SNS users' ability to recognise and avoid dark patterns by comparing results from participants with and without ADHD. In line with prior work, we observed overall low recognition of dark patterns, with no significant differences between the two groups. Yet, participants with ADHD were able to avoid specific dark patterns more often. Our results advance previous work by understanding dark patterns in a realistic environment and offer insights into their effect on vulnerable populations.
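The abstract describes comparing recognition and avoidance rates between participants with and without ADHD. The paper does not state which statistical test it used, so as a minimal sketch, such a between-group comparison of rates could be run with a two-proportion z-test; all counts below are invented for illustration (only the total of 135 participants comes from the abstract).

```python
# Hypothetical sketch: comparing dark-pattern recognition rates between two
# participant groups with a two-proportion z-test. The counts are invented;
# the paper does not report its exact test or per-group figures.
import math


def two_proportion_z_test(x1, n1, x2, n2):
    """Return (z, two-sided p-value) for H0: the two group proportions are equal."""
    p1, p2 = x1 / n1, x2 / n2
    # Pooled proportion under the null hypothesis.
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF (via math.erf).
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p


# Invented example: 20 of 63 ADHD participants vs. 22 of 72 non-ADHD
# participants recognised a given dark pattern (63 + 72 = 135 total).
z, p = two_proportion_z_test(20, 63, 22, 72)
print(f"z = {z:.3f}, p = {p:.3f}")
```

With counts this close, |z| stays well below 1.96 and p well above 0.05, matching the abstract's "no significant differences" finding for recognition.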
Related papers
- AI-Based Screening for Depression and Social Anxiety Through Eye Tracking: An Exploratory Study [1.7249361224827533]
Reduced well-being is often linked to depression or anxiety disorders.
This paper introduces a novel approach to AI-assisted screening of affective disorders by analysing visual attention scan paths.
arXiv Detail & Related papers (2025-03-22T02:53:02Z)
- Visual Stereotypes of Autism Spectrum in DALL-E, Stable Diffusion, SDXL, and Midjourney [0.0]
Our study investigated how text-to-image models unintentionally perpetuate non-rational beliefs regarding autism.
The research protocol involved generating images based on 53 prompts aimed at visualizing concrete objects and abstract concepts related to autism.
arXiv Detail & Related papers (2024-07-23T08:48:09Z)
- Modeling User Preferences via Brain-Computer Interfacing [54.3727087164445]
We use Brain-Computer Interfacing technology to infer users' preferences, their attentional correlates towards visual content, and their associations with affective experience.
We link these to relevant applications, such as information retrieval, personalized steering of generative models, and crowdsourcing population estimates of affective experiences.
arXiv Detail & Related papers (2024-05-15T20:41:46Z)
- Investigating Human-Identifiable Features Hidden in Adversarial Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z)
- Deep Learning-based Eye-Tracking Analysis for Diagnosis of Alzheimer's Disease Using 3D Comprehensive Visual Stimuli [8.987083026829517]
Alzheimer's Disease (AD) causes a continuous decline in memory, thinking, and judgment.
In this paper, we focus on exploiting deep learning techniques to diagnose AD based on eye-tracking behaviors.
Visual attention, as typical eye-tracking behavior, is of great clinical value to detect cognitive abnormalities in AD patients.
We propose a multi-layered comparison convolution neural network (MC-CNN) to distinguish the visual attention differences between AD patients and healthy controls.
arXiv Detail & Related papers (2023-03-13T05:33:28Z)
- About Engaging and Governing Strategies: A Thematic Analysis of Dark Patterns in Social Networking Services [30.817063916361892]
We collected over 16 hours of screen recordings from Facebook's, Instagram's, TikTok's, and Twitter's mobile applications.
We observed which instances occur in SNSs and identified two strategies - engaging and governing.
arXiv Detail & Related papers (2023-03-01T13:03:29Z)
- Deep Intra-Image Contrastive Learning for Weakly Supervised One-Step Person Search [98.2559247611821]
We present a novel deep intra-image contrastive learning using a Siamese network.
Our method achieves a state-of-the-art performance among weakly supervised one-step person search approaches.
arXiv Detail & Related papers (2023-02-09T12:45:20Z)
- Detection of ADHD based on Eye Movements during Natural Viewing [3.1890959219836574]
ADHD is a neurodevelopmental disorder that is highly prevalent and requires clinical specialists to diagnose.
We develop an end-to-end deep learning-based sequence model which we pre-train on a related task.
We find that the method is in fact able to detect ADHD and outperforms relevant baselines.
arXiv Detail & Related papers (2022-07-04T12:56:04Z)
- Towards Unbiased Visual Emotion Recognition via Causal Intervention [63.74095927462]
We propose a novel Emotion Recognition Network (IERN) to alleviate the negative effects brought by the dataset bias.
A series of designed tests validate the effectiveness of IERN, and experiments on three emotion benchmarks demonstrate that IERN outperforms other state-of-the-art approaches.
arXiv Detail & Related papers (2021-07-26T10:40:59Z)
- Onfocus Detection: Identifying Individual-Camera Eye Contact from Unconstrained Images [81.64699115587167]
Onfocus detection aims at identifying whether the focus of the individual captured by a camera is on the camera or not.
We build a large-scale onfocus detection dataset, named OnFocus Detection In the Wild (OFDIW).
We propose a novel end-to-end deep model, i.e., the eye-context interaction inferring network (ECIIN) for onfocus detection.
arXiv Detail & Related papers (2021-03-29T03:29:09Z)