About Engaging and Governing Strategies: A Thematic Analysis of Dark
Patterns in Social Networking Services
- URL: http://arxiv.org/abs/2303.00476v1
- Date: Wed, 1 Mar 2023 13:03:29 GMT
- Title: About Engaging and Governing Strategies: A Thematic Analysis of Dark
Patterns in Social Networking Services
- Authors: Thomas Mildner, Gian-Luca Savino, Philip R. Doyle, Benjamin R. Cowan,
Rainer Malaka
- Abstract summary: We collected over 16 hours of screen recordings from Facebook's, Instagram's, TikTok's, and Twitter's mobile applications.
We observed which instances occur in SNSs and identified two strategies - engaging and governing.
- Score: 30.817063916361892
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Research in HCI has shown a growing interest in unethical design practices
across numerous domains, often referred to as "dark patterns". There is,
however, a gap in related literature regarding social networking services
(SNSs). In this context, studies emphasise a lack of users' self-determination
regarding control over personal data and time spent on SNSs. We collected over
16 hours of screen recordings from Facebook's, Instagram's, TikTok's, and
Twitter's mobile applications to understand how dark patterns manifest in these
SNSs. For this task, we turned to HCI experts to mitigate the difficulties
non-expert participants have in recognising dark patterns, as noted in prior
studies. Supported by the recordings, two authors of this paper conducted a
thematic analysis based on previously described taxonomies, manually
classifying the recorded material and arriving at two key findings: we
observed which instances occur in SNSs and identified two strategies -
engaging and governing - including five previously undescribed dark patterns.
Related papers
- Adversarial Training: A Survey [130.89534734092388]
Adversarial training (AT) refers to integrating adversarial examples into the training process.
Recent studies have demonstrated the effectiveness of AT in improving the robustness of deep neural networks against diverse adversarial attacks.
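The adversarial-training loop described above can be sketched in a few lines. This is a minimal illustration assuming a toy logistic-regression model and an FGSM-style one-step attack; the model, data, and function names are illustrative assumptions, not taken from the surveyed paper.

```python
# Minimal adversarial training (AT) sketch: at each step, perturb the input
# with a one-step FGSM attack, then update the model on the perturbed input.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(w, x, y, eps):
    """One-step FGSM: move x in the direction that increases the loss."""
    # For loss -log(sigmoid(y * w.x)), the input gradient is -y*sigmoid(-y*w.x)*w.
    grad_x = -y * sigmoid(-y * (x @ w)) * w
    return x + eps * np.sign(grad_x)

def adversarial_train(X, y, eps=0.1, lr=0.5, epochs=200):
    """Gradient descent on adversarially perturbed inputs (the AT objective)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            xi_adv = fgsm_perturb(w, xi, yi, eps)  # inner maximization (attack)
            # Gradient of -log(sigmoid(yi * w.xi_adv)) with respect to w.
            grad_w = -yi * sigmoid(-yi * (xi_adv @ w)) * xi_adv
            w -= lr * grad_w                        # outer minimization (defense)
    return w

# Two well-separated clusters with labels -1/+1 as toy training data.
X = np.vstack([rng.normal(-2, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)
w = adversarial_train(X, y)
acc = np.mean(np.sign(X @ w) == y)
```

Training on the attacked inputs rather than the clean ones is what distinguishes AT from standard training; in practice the attack step is usually iterative (e.g. PGD) rather than the single FGSM step used here.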
arXiv Detail & Related papers (2024-10-19T08:57:35Z)
- Explaining Deep Neural Networks by Leveraging Intrinsic Methods [0.9790236766474201]
This thesis contributes to the field of eXplainable AI, focusing on enhancing the interpretability of deep neural networks.
The core contributions lie in introducing novel techniques aimed at making these networks more interpretable by leveraging an analysis of their inner workings.
The research also investigates neurons within trained deep neural networks, shedding light on overlooked phenomena related to their activation values.
arXiv Detail & Related papers (2024-07-17T01:20:17Z)
- Privacy Leakage on DNNs: A Survey of Model Inversion Attacks and Defenses [40.77270226912783]
Model Inversion (MI) attacks disclose private information about the training dataset by abusing access to the trained models.
Despite the rapid advances in the field, we lack a comprehensive and systematic overview of existing MI attacks and defenses.
We elaborately analyze and compare numerous recent attacks and defenses on Deep Neural Networks (DNNs) across multiple modalities and learning tasks.
arXiv Detail & Related papers (2024-02-06T14:06:23Z)
- Temporal Analysis of Dark Patterns: A Case Study of a User's Odyssey to Conquer Prime Membership Cancellation through the "Iliad Flow" [22.69068051865837]
We present a case study of Amazon Prime's "Iliad Flow" to illustrate the interplay of dark patterns across a user journey.
We use this case study to lay the groundwork for a methodology of Temporal Analysis of Dark Patterns (TADP).
arXiv Detail & Related papers (2023-09-18T10:12:52Z)
- Ontologies in Digital Twins: A Systematic Literature Review [4.338144682969141]
Digital Twins (DT) facilitate monitoring and reasoning processes in cyber-physical systems.
Recent studies address the relevance of knowledge graphs in the context of DTs.
There is no comprehensive analysis of how semantic technologies are utilized within DTs.
arXiv Detail & Related papers (2023-08-29T09:52:21Z)
- Evidential Temporal-aware Graph-based Social Event Detection via Dempster-Shafer Theory [76.4580340399321]
We propose ETGNN, a novel Evidential Temporal-aware Graph Neural Network.
We construct view-specific graphs whose nodes are the texts and edges are determined by several types of shared elements respectively.
Considering the view-specific uncertainty, the representations of all views are converted into mass functions through evidential deep learning (EDL) neural networks.
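Once each view's representation has been converted into a mass function, Dempster-Shafer theory fuses them with Dempster's rule of combination. The following is a hedged sketch of that fusion step only; the frame of discernment and the mass values are toy assumptions, not values from the ETGNN paper.

```python
# Dempster's rule of combination: fuse two mass functions defined over
# frozenset focal elements, discarding conflicting mass and renormalising.

def dempster_combine(m1, m2):
    """Combine two Dempster-Shafer mass functions m1, m2."""
    combined = {}
    conflict = 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:
                conflict += mb * mc  # mass falling on the empty set (K)
    # Normalise the remaining mass by 1 - K.
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

# Toy example: two sources of evidence over the frame {event, not_event}.
A, B = frozenset({"event"}), frozenset({"not_event"})
theta = A | B                       # the whole frame (total ignorance)
m1 = {A: 0.6, theta: 0.4}           # source 1: fairly sure it is an event
m2 = {A: 0.5, B: 0.3, theta: 0.2}   # source 2: mixed evidence
m12 = dempster_combine(m1, m2)
```

Because both sources lean toward `A`, the combined mass on `A` exceeds either source's individual belief, while the conflicting products (e.g. `m1[A] * m2[B]`) are removed and the remainder renormalised.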
arXiv Detail & Related papers (2022-05-24T16:22:40Z)
- Threat of Adversarial Attacks on Deep Learning in Computer Vision: Survey II [86.51135909513047]
Deep Learning is vulnerable to adversarial attacks that can manipulate its predictions.
This article reviews the contributions made by the computer vision community in adversarial attacks on deep learning.
It provides definitions of technical terminologies for non-experts in this domain.
arXiv Detail & Related papers (2021-08-01T08:54:47Z)
- Digital Twins: State of the Art Theory and Practice, Challenges, and Open Research Questions [62.67593386796497]
This work explores the various DT features and current approaches, as well as the shortcomings and reasons behind the delay in the implementation and adoption of digital twins.
The major reasons for this delay are the lack of a universal reference framework, domain dependence, security concerns of shared data, reliance of digital twin on other technologies, and lack of quantitative metrics.
arXiv Detail & Related papers (2020-11-02T19:08:49Z)
- Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies [73.39668293190019]
Deep neural networks can be easily fooled by small perturbations on the input.
Graph Neural Networks (GNNs) have been demonstrated to inherit this vulnerability.
In this survey, we categorize existing attacks and defenses, and review the corresponding state-of-the-art methods.
arXiv Detail & Related papers (2020-03-02T04:32:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.