AidUI: Toward Automated Recognition of Dark Patterns in User Interfaces
- URL: http://arxiv.org/abs/2303.06782v1
- Date: Sun, 12 Mar 2023 23:46:04 GMT
- Title: AidUI: Toward Automated Recognition of Dark Patterns in User Interfaces
- Authors: SM Hasan Mansur and Sabiha Salma and Damilola Awofisayo and Kevin Moran
- Abstract summary: UI dark patterns can lead end-users toward (unknowingly) taking actions that they may not have intended.
We introduce AidUI, a novel approach that uses computer vision and natural language processing techniques to recognize ten unique UI dark patterns.
AidUI achieves an overall precision of 0.66, recall of 0.67, and F1-score of 0.65 in detecting dark pattern instances, and is able to localize detected patterns with an IoU score of 0.84.
- Score: 6.922187804798161
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Past studies have illustrated the prevalence of UI dark patterns, or user
interfaces that can lead end-users toward (unknowingly) taking actions that
they may not have intended. Such deceptive UI designs can result in adverse
effects on end users, such as oversharing personal information or financial
loss. While significant research progress has been made toward the development
of dark pattern taxonomies, developers and users currently lack guidance to
help recognize, avoid, and navigate these often subtle design motifs. However,
automated recognition of dark patterns is a challenging task, as the
instantiation of a single type of pattern can take many forms, leading to
significant variability.
In this paper, we take the first step toward understanding the extent to
which common UI dark patterns can be automatically recognized in modern
software applications. To do this, we introduce AidUI, a novel automated
approach that uses computer vision and natural language processing techniques
to recognize a set of visual and textual cues in application screenshots that
signify the presence of ten unique UI dark patterns, allowing for their
detection, classification, and localization. To evaluate our approach, we have
constructed ContextDP, the current largest dataset of fully-localized UI dark
patterns that spans 175 mobile and 83 web UI screenshots containing 301 dark
pattern instances. The results of our evaluation illustrate that AidUI
achieves an overall precision of 0.66, recall of 0.67, and F1-score of 0.65 in
detecting dark pattern instances, reports few false positives, and is able to
localize detected patterns with an IoU score of ~0.84. Furthermore, a
significant subset of our studied dark patterns can be detected quite reliably
(F1 score of over 0.82), and future research directions may allow for improved
detection of additional patterns.
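For reference, the reported numbers follow standard object-detection metrics: precision, recall, and F1 summarize how predicted dark pattern instances match the labeled ones, while IoU (intersection over union) measures how well a predicted bounding region overlaps the labeled region. A minimal sketch of these metrics (illustrative only, not AidUI's evaluation code):

```python
def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def precision_recall_f1(tp, fp, fn):
    """Detection metrics from true-positive, false-positive, false-negative counts."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```

An IoU of ~0.84 thus means detected regions overlap the ground-truth regions far more than they miss them, even when classification of the pattern type is imperfect.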
Related papers
- Detecting Deceptive Dark Patterns in E-commerce Platforms [0.0]
Dark patterns are deceptive user interfaces employed by e-commerce websites to manipulate users' behavior in a way that benefits the website, often unethically.
Existing solutions include UIGuard, which uses computer vision and natural language processing, and approaches that categorize dark patterns based on detectability or utilize machine learning models trained on datasets.
We propose combining web scraping techniques with fine-tuned BERT language models and generative capabilities to identify dark patterns, including outliers.
arXiv Detail & Related papers (2024-05-27T16:32:40Z)
- Why is the User Interface a Dark Pattern?: Explainable Auto-Detection and its Analysis [1.4474137122906163]
Dark patterns are deceptive user interface designs for online services that make users behave in unintended ways.
We study interpretable dark pattern auto-detection, that is, why a particular user interface is detected as having dark patterns.
Our findings may prevent users from being manipulated by dark patterns, and aid in the construction of more equitable internet services.
arXiv Detail & Related papers (2023-12-30T03:53:58Z)
- Token-Level Adversarial Prompt Detection Based on Perplexity Measures and Contextual Information [67.78183175605761]
Large Language Models are susceptible to adversarial prompt attacks.
This vulnerability underscores a significant concern regarding the robustness and reliability of LLMs.
We introduce a novel approach to detecting adversarial prompts at a token level.
arXiv Detail & Related papers (2023-11-20T03:17:21Z)
- Towards General Visual-Linguistic Face Forgery Detection [95.73987327101143]
Deepfakes are realistic face manipulations that can pose serious threats to security, privacy, and trust.
Existing methods mostly treat this task as binary classification, which uses digital labels or mask signals to train the detection model.
We propose a novel paradigm named Visual-Linguistic Face Forgery Detection (VLFFD), which uses fine-grained sentence-level prompts as the annotation.
arXiv Detail & Related papers (2023-07-31T10:22:33Z)
- Dark patterns in e-commerce: a dataset and its baseline evaluations [0.14680035572775535]
We constructed a dataset for dark pattern detection and evaluated it with state-of-the-art machine learning methods.
As a result of 5-fold cross-validation, we achieved the highest accuracy of 0.975 with RoBERTa.
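The 5-fold protocol behind that accuracy figure holds out one fifth of the labeled examples per round, trains on the rest, and averages accuracy across the five rounds; in the paper the classifier is a fine-tuned RoBERTa, for which the placeholder `train_fn` below stands in. A minimal sketch of the protocol itself (hypothetical helper names, not the paper's code):

```python
def kfold_splits(n_samples, k=5):
    """Yield (train_idx, test_idx) index lists for k-fold cross-validation."""
    idx = list(range(n_samples))
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = idx[start:start + size]
        train = idx[:start] + idx[start + size:]
        yield train, test
        start += size

def cross_validated_accuracy(texts, labels, train_fn, k=5):
    """Average accuracy over k folds; train_fn(texts, labels) returns a predict function."""
    scores = []
    for train, test in kfold_splits(len(texts), k):
        predict = train_fn([texts[i] for i in train], [labels[i] for i in train])
        correct = sum(predict(texts[i]) == labels[i] for i in test)
        scores.append(correct / len(test))
    return sum(scores) / len(scores)
```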
arXiv Detail & Related papers (2022-11-12T01:53:49Z)
- Rules Of Engagement: Levelling Up To Combat Unethical CUI Design [23.01296770233131]
We propose a simplified methodology to assess interfaces based on five dimensions taken from prior research on so-called dark patterns.
Our approach offers its users a numeric score representing the manipulative nature of evaluated interfaces.
arXiv Detail & Related papers (2022-07-19T14:02:24Z)
- Automated detection of dark patterns in cookie banners: how to do it poorly and why it is hard to do it any other way [7.2834950390171205]
A dataset of cookie banners from 300 news websites was used to train a prediction model that detects dark patterns in cookie banners.
The accuracy of the trained model is promising, but leaves a lot of room for improvement.
We provide an in-depth analysis of the interdisciplinary challenges that automated dark pattern detection poses to artificial intelligence.
arXiv Detail & Related papers (2022-04-21T12:10:27Z)
- Exploiting Multi-Object Relationships for Detecting Adversarial Attacks in Complex Scenes [51.65308857232767]
Vision systems that deploy Deep Neural Networks (DNNs) are known to be vulnerable to adversarial examples.
Recent research has shown that checking the intrinsic consistencies in the input data is a promising way to detect adversarial attacks.
We develop a novel approach to perform context consistency checks using language models.
arXiv Detail & Related papers (2021-08-19T00:52:10Z)
- User-Guided Domain Adaptation for Rapid Annotation from User Interactions: A Study on Pathological Liver Segmentation [49.96706092808873]
Mask-based annotation of medical images, especially for 3D data, is a bottleneck in developing reliable machine learning models.
We propose the user-guided domain adaptation (UGDA) framework, which uses prediction-based adversarial domain adaptation (PADA) to model the combined distribution of UIs and mask predictions.
We show that UGDA can retain state-of-the-art performance even when seeing only a fraction of the available UIs.
arXiv Detail & Related papers (2020-09-05T04:24:58Z)
- Scalable Backdoor Detection in Neural Networks [61.39635364047679]
Deep learning models are vulnerable to Trojan attacks, where an attacker can install a backdoor during training time to make the resultant model misidentify samples contaminated with a small trigger patch.
We propose a novel trigger reverse-engineering based approach whose computational complexity does not scale with the number of labels, and is based on a measure that is both interpretable and universal across different network and patch types.
In experiments, we observe that our method achieves a perfect score in separating Trojaned models from pure models, an improvement over the current state-of-the-art method.
arXiv Detail & Related papers (2020-06-10T04:12:53Z)
- Adversarial Attack on Community Detection by Hiding Individuals [68.76889102470203]
We focus on black-box attacks and aim to hide targeted individuals from the detection of deep graph community detection models.
We propose an iterative learning framework that takes turns to update two modules: one working as the constrained graph generator and the other as the surrogate community detection model.
arXiv Detail & Related papers (2020-01-22T09:50:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.