Why is the User Interface a Dark Pattern? : Explainable Auto-Detection
and its Analysis
- URL: http://arxiv.org/abs/2401.04119v1
- Date: Sat, 30 Dec 2023 03:53:58 GMT
- Title: Why is the User Interface a Dark Pattern? : Explainable Auto-Detection
and its Analysis
- Authors: Yuki Yada, Tsuneo Matsumoto, Fuyuko Kido, Hayato Yamana
- Abstract summary: Dark patterns are deceptive user interface designs for online services that make users behave in unintended ways.
We study interpretable dark pattern auto-detection, that is, explaining why a particular user interface is detected as having dark patterns.
Our findings may help prevent users from being manipulated by dark patterns and aid in the construction of more equitable internet services.
- Score: 1.4474137122906163
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dark patterns are deceptive user interface designs for online services that make users behave in unintended ways. Dark patterns can harm users, causing privacy invasion, financial loss, and emotional distress. These issues have been the subject of considerable debate in recent years. In this paper, we study interpretable dark pattern auto-detection, that is, explaining why a particular user interface is detected as having dark patterns. First, we trained a model based on BERT, a transformer-based pre-trained language model, on a text-based dataset for the automatic detection of dark patterns in e-commerce. Then, we applied post-hoc explanation techniques, including local interpretable model-agnostic explanations (LIME) and Shapley additive explanations (SHAP), to the trained model, revealing which terms influence each prediction of a dark pattern. In addition, we extracted and analyzed the terms that influence those predictions. Our findings may help prevent users from being manipulated by dark patterns and aid in the construction of more equitable internet services. Our code is available at https://github.com/yamanalab/why-darkpattern.
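To make the approach concrete, here is a minimal sketch of the explanation step, assuming a binary benign/dark-pattern classifier; the untuned bert-base-uncased checkpoint and the example sentence are placeholders for the paper's fine-tuned model and data, not the released artifacts.

```python
# Minimal sketch, not the authors' exact pipeline: explain one prediction of
# a BERT-based classifier with LIME. "bert-base-uncased" is an untuned
# placeholder; the paper fine-tunes on a dark-pattern dataset first.
import torch
from lime.lime_text import LimeTextExplainer
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)
model.eval()

def predict_proba(texts):
    # LIME expects a function from a list of strings to class probabilities.
    enc = tokenizer(list(texts), padding=True, truncation=True,
                    return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits
    return torch.softmax(logits, dim=-1).numpy()

explainer = LimeTextExplainer(class_names=["benign", "dark pattern"])
explanation = explainer.explain_instance(
    "Hurry! Only 2 left in stock.",   # hypothetical e-commerce string
    predict_proba, num_features=5)
print(explanation.as_list())          # (term, weight) pairs for this input
```

SHAP can be wrapped around the same predict function in much the same way, yielding per-token attributions in place of LIME's local surrogate weights.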
Related papers
- Context-Parametric Inversion: Why Instruction Finetuning May Not Actually Improve Context Reliance [68.56701216210617]
In principle, one would expect models to adapt to the user context better after instruction finetuning.
We observe a surprising failure mode: during instruction tuning, the context reliance under knowledge conflicts initially increases as expected, but then gradually decreases.
arXiv Detail & Related papers (2024-10-14T17:57:09Z)
- Detecting Deceptive Dark Patterns in E-commerce Platforms [0.0]
Dark patterns are deceptive user interfaces employed by e-commerce websites to manipulate users' behavior in a way that benefits the website, often unethically.
Existing solutions include UIGuard, which uses computer vision and natural language processing, and approaches that categorize dark patterns based on detectability or utilize machine learning models trained on datasets.
We propose combining web scraping techniques with fine-tuned BERT language models and generative capabilities to identify dark patterns, including outliers.
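A hedged sketch of that scrape-then-classify idea follows; the URL, the tag selection, and the untuned bert-base-uncased stand-in are illustrative assumptions, not the paper's crawler or fine-tuned checkpoint.

```python
# Illustrative only: scrape short UI strings and score each with a text
# classifier. A fine-tuned dark-pattern checkpoint would replace the
# untuned "bert-base-uncased" stand-in used here.
import requests
from bs4 import BeautifulSoup
from transformers import pipeline

clf = pipeline("text-classification", model="bert-base-uncased")

html = requests.get("https://example.com/product", timeout=10).text
soup = BeautifulSoup(html, "html.parser")
# Buttons, links, and spans tend to carry the short nudging copy
# ("Only 2 left!", "Hurry, sale ends soon") worth scoring.
segments = [t.get_text(strip=True)
            for t in soup.find_all(["button", "a", "span"])
            if t.get_text(strip=True)]
for segment, pred in zip(segments, clf(segments)):
    print(f"{pred['label']:>8} {pred['score']:.2f} {segment[:60]}")
```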
arXiv Detail & Related papers (2024-05-27T16:32:40Z)
- Temporal Analysis of Dark Patterns: A Case Study of a User's Odyssey to Conquer Prime Membership Cancellation through the "Iliad Flow" [22.69068051865837]
We present a case study of Amazon Prime's "Iliad Flow" to illustrate the interplay of dark patterns across a user journey.
We use this case study to lay the groundwork for a methodology of Temporal Analysis of Dark Patterns (TADP).
arXiv Detail & Related papers (2023-09-18T10:12:52Z)
- Are aligned neural networks adversarially aligned? [93.91072860401856]
Adversarial users can construct inputs that circumvent attempts at alignment.
We show that existing NLP-based optimization attacks are insufficiently powerful to reliably attack aligned text models.
We conjecture that improved NLP attacks may eventually demonstrate the same level of adversarial control over text-only models.
arXiv Detail & Related papers (2023-06-26T17:18:44Z)
- Mask and Restore: Blind Backdoor Defense at Test Time with Masked Autoencoder [57.739693628523]
We propose a framework for blind backdoor defense with Masked AutoEncoder (BDMAE).
BDMAE detects possible triggers in the token space using image structural similarity and label consistency between the test image and MAE restorations.
Our approach is blind to the model architecture, trigger patterns, and image benignity.
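The consistency check can be illustrated with a deliberately crude stand-in: below, a median filter plays the role of the Masked Autoencoder's restoration, and regions where the test image disagrees structurally with its restoration are flagged as candidate triggers. The synthetic image, the filter, and the 0.5 threshold are all assumptions for illustration, not BDMAE itself.

```python
# Toy illustration of "compare the image with its restoration": a median
# filter stands in for the MAE, and low local SSIM marks a candidate
# trigger region. Not BDMAE itself; the threshold is arbitrary.
import numpy as np
from scipy.ndimage import median_filter
from skimage.metrics import structural_similarity

# Smooth synthetic image with a high-frequency "trigger" patch pasted in.
image = np.tile(np.linspace(0.0, 1.0, 64, dtype=np.float32), (64, 1))
image[4:12, 4:12] = (np.indices((8, 8)).sum(axis=0) % 2).astype(np.float32)

restored = median_filter(image, size=5)    # stand-in for MAE restoration
_, ssim_map = structural_similarity(image, restored, data_range=1.0, full=True)

ys, xs = np.where(ssim_map < 0.5)          # low similarity -> suspicious
if ys.size:
    print(f"candidate trigger near rows {ys.min()}-{ys.max()}, "
          f"cols {xs.min()}-{xs.max()}")
```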
arXiv Detail & Related papers (2023-03-27T19:23:33Z)
- Dark patterns in e-commerce: a dataset and its baseline evaluations [0.14680035572775535]
We constructed a dataset for dark pattern detection and provide baseline evaluations with state-of-the-art machine learning methods.
As a result of 5-fold cross-validation, we achieved the highest accuracy of 0.975 with RoBERTa.
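For orientation, the 5-fold protocol looks like the sketch below; the toy corpus and the TF-IDF plus logistic-regression classifier are stand-ins for the paper's dataset and fine-tuned RoBERTa, so the printed score demonstrates the procedure, not the reported result.

```python
# Sketch of 5-fold cross-validation. The toy corpus and the TF-IDF +
# logistic-regression model are stand-ins for the paper's dataset and
# RoBERTa; only the evaluation procedure is the point here.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline

texts = np.array(["Only 3 left in stock!", "Free shipping over $50",
                  "9 people are viewing this item", "Add to cart"] * 10)
labels = np.array([1, 0, 1, 0] * 10)       # 1 = dark pattern (toy labels)

scores = []
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in skf.split(texts, labels):
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit(texts[train_idx], labels[train_idx])
    scores.append(accuracy_score(labels[test_idx],
                                 clf.predict(texts[test_idx])))
print(f"mean accuracy over 5 folds: {np.mean(scores):.3f}")
```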
arXiv Detail & Related papers (2022-11-12T01:53:49Z)
- Automated detection of dark patterns in cookie banners: how to do it poorly and why it is hard to do it any other way [7.2834950390171205]
A dataset of cookie banners from 300 news websites was used to train a prediction model that detects dark patterns in cookie banners.
The accuracy of the trained model is promising but leaves considerable room for improvement.
We provide an in-depth analysis of the interdisciplinary challenges that automated dark pattern detection poses to artificial intelligence.
arXiv Detail & Related papers (2022-04-21T12:10:27Z)
- How to Robustify Black-Box ML Models? A Zeroth-Order Optimization Perspective [74.47093382436823]
We address the problem of black-box defense: How to robustify a black-box model using just input queries and output feedback?
We propose a general notion of defensive operation that can be applied to black-box models, and design it through the lens of denoised smoothing (DS).
We empirically show that ZO-AE-DS can achieve improved accuracy, certified robustness, and query complexity over existing baselines.
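The zeroth-order machinery underneath can be shown in a few lines: estimate gradients from function evaluations alone, then descend. The quadratic objective and every constant below are toy assumptions; ZO-AE-DS itself couples such an estimator with an autoencoder and denoised smoothing.

```python
# Toy two-point zeroth-order gradient estimate: approximate the gradient
# of a black-box function from queries alone, then run gradient descent.
# The quadratic objective is illustrative; no autograd is used anywhere.
import numpy as np

def f(x):                                   # black box: values only
    return float(np.sum((x - 1.0) ** 2))

def zo_grad(f, x, mu=1e-3, n_queries=20, rng=np.random.default_rng(0)):
    # Gaussian smoothing: E[(f(x + mu*u) - f(x)) / mu * u] ~ grad f(x).
    g = np.zeros_like(x)
    fx = f(x)
    for _ in range(n_queries):
        u = rng.standard_normal(x.size)
        g += (f(x + mu * u) - fx) / mu * u
    return g / n_queries

x = np.zeros(5)
for _ in range(300):
    x -= 0.05 * zo_grad(f, x)
print("estimate (true optimum is all ones):", np.round(x, 2))
```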
arXiv Detail & Related papers (2022-03-27T03:23:32Z)
- Attack to Fool and Explain Deep Networks [59.97135687719244]
Countering the claim that deep visual representations are misaligned with human perception, we provide evidence of human-meaningful patterns in adversarial perturbations.
Our major contribution is a novel pragmatic adversarial attack that is subsequently transformed into a tool to interpret the visual models.
arXiv Detail & Related papers (2021-06-20T03:07:36Z)
- What Do Deep Nets Learn? Class-wise Patterns Revealed in the Input Space [88.37185513453758]
We propose a method to visualize and understand the class-wise knowledge learned by deep neural networks (DNNs) under different settings.
Our method searches for a single predictive pattern in the pixel space to represent the knowledge learned by the model for each class.
In the adversarial setting, we show that adversarially trained models tend to learn more simplified shape patterns.
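As a rough analogue of that search, the sketch below gradient-ascends an input image to maximize one class logit; the randomly initialized CNN is a stand-in for a trained model, and plain activation maximization is a simplification of the paper's method.

```python
# Toy pixel-space pattern search: optimize an input so one class's logit
# grows. The tiny random CNN stands in for a trained classifier; this is
# plain activation maximization, a simplification of the paper's method.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(                     # stand-in 10-class classifier
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(8 * 16, 10))
model.eval()

target_class = 3
pattern = torch.zeros(1, 3, 32, 32, requires_grad=True)
opt = torch.optim.Adam([pattern], lr=0.1)
for _ in range(100):
    opt.zero_grad()
    loss = -model(pattern)[0, target_class]   # maximize the target logit
    loss.backward()
    opt.step()
    with torch.no_grad():
        pattern.clamp_(0.0, 1.0)               # keep pixels in range
print("target logit:", model(pattern)[0, target_class].item())
```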
arXiv Detail & Related papers (2021-01-18T06:38:41Z)
- What Makes a Dark Pattern... Dark? Design Attributes, Normative Considerations, and Measurement Methods [13.750624267664158]
There is a rapidly growing literature on dark patterns, user interface designs that researchers deem problematic.
But the current literature lacks a conceptual foundation: What makes a user interface a dark pattern?
We show how future research on dark patterns can go beyond subjective criticism of user interface designs.
arXiv Detail & Related papers (2021-01-13T02:52:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.