What Makes a Dark Pattern... Dark? Design Attributes, Normative
Considerations, and Measurement Methods
- URL: http://arxiv.org/abs/2101.04843v1
- Date: Wed, 13 Jan 2021 02:52:12 GMT
- Title: What Makes a Dark Pattern... Dark? Design Attributes, Normative
Considerations, and Measurement Methods
- Authors: Arunesh Mathur, Jonathan Mayer, Mihir Kshirsagar
- Abstract summary: There is a rapidly growing literature on dark patterns, user interface designs that researchers deem problematic.
But the current literature lacks a conceptual foundation: What makes a user interface a dark pattern?
We show how future research on dark patterns can go beyond subjective criticism of user interface designs.
- Score: 13.750624267664158
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: There is a rapidly growing literature on dark patterns, user interface
designs -- typically related to shopping or privacy -- that researchers deem
problematic. Recent work has been predominantly descriptive, documenting and
categorizing objectionable user interfaces. These contributions have been
invaluable in highlighting specific designs for researchers and policymakers.
But the current literature lacks a conceptual foundation: What makes a user
interface a dark pattern? Why are certain designs problematic for users or
society?
We review recent work on dark patterns and demonstrate that the literature
does not reflect a singular concern or consistent definition, but rather, a set
of thematically related considerations. Drawing from scholarship in psychology,
economics, ethics, philosophy, and law, we articulate a set of normative
perspectives for analyzing dark patterns and their effects on individuals and
society. We then show how future research on dark patterns can go beyond
subjective criticism of user interface designs and apply empirical methods
grounded in normative perspectives.
Related papers
- Political Compass or Spinning Arrow? Towards More Meaningful Evaluations for Values and Opinions in Large Language Models [61.45529177682614]
We challenge the prevailing constrained evaluation paradigm for values and opinions in large language models.
We show that models give substantively different answers when not forced.
We distill these findings into recommendations and open challenges in evaluating values and opinions in LLMs.
arXiv Detail & Related papers (2024-02-26T18:00:49Z)
- Integrating Dark Pattern Taxonomies [0.0]
Malicious and exploitative design has expanded into multiple domains over the past decade.
Leaning on network analysis tools and methods, this paper synthesizes the elements of existing taxonomies as a directed graph.
In doing so, the interconnectedness of dark patterns can be more clearly revealed via community detection.
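The graph-based synthesis described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual data or method: the pattern names and edges are hypothetical examples, and weakly connected components stand in for the fuller community-detection algorithms the paper applies.

```python
# Sketch: represent dark-pattern taxonomies as a directed graph and
# find clusters of related pattern types. All node names and edges
# below are illustrative assumptions, not taken from the paper.
from collections import defaultdict

# edge (A, B): a taxonomy treats B as a subtype of A
edges = [
    ("Sneaking", "Hidden Costs"),
    ("Sneaking", "Sneak into Basket"),
    ("Misdirection", "Confirmshaming"),
    ("Misdirection", "Trick Questions"),
    ("Obstruction", "Roach Motel"),
]

# build an undirected adjacency list for the component search
adj = defaultdict(set)
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

def components(adj):
    """Return weakly connected components as sets of node names."""
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(adj[node] - comp)
        seen |= comp
        comps.append(comp)
    return comps

clusters = components(adj)
```

On this toy graph, each taxonomy branch surfaces as its own cluster; real community-detection methods (e.g. modularity-based) would additionally split or merge clusters within a connected graph.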
arXiv Detail & Related papers (2024-02-26T17:26:31Z)
- Why is the User Interface a Dark Pattern?: Explainable Auto-Detection and its Analysis [1.4474137122906163]
Dark patterns are deceptive user interface designs for online services that make users behave in unintended ways.
We study interpretable dark pattern auto-detection, that is, why a particular user interface is detected as having dark patterns.
Our findings may prevent users from being manipulated by dark patterns, and aid in the construction of more equitable internet services.
arXiv Detail & Related papers (2023-12-30T03:53:58Z)
- Are Personalized Stochastic Parrots More Dangerous? Evaluating Persona Biases in Dialogue Systems [103.416202777731]
We study "persona biases", which we define to be the sensitivity of dialogue models' harmful behaviors contingent upon the personas they adopt.
We categorize persona biases into biases in harmful expression and harmful agreement, and establish a comprehensive evaluation framework to measure persona biases in five aspects: Offensiveness, Toxic Continuation, Regard, Stereotype Agreement, and Toxic Agreement.
arXiv Detail & Related papers (2023-10-08T21:03:18Z)
- Beyond Dark Patterns: A Concept-Based Framework for Ethical Software Design [1.2535148942290433]
We present a framework grounded in positive expected behavior against which deviations can be judged.
We define a design as dark when its concepts violate users' expectations, and benefit the application provider at the user's expense.
arXiv Detail & Related papers (2023-10-03T20:58:02Z)
- Temporal Analysis of Dark Patterns: A Case Study of a User's Odyssey to Conquer Prime Membership Cancellation through the "Iliad Flow" [22.69068051865837]
We present a case study of Amazon Prime's "Iliad Flow" to illustrate the interplay of dark patterns across a user journey.
We use this case study to lay the groundwork for a methodology of Temporal Analysis of Dark Patterns (TADP).
arXiv Detail & Related papers (2023-09-18T10:12:52Z)
- Foundational Models Defining a New Era in Vision: A Survey and Outlook [151.49434496615427]
Vision systems that see and reason about the compositional nature of visual scenes are fundamental to understanding our world.
The models learned to bridge the gap between such modalities coupled with large-scale training data facilitate contextual reasoning, generalization, and prompt capabilities at test time.
The output of such models can be modified through human-provided prompts without retraining, e.g., segmenting a particular object by providing a bounding box, having interactive dialogues by asking questions about an image or video scene or manipulating the robot's behavior through language instructions.
arXiv Detail & Related papers (2023-07-25T17:59:18Z)
- Linguistic Dead-Ends and Alphabet Soup: Finding Dark Patterns in Japanese Apps [10.036312061637764]
We analyzed 200 popular mobile apps in the Japanese market.
We found that most apps had dark patterns, with an average of 3.9 per app.
We identified a new class of dark pattern, "Linguistic Dead-Ends", in the forms of "Untranslation" and "Alphabet Soup".
arXiv Detail & Related papers (2023-04-22T08:22:32Z)
- Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale [61.555788332182395]
We investigate the potential for machine learning models to amplify dangerous and complex stereotypes.
We find a broad range of ordinary prompts produce stereotypes, including prompts simply mentioning traits, descriptors, occupations, or objects.
arXiv Detail & Related papers (2022-11-07T18:31:07Z)
- COFFEE: Counterfactual Fairness for Personalized Text Generation in Explainable Recommendation [56.520470678876656]
Bias inherent in user-written text can associate different levels of linguistic quality with users' protected attributes.
We introduce a general framework to achieve measure-specific counterfactual fairness in explanation generation.
arXiv Detail & Related papers (2022-10-14T02:29:10Z)
- Attack to Fool and Explain Deep Networks [59.97135687719244]
We counter-argue by providing evidence of human-meaningful patterns in adversarial perturbations.
Our major contribution is a novel pragmatic adversarial attack that is subsequently transformed into a tool to interpret the visual models.
arXiv Detail & Related papers (2021-06-20T03:07:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.