Beyond Dark Patterns: A Concept-Based Framework for Ethical Software Design
- URL: http://arxiv.org/abs/2310.02432v2
- Date: Mon, 4 Mar 2024 04:59:43 GMT
- Title: Beyond Dark Patterns: A Concept-Based Framework for Ethical Software Design
- Authors: Evan Caragay, Katherine Xiong, Jonathan Zong, Daniel Jackson
- Abstract summary: We present a framework grounded in positive expected behavior against which deviations can be judged.
We define a design as dark when its concepts violate users' expectations, and benefit the application provider at the user's expense.
- Score: 1.2535148942290433
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Current dark pattern research tells designers what not to do, but how do they
know what to do? In contrast to prior approaches that focus on patterns to
avoid and their underlying principles, we present a framework grounded in
positive expected behavior against which deviations can be judged. To
articulate this expected behavior, we use concepts -- abstract units of
functionality that compose applications. We define a design as dark when its
concepts violate users' expectations, and benefit the application provider at
the user's expense. Though user expectations can differ, users tend to develop
common expectations as they encounter the same concepts across multiple
applications, which we can record in a concept catalog as standard concepts. We
evaluate our framework and concept catalog through three studies, illustrating
their ability to describe existing dark patterns, evaluate nuanced designs, and
document common application functionality.
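To make the framework's vocabulary concrete, here is a minimal, hypothetical Python sketch of a concept-catalog entry and the paper's two-part darkness test (expectation violation plus provider benefit at the user's expense). The field names, the ConceptUse record, and the Subscription example are illustrative assumptions, not structures from the paper.

    from dataclasses import dataclass

    @dataclass
    class Concept:
        """A standard concept: an abstract unit of functionality with the
        behavior users have come to expect across many applications."""
        name: str
        purpose: str
        expected_behavior: str  # the operational principle users anticipate

    @dataclass
    class ConceptUse:
        """How one application instantiates a standard concept."""
        concept: Concept
        violates_expectation: bool              # deviates from the catalog's expected behavior
        benefits_provider_at_user_expense: bool

    def is_dark(use: ConceptUse) -> bool:
        # Per the abstract's definition, BOTH conditions must hold.
        return use.violates_expectation and use.benefits_provider_at_user_expense

    # Hypothetical example: a Subscription concept whose cancellation flow is hidden.
    subscription = Concept(
        name="Subscription",
        purpose="grant ongoing access in exchange for recurring payment",
        expected_behavior="cancelling is as easy as signing up",
    )
    hidden_cancel = ConceptUse(subscription, violates_expectation=True,
                               benefits_provider_at_user_expense=True)
    print(is_dark(hidden_cancel))  # True
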
Related papers
- How to Continually Adapt Text-to-Image Diffusion Models for Flexible Customization? [91.49559116493414]
We propose a novel Concept-Incremental text-to-image Diffusion Model (CIDM).
It can resolve catastrophic forgetting and concept neglect to learn new customization tasks in a concept-incremental manner.
Experiments validate that our CIDM surpasses existing custom diffusion models.
arXiv Detail & Related papers (2024-10-23T06:47:29Z)
- Is What You Ask For What You Get? Investigating Concept Associations in Text-to-Image Models [24.851041038347784]
This characterization allows us to use our framework to audit models and prompt datasets.
We implement Concept2Concept as an open-source interactive visualization tool facilitating use by non-technical end-users.
arXiv Detail & Related papers (2024-10-06T21:42:53Z)
- ConceptMix: A Compositional Image Generation Benchmark with Controllable Difficulty [52.15933752463479]
ConceptMix is a scalable, controllable, and customizable benchmark.
It automatically evaluates compositional generation ability of Text-to-Image (T2I) models.
It reveals that the performance of several models, especially open models, drops dramatically as the difficulty parameter k (the number of concepts combined in a prompt) increases.
arXiv Detail & Related papers (2024-08-26T15:08:12Z)
- Assessing the Variety of a Concept Space Using an Unbiased Estimate of Rao's Quadratic Index [0.0]
'Variety' is one of the parameters by which one can quantify the breadth of the concept space explored by designers.
This article elaborates on and critically examines the existing variety metrics from the engineering design literature.
A new distance-based variety metric is proposed, along with a prescriptive framework to support the assessment process.
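The summary doesn't spell out the estimator, but the classical Rao quadratic index it builds on is Q = sum_i sum_j p_i p_j d(i, j), with weights p_i over ideas and pairwise distances d. A minimal Python sketch under that assumption (the pair-averaged de-biasing below is a common construction, not necessarily the paper's exact formula):

    import numpy as np

    def rao_quadratic_index(points, weights=None):
        """Plug-in Rao quadratic index over concept embeddings (one per row)."""
        n = len(points)
        if weights is None:
            weights = np.full(n, 1.0 / n)  # uniform weights p_i = 1/n
        # Pairwise Euclidean distances d(i, j).
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        return float(weights @ d @ weights)

    def rao_pair_averaged(points):
        """Averages over distinct pairs only (n*(n-1) instead of n**2),
        dropping the zero self-distance terms -- one common way to
        de-bias the plug-in estimate."""
        n = len(points)
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        return float(d.sum() / (n * (n - 1)))

    # Example: variety of three design concepts in a 2-D feature space.
    concepts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
    print(rao_quadratic_index(concepts))  # ~0.72
    print(rao_pair_averaged(concepts))    # ~1.08
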
arXiv Detail & Related papers (2024-08-01T16:25:54Z)
- MyVLM: Personalizing VLMs for User-Specific Queries [78.33252556805931]
We take a first step toward the personalization of vision-language models, enabling them to learn and reason over user-provided concepts.
To effectively recognize a variety of user-specific concepts, we augment the VLM with external concept heads that function as toggles for the model.
Having recognized the concept, we learn a new concept embedding in the intermediate feature space of the VLM.
This embedding is tasked with guiding the language model to naturally integrate the target concept in its generated response.
arXiv Detail & Related papers (2024-03-21T17:51:01Z)
- Simple Mechanisms for Representing, Indexing and Manipulating Concepts [46.715152257557804]
We will argue that learning a concept could be done by looking at its moment statistics matrix to generate a concrete representation or signature of that concept.
When the concepts are 'intersected', signatures of the concepts can be used to find a common theme across a number of related 'intersected' concepts.
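The summary doesn't give the construction, but one plausible reading is that a concept's 'signature' is the second-moment matrix of embeddings of its examples. A toy Python sketch under that assumption; the eigenvector-based 'common theme' below is purely illustrative, not the paper's method:

    import numpy as np

    def concept_signature(examples):
        """Second-moment statistics matrix of a concept's example
        embeddings (one row per example) -- our guess at the 'signature'."""
        return examples.T @ examples / len(examples)

    def common_theme(sig_a, sig_b):
        """Illustrative 'intersection': the dominant shared direction of
        two signatures, taken as the leading eigenvector of their average."""
        _, vecs = np.linalg.eigh((sig_a + sig_b) / 2)
        return vecs[:, -1]

    rng = np.random.default_rng(0)
    # Two related concepts: embeddings that share one dominant direction.
    cats = rng.normal(size=(100, 8)) + np.array([2, 0, 0, 0, 0, 0, 0, 0])
    pets = rng.normal(size=(100, 8)) + np.array([2, 1, 0, 0, 0, 0, 0, 0])
    print(common_theme(concept_signature(cats), concept_signature(pets)).round(2))
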
arXiv Detail & Related papers (2023-10-18T17:54:29Z)
- ConceptBed: Evaluating Concept Learning Abilities of Text-to-Image Diffusion Models [79.10890337599166]
We introduce ConceptBed, a large-scale dataset that consists of 284 unique visual concepts and 33K composite text prompts.
We evaluate visual concepts that are either objects, attributes, or styles, and also evaluate four dimensions of compositionality: counting, attributes, relations, and actions.
Our results point to a trade-off between learning the concepts and preserving the compositionality which existing approaches struggle to overcome.
arXiv Detail & Related papers (2023-06-07T18:00:38Z)
- Agile Modeling: From Concept to Classifier in Minutes [35.03003329814567]
We introduce the problem of Agile Modeling: the process of turning any subjective visual concept into a computer vision model.
We show through a user study that users can create classifiers with minimal effort in under 30 minutes.
We compare this user-driven process with the traditional crowdsourcing paradigm and find that the crowd's notion of a concept often differs from the user's.
arXiv Detail & Related papers (2023-02-25T01:18:09Z)
- Design Space Exploration and Explanation via Conditional Variational Autoencoders in Meta-model-based Conceptual Design of Pedestrian Bridges [52.77024349608834]
This paper provides a performance-driven design exploration framework to augment the human designer through a Conditional Variational Autoencoder (CVAE).
The CVAE is trained on 18,000 synthetically generated instances of a pedestrian bridge in Switzerland.
arXiv Detail & Related papers (2022-11-29T17:28:31Z)
- Separating Skills and Concepts for Novel Visual Question Answering [66.46070380927372]
Generalization to out-of-distribution data has been a problem for Visual Question Answering (VQA) models.
"Skills" are visual tasks, such as counting or attribute recognition, and are applied to "concepts" mentioned in the question.
We present a novel method for learning to compose skills and concepts that separates these two factors implicitly within a model.
arXiv Detail & Related papers (2021-07-19T18:55:10Z)
- What Makes a Dark Pattern... Dark? Design Attributes, Normative Considerations, and Measurement Methods [13.750624267664158]
There is a rapidly growing literature on dark patterns, user interface designs that researchers deem problematic.
But the current literature lacks a conceptual foundation: What makes a user interface a dark pattern?
We show how future research on dark patterns can go beyond subjective criticism of user interface designs.
arXiv Detail & Related papers (2021-01-13T02:52:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.