Linguistic Dead-Ends and Alphabet Soup: Finding Dark Patterns in
Japanese Apps
- URL: http://arxiv.org/abs/2304.12811v1
- Date: Sat, 22 Apr 2023 08:22:32 GMT
- Authors: Shun Hidaka, Sota Kobuki, Mizuki Watanabe, Katie Seaborn
- Abstract summary: We analyzed 200 popular mobile apps in the Japanese market.
We found that most apps had dark patterns, with an average of 3.9 per app.
We identified a new class of dark pattern, "Linguistic Dead-Ends," in the forms of "Untranslation" and "Alphabet Soup."
- Score: 10.036312061637764
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dark patterns are deceptive and malicious properties of user interfaces that
lead end-users to act differently from what they intended or expected. While
now a key topic in critical computing, most work has been conducted in Western
contexts. Japan, with its booming app market, is a relatively uncharted context
that offers culturally- and linguistically-sensitive differences in design
standards, contexts of use, values, and language, all of which could influence
the presence and expression of dark patterns. In this work, we analyzed 200
popular mobile apps in the Japanese market. We found that most apps had dark
patterns, with an average of 3.9 per app. We also identified a new class of
dark pattern: "Linguistic Dead-Ends" in the forms of "Untranslation" and
"Alphabet Soup." We outline the implications for design and research practice,
especially for future cross-cultural research on dark patterns.
Related papers
- See It from My Perspective: Diagnosing the Western Cultural Bias of Large Vision-Language Models in Image Understanding [78.88461026069862]
Vision-language models (VLMs) can respond to queries about images in many languages.
We present a novel investigation that demonstrates and localizes Western bias in image understanding.
arXiv Detail & Related papers (2024-06-17T15:49:51Z)
- Deceptive, Disruptive, No Big Deal: Japanese People React to Simulated Dark Commercial Patterns [20.0118117663204]
We report on the first user study involving Japanese people experiencing a mock shopping website injected with simulated DPs.
We found that Alphabet Soup and Misleading Reference Pricing were the most deceptive and least noticeable.
We urge for more human participant research and ideally collaborations with industry to assess real designs in the wild.
arXiv Detail & Related papers (2024-05-14T00:35:13Z)
- SADAS: A Dialogue Assistant System Towards Remediating Norm Violations in Bilingual Socio-Cultural Conversations [56.31816995795216]
Socially-Aware Dialogue Assistant System (SADAS) is designed to ensure that conversations unfold with respect and understanding.
Our system's novel architecture includes: (1) identifying the categories of norms present in the dialogue, (2) detecting potential norm violations, (3) evaluating the severity of these violations, and (4) implementing targeted remedies to rectify the breaches.
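The four-stage architecture described above can be sketched as a simple pipeline. This is a minimal, hypothetical illustration: the function names, toy detection rules, and 0-2 severity scale are placeholder assumptions for exposition, not the authors' SADAS implementation.

```python
# Hypothetical sketch of a SADAS-style four-stage pipeline.
# All rules below are illustrative placeholders, not the paper's models.

def identify_norm_categories(utterance):
    # Stage 1: tag which socio-cultural norm categories the utterance touches.
    text = utterance.lower()
    categories = []
    if any(w in text for w in ("please", "thanks", "sorry")):
        categories.append("politeness")
    if "you must" in text:
        categories.append("imposition")
    return categories

def detect_violation(categories):
    # Stage 2: flag a potential norm violation (toy rule: bare imposition).
    return "imposition" in categories

def evaluate_severity(utterance):
    # Stage 3: score severity on a toy 0-2 scale (exclamation = more severe).
    return 2 if "!" in utterance else 1

def remediate(utterance):
    # Stage 4: propose a targeted rewrite that softens the breach.
    text = utterance.lower().rstrip("!. ").replace("you must ", "")
    return f"Could you please {text}?"

def sadas_pipeline(utterance):
    # Chain the four stages; return a remedy only when a violation is found.
    categories = identify_norm_categories(utterance)
    if detect_violation(categories):
        return {
            "categories": categories,
            "severity": evaluate_severity(utterance),
            "suggestion": remediate(utterance),
        }
    return {"categories": categories, "severity": 0, "suggestion": None}
```

In a real system each stage would be a learned model (classifier, detector, scorer, and generator) rather than keyword rules, but the data flow between stages is the same.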
arXiv Detail & Related papers (2024-01-29T08:54:21Z)
- Why is the User Interface a Dark Pattern?: Explainable Auto-Detection and its Analysis [1.4474137122906163]
Dark patterns are deceptive user interface designs for online services that make users behave in unintended ways.
We study interpretable dark pattern auto-detection, that is, why a particular user interface is detected as having dark patterns.
Our findings may prevent users from being manipulated by dark patterns, and aid in the construction of more equitable internet services.
arXiv Detail & Related papers (2023-12-30T03:53:58Z)
- DarkBERT: A Language Model for the Dark Side of the Internet [26.28825428391132]
We introduce DarkBERT, a language model pretrained on Dark Web data.
We describe the steps taken to filter and compile the text data used to train DarkBERT to combat the extreme lexical and structural diversity of the Dark Web.
Our evaluations show that DarkBERT outperforms current language models and may serve as a valuable resource for future research on the Dark Web.
arXiv Detail & Related papers (2023-05-15T12:23:10Z)
- ABINet++: Autonomous, Bidirectional and Iterative Language Modeling for Scene Text Spotting [121.11880210592497]
We argue that the limited capacity of language models comes from 1) implicit language modeling; 2) unidirectional feature representation; and 3) language model with noise input.
We propose an autonomous, bidirectional and iterative ABINet++ for scene text spotting.
arXiv Detail & Related papers (2022-11-19T03:50:33Z)
- A New Generation of Perspective API: Efficient Multilingual Character-level Transformers [66.9176610388952]
We present the fundamentals behind the next version of the Perspective API from Google Jigsaw.
At the heart of the approach is a single multilingual token-free Charformer model.
We demonstrate that by forgoing static vocabularies, we gain flexibility across a variety of settings.
arXiv Detail & Related papers (2022-02-22T20:55:31Z)
- Exploiting BERT For Multimodal Target Sentiment Classification Through Input Space Translation [75.82110684355979]
We introduce a two-stream model that translates images in input space using an object-aware transformer.
We then leverage the translation to construct an auxiliary sentence that provides multimodal information to a language model.
We achieve state-of-the-art performance on two multimodal Twitter datasets.
arXiv Detail & Related papers (2021-08-03T18:02:38Z)
- What Makes a Dark Pattern... Dark? Design Attributes, Normative Considerations, and Measurement Methods [13.750624267664158]
There is a rapidly growing literature on dark patterns, user interface designs that researchers deem problematic.
But the current literature lacks a conceptual foundation: What makes a user interface a dark pattern?
We show how future research on dark patterns can go beyond subjective criticism of user interface designs.
arXiv Detail & Related papers (2021-01-13T02:52:12Z)
- Comparison of Interactive Knowledge Base Spelling Correction Models for Low-Resource Languages [81.90356787324481]
Spelling normalization for low-resource languages is a challenging task because the patterns are hard to predict.
This work compares a neural model with character language models trained on varying amounts of target-language data.
Our usage scenario is interactive correction with nearly zero amounts of training examples, improving models as more data is collected.
arXiv Detail & Related papers (2020-10-20T17:31:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.