Justice in interaction design: preventing manipulation in interfaces
- URL: http://arxiv.org/abs/2204.06821v1
- Date: Thu, 14 Apr 2022 08:45:06 GMT
- Title: Justice in interaction design: preventing manipulation in interfaces
- Authors: Lorena Sanchez Chamorro, Kerstin Bongard-Blanchy and Vincent Koenig
- Abstract summary: Designers incorporate values into the design process that can raise risks for vulnerable groups.
Persuasion in user interfaces can quickly turn into manipulation and become potentially harmful for those groups in the realm of intellectual disabilities, class, or health.
Here we explain how the Capability Sensitive Design Approach can be used proactively to inform designers' decisions when evaluating justice in their designs, preventing the risk of manipulation.
- Score: 0.5524804393257919
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Designers incorporate values in the design process that raise risks for
vulnerable groups. Persuasion in user interfaces can quickly turn into
manipulation and become potentially harmful for those groups in the realm of
intellectual disabilities, class, or health, requiring proactive responsibility
approaches in design. Here we introduce the Capability Sensitive Design
Approach and explain how it can be used proactively to inform designers'
decisions when evaluating justice in their designs, preventing the
risk of manipulation.
Related papers
- Neural Transparency: Mechanistic Interpretability Interfaces for Anticipating Model Behaviors for Personalized AI [9.383958408772694]
We introduce an interface that enables neural transparency by exposing language model internals during chatbot design. Our approach extracts behavioral trait vectors by computing differences in neural activations between contrastive system prompts that elicit opposing behaviors. This work offers a path for how interpretability can be operationalized for non-technical users, establishing a foundation for safer, more aligned human-AI interactions.
arXiv Detail & Related papers (2025-10-31T20:03:52Z) - Worker Discretion Advised: Co-designing Risk Disclosure in Crowdsourced Responsible AI (RAI) Content Work [12.492380198885295]
Responsible AI (RAI) content work often exposes crowd workers to potentially harmful content. We conduct co-design sessions with 29 task designers, workers, and platform representatives. We identify design tensions and map the sociotechnical tradeoffs that shape disclosure practices.
arXiv Detail & Related papers (2025-09-15T17:05:34Z) - Interactive Reasoning: Visualizing and Controlling Chain-of-Thought Reasoning in Large Language Models [54.85405423240165]
We introduce Interactive Reasoning, an interaction design that visualizes chain-of-thought outputs as a hierarchy of topics. We implement interactive reasoning in Hippo, a prototype for AI-assisted decision making in the face of uncertain trade-offs.
arXiv Detail & Related papers (2025-06-30T10:00:43Z) - Positioning AI Tools to Support Online Harm Reduction Practice: Applications and Design Directions [9.153768162198075]
Large Language Models (LLMs) present a novel opportunity to enhance information provision. This paper investigates how LLMs can be responsibly designed to support the information needs of People Who Use Drugs (PWUD).
arXiv Detail & Related papers (2025-06-28T16:15:47Z) - Co-CoT: A Prompt-Based Framework for Collaborative Chain-of-Thought Reasoning [0.0]
We propose an Interactive Chain-of-Thought (CoT) Framework that enhances human-centered explainability and responsible AI usage.
The framework decomposes reasoning into clearly defined blocks that users can inspect, modify, and re-execute.
Ethical transparency is ensured through explicit metadata disclosure, built-in bias checkpoint functionality, and privacy-preserving safeguards.
arXiv Detail & Related papers (2025-04-23T20:48:09Z) - Interactive Agents to Overcome Ambiguity in Software Engineering [61.40183840499932]
AI agents are increasingly being deployed to automate tasks, often based on ambiguous and underspecified user instructions.
Making unwarranted assumptions and failing to ask clarifying questions can lead to suboptimal outcomes.
We study the ability of LLM agents to handle ambiguous instructions in interactive code generation settings by evaluating the performance of proprietary and open-weight models.
arXiv Detail & Related papers (2025-02-18T17:12:26Z) - Beyond SHAP and Anchors: A large-scale experiment on how developers struggle to design meaningful end-user explanations [11.20554074076788]
Modern machine learning produces models that are impossible for users or developers to fully understand. Transparency and explainability methods aim to provide some help in understanding models. Emerging guidelines and regulations set goals but may not provide effective actionable guidance to developers.
arXiv Detail & Related papers (2025-01-28T23:54:00Z) - Improving Intervention Efficacy via Concept Realignment in Concept Bottleneck Models [57.86303579812877]
Concept Bottleneck Models (CBMs) ground image classification on human-understandable concepts to allow for interpretable model decisions.
Existing approaches often require numerous human interventions per image to achieve strong performances.
We introduce a trainable concept realignment intervention module, which leverages concept relations to realign concept assignments post-intervention.
arXiv Detail & Related papers (2024-05-02T17:59:01Z) - Concept Arithmetics for Circumventing Concept Inhibition in Diffusion Models [58.065255696601604]
We use the compositional property of diffusion models, which allows leveraging multiple prompts in a single image generation.
We argue that it is essential to consider all possible approaches to image generation with diffusion models that can be employed by an adversary.
arXiv Detail & Related papers (2024-04-21T16:35:16Z) - Tuning-Free Accountable Intervention for LLM Deployment -- A Metacognitive Approach [55.613461060997004]
Large Language Models (LLMs) have catalyzed transformative advances across a spectrum of natural language processing tasks.
We propose an innovative metacognitive approach, dubbed CLEAR, to equip LLMs with capabilities for self-aware error identification and correction.
arXiv Detail & Related papers (2024-03-08T19:18:53Z) - Proxy Design: A Method for Involving Proxy Users to Speak on Behalf of Vulnerable or Unreachable Users in Co-Design [0.23967405016776386]
Proxy design is outlined as a method for involving a user group as proxy users to speak on behalf of a group that is difficult to reach.
We present a design ethnography spanning three years at a cancer rehabilitation clinic, where digital artifacts were designed to be used collaboratively by nurses and patients.
arXiv Detail & Related papers (2023-10-27T16:24:54Z) - Human-centered trust framework: An HCI perspective [1.6344851071810074]
The rationale of this work is based on the current user trust discourse of Artificial Intelligence (AI).
We propose a framework to guide non-experts to unlock the full potential of user trust in AI design.
arXiv Detail & Related papers (2023-05-05T06:15:32Z) - Rules Of Engagement: Levelling Up To Combat Unethical CUI Design [23.01296770233131]
We propose a simplified methodology to assess interfaces based on five dimensions taken from prior research on so-called dark patterns.
Our approach offers a numeric score to its users representing the manipulative nature of evaluated interfaces.
arXiv Detail & Related papers (2022-07-19T14:02:24Z) - Risk-Driven Design of Perception Systems [47.787943101699966]
It is important that we design perception systems to minimize errors that reduce the overall safety of the system.
We develop a risk-driven approach to designing perception systems that accounts for the effect of perceptual errors on the performance of the fully-integrated, closed-loop system.
We evaluate our techniques on a realistic vision-based aircraft detect and avoid application and show that risk-driven design reduces collision risk by 37% over a baseline system.
arXiv Detail & Related papers (2022-05-21T21:14:56Z) - Investigating Positive and Negative Qualities of Human-in-the-Loop Optimization for Designing Interaction Techniques [55.492211642128446]
Designers reportedly struggle with design optimization tasks where they are asked to find a combination of design parameters that maximizes a given set of objectives.
Model-based computational design algorithms assist designers by generating design examples during design.
Black box methods for assistance, on the other hand, can work with any design problem.
arXiv Detail & Related papers (2022-04-15T20:40:43Z) - Towards a Responsible AI Development Lifecycle: Lessons From Information Security [0.0]
We propose a framework for responsibly developing artificial intelligence systems.
In particular, we propose leveraging the concepts of threat modeling, design review, penetration testing, and incident response.
arXiv Detail & Related papers (2022-03-06T13:03:58Z) - Mitigating Negative Side Effects via Environment Shaping [27.400267388362654]
Agents operating in unstructured environments often produce negative side effects (NSE).
We present an algorithm to solve this problem and analyze its theoretical properties.
Empirical evaluation of our approach shows that the proposed framework can successfully mitigate NSE, without affecting the agent's ability to complete its assigned task.
arXiv Detail & Related papers (2021-02-13T22:15:00Z) - Overcoming Failures of Imagination in AI Infused System Development and Deployment [71.9309995623067]
NeurIPS 2020 requested that research paper submissions include impact statements on "potential nefarious uses and the consequences of failure".
We argue that frameworks of harms must be context-aware and consider a wider range of potential stakeholders, system affordances, as well as viable proxies for assessing harms in the widest sense.
arXiv Detail & Related papers (2020-11-26T18:09:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.