FeedQUAC: Quick Unobtrusive AI-Generated Commentary
- URL: http://arxiv.org/abs/2504.16416v1
- Date: Wed, 23 Apr 2025 04:48:00 GMT
- Title: FeedQUAC: Quick Unobtrusive AI-Generated Commentary
- Authors: Tao Long, Kendra Wannamaker, Jo Vermeulen, George Fitzmaurice, Justin Matejka
- Abstract summary: We introduce FeedQUAC, a design companion that delivers real-time AI-generated commentary from a variety of perspectives. We discuss the role of AI feedback, its strengths and limitations, and how to integrate it into existing design workflows. Our findings suggest that ambient interaction is a valuable consideration for both the design and evaluation of future creativity support systems.
- Score: 8.057486493973304
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Design thrives on feedback. However, gathering constant feedback throughout the design process can be labor-intensive and disruptive. We explore how AI can bridge this gap by providing effortless, ambient feedback. We introduce FeedQUAC, a design companion that delivers real-time AI-generated commentary from a variety of perspectives through different personas. A design probe study with eight participants highlights how designers can leverage quick yet ambient AI feedback to enhance their creative workflows. Participants highlight benefits such as convenience, playfulness, confidence boost, and inspiration from this lightweight feedback agent, while suggesting additional features, like chat interaction and context curation. We discuss the role of AI feedback, its strengths and limitations, and how to integrate it into existing design workflows while balancing user involvement. Our findings also suggest that ambient interaction is a valuable consideration for both the design and evaluation of future creativity support systems.
Related papers
- AI Automatons: AI Systems Intended to Imitate Humans [54.19152688545896]
There is a growing proliferation of AI systems designed to mimic people's behavior, work, abilities, likenesses, or humanness. The research, design, deployment, and availability of such AI systems have prompted growing concerns about a wide range of possible legal, ethical, and other social impacts.
arXiv Detail & Related papers (2025-03-04T03:55:38Z) - Empowering Clients: Transformation of Design Processes Due to Generative AI [1.4003044924094596]
The study reveals that AI can disrupt the ideation phase by enabling clients to engage in the design process through rapid visualization of their own ideas.
Our study shows that while AI can provide valuable feedback on designs, it may fail to generate such designs itself, suggesting interesting connections to foundational questions in computer science.
Our study also reveals that there is uncertainty among architects about the interpretative sovereignty of architecture and loss of meaning and identity when AI increasingly takes over authorship in the design process.
arXiv Detail & Related papers (2024-11-22T16:48:15Z) - Sketch2Code: Evaluating Vision-Language Models for Interactive Web Design Prototyping [55.98643055756135]
We introduce Sketch2Code, a benchmark that evaluates state-of-the-art Vision Language Models (VLMs) on automating the conversion of rudimentary sketches into webpage prototypes.
We analyze ten commercial and open-source models, showing that Sketch2Code is challenging for existing VLMs.
A user study with UI/UX experts reveals a significant preference for proactive question-asking over passive feedback reception.
arXiv Detail & Related papers (2024-10-21T17:39:49Z) - Constraining Participation: Affordances of Feedback Features in Interfaces to Large Language Models [49.74265453289855]
Large language models (LLMs) are now accessible to anyone with a computer, a web browser, and an internet connection via browser-based interfaces.
This paper examines the affordances of interactive feedback features in ChatGPT's interface, analysing how they shape user input and participation in iteration.
arXiv Detail & Related papers (2024-08-27T13:50:37Z) - The Future of Open Human Feedback [65.2188596695235]
We bring together interdisciplinary experts to assess the opportunities and challenges to realizing an open ecosystem of human feedback for AI.
We first look for successful practices in peer production, open source, and citizen science communities.
We end by envisioning the components needed to underpin a sustainable and open human feedback ecosystem.
arXiv Detail & Related papers (2024-08-15T17:59:14Z) - Rethinking the Evaluation of Dialogue Systems: Effects of User Feedback on Crowdworkers and LLMs [57.16442740983528]
In ad-hoc retrieval, evaluation relies heavily on user actions, including implicit feedback.
The role of user feedback in annotators' assessment of turns in a conversation has been little studied.
We focus on how the evaluation of task-oriented dialogue systems (TDSs) is affected by considering user feedback, explicit or implicit, as provided through the follow-up utterance of the turn being evaluated.
arXiv Detail & Related papers (2024-04-19T16:45:50Z) - Human-in-the-loop Fairness: Integrating Stakeholder Feedback to Incorporate Fairness Perspectives in Responsible AI [4.0247545547103325]
Fairness is a growing concern for high-risk decision-making using Artificial Intelligence (AI).
There is no universally accepted fairness measure, fairness is context-dependent, and there might be conflicting perspectives on what is considered fair.
Our work follows an approach where stakeholders can give feedback on specific decision instances and their outcomes with respect to their fairness.
arXiv Detail & Related papers (2023-12-13T11:17:29Z) - The role of interface design on prompt-mediated creativity in Generative AI [0.0]
We analyze more than 145,000 prompts from two Generative AI platforms.
We find that users exhibit a tendency towards exploration of new topics over exploitation of concepts visited previously.
arXiv Detail & Related papers (2023-11-30T22:33:34Z) - Writer-Defined AI Personas for On-Demand Feedback Generation [32.19315306717165]
We propose a concept that generates on-demand feedback, based on writer-defined AI personas of any target audience.
This work contributes to the vision of supporting writers with AI by expanding the socio-technical perspective in AI tool design.
arXiv Detail & Related papers (2023-09-19T08:49:35Z) - Continually Improving Extractive QA via Human Feedback [59.49549491725224]
We study continually improving an extractive question answering (QA) system via human user feedback.
We conduct experiments involving thousands of user interactions under diverse setups to broaden the understanding of learning from feedback over time.
arXiv Detail & Related papers (2023-05-21T14:35:32Z) - VISHIEN-MAAT: Scrollytelling visualization design for explaining Siamese Neural Network concept to non-technical users [8.939421900877742]
This work proposes a novel visualization design for creating a scrollytelling that can effectively explain an AI concept to non-technical users.
Our approach helps create a visualization valuable for a short-timeline situation like a sales pitch.
arXiv Detail & Related papers (2023-04-04T08:26:54Z) - DeFINE: Delayed Feedback based Immersive Navigation Environment for Studying Goal-Directed Human Navigation [10.7197371210731]
Delayed Feedback based Immersive Navigation Environment (DeFINE) is a framework that allows for easy creation and administration of navigation tasks.
DeFINE has a built-in capability to provide performance feedback to participants during an experiment.
arXiv Detail & Related papers (2020-03-06T11:00:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.