fCrit: A Visual Explanation System for Furniture Design Creative Support
- URL: http://arxiv.org/abs/2508.12416v1
- Date: Sun, 17 Aug 2025 16:03:44 GMT
- Title: fCrit: A Visual Explanation System for Furniture Design Creative Support
- Authors: Vuong Nguyen, Gabriel Vigliensoni
- Abstract summary: fCrit is a dialogue-based AI system designed to critique furniture design with a focus on explainability. We argue that explainability in the arts should not only make AI reasoning transparent but also adapt to the ways users think and talk about their designs.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce fCrit, a dialogue-based AI system designed to critique furniture design with a focus on explainability. Grounded in reflective learning and formal analysis, fCrit employs a multi-agent architecture informed by a structured design knowledge base. We argue that explainability in the arts should not only make AI reasoning transparent but also adapt to the ways users think and talk about their designs. We demonstrate how fCrit supports this process by tailoring explanations to users' design language and cognitive framing. This work contributes to Human-Centered Explainable AI (HCXAI) in creative practice, advancing domain-specific methods for situated, dialogic, and visually grounded AI support.
Related papers
- Draw2Learn: A Human-AI Collaborative Tool for Drawing-Based Science Learning [0.0]
Drawing supports learning by externalizing mental models, but providing timely feedback at scale remains challenging. We present Draw2Learn, a system that explores how AI can act as a supportive teammate during drawing-based learning.
arXiv Detail & Related papers (2026-02-02T00:06:08Z) - Designing Gaze Analytics for ELA Instruction: A User-Centered Dashboard with Conversational AI Support [3.741199946315248]
Eye-tracking offers rich insights into student cognition and engagement. However, it remains underutilized in classroom-facing educational technology. We present the iterative design and evaluation of a gaze-based learning analytics dashboard for English Language Arts.
arXiv Detail & Related papers (2025-09-03T22:01:14Z) - Rhetorical XAI: Explaining AI's Benefits as well as its Use via Rhetorical Design [3.386401892906348]
This paper explores potential benefits of incorporating Rhetorical Design into the design of Explainable Artificial Intelligence (XAI) systems. Rhetorical Design offers a useful framework to analyze the communicative role of explanations between AI systems and users.
arXiv Detail & Related papers (2025-05-14T23:57:17Z) - From Fragment to One Piece: A Survey on AI-Driven Graphic Design [19.042522345775193]
The survey covers various subtasks, including visual element perception and generation, aesthetic and semantic understanding, and layout analysis and generation. Despite significant progress, challenges remain in understanding human intent, ensuring interpretability, and maintaining control over multilayered compositions.
arXiv Detail & Related papers (2025-03-24T13:05:09Z) - Diffusion-Based Visual Art Creation: A Survey and New Perspectives [51.522935314070416]
This survey explores the emerging realm of diffusion-based visual art creation, examining its development from both artistic and technical perspectives.
Our findings reveal how artistic requirements are transformed into technical challenges and highlight the design and application of diffusion-based methods within visual art creation.
We aim to shed light on the mechanisms through which AI systems emulate and possibly, enhance human capacities in artistic perception and creativity.
arXiv Detail & Related papers (2024-08-22T04:49:50Z) - VISHIEN-MAAT: Scrollytelling visualization design for explaining Siamese Neural Network concept to non-technical users [8.939421900877742]
This work proposes a novel visualization design for creating a scrollytelling that can effectively explain an AI concept to non-technical users.
Our approach helps create a visualization valuable for a short-timeline situation like a sales pitch.
arXiv Detail & Related papers (2023-04-04T08:26:54Z) - Designerly Understanding: Information Needs for Model Transparency to Support Design Ideation for AI-Powered User Experience [42.73738624139124]
Designers face hurdles understanding AI technologies, such as pre-trained language models, as design materials.
This limits their ability to ideate and make decisions about whether, where, and how to use AI.
Our study highlights the pivotal role that UX designers can play in Responsible AI.
arXiv Detail & Related papers (2023-02-21T02:06:24Z) - Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z) - Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted toward a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z) - Towards Large-Scale Interpretable Knowledge Graph Reasoning for Dialogue Systems [109.16553492049441]
We propose a novel method to incorporate the knowledge reasoning capability into dialogue systems in a more scalable and generalizable manner.
To the best of our knowledge, this is the first work to have transformer models generate responses by reasoning over differentiable knowledge graphs.
arXiv Detail & Related papers (2022-03-20T17:51:49Z) - Rethinking Explainability as a Dialogue: A Practitioner's Perspective [57.87089539718344]
We ask doctors, healthcare professionals, and policymakers about their needs and desires for explanations.
Our study indicates that decision-makers would strongly prefer interactive explanations in the form of natural language dialogues.
Considering these needs, we outline a set of five principles researchers should follow when designing interactive explanations.
arXiv Detail & Related papers (2022-02-03T22:17:21Z) - Explainability Case Studies [2.2872132127037963]
Explainability is one of the key ethical concepts in the design of AI systems.
We present a set of case studies of a hypothetical AI-enabled product, which serves as a pedagogical tool to empower product designers, developers, students, and educators to develop a holistic explainability strategy for their own products.
arXiv Detail & Related papers (2020-09-01T05:54:15Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.