Evaluating Node-tree Interfaces for AI Explainability
- URL: http://arxiv.org/abs/2510.06457v1
- Date: Tue, 07 Oct 2025 20:48:08 GMT
- Title: Evaluating Node-tree Interfaces for AI Explainability
- Authors: Lifei Wang, Natalie Friedman, Chengchao Zhu, Zeshu Zhu, S. Joy Mountford
- Abstract summary: This study evaluates user experiences with two distinct AI interfaces - node-tree interfaces and chatbots. Our node-tree interface visually structures AI-generated responses into hierarchically organized, interactive nodes. Our findings suggest that AI interfaces capable of switching between structured visualizations and conversational formats can significantly enhance transparency and user confidence in AI-powered systems.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: As large language models (LLMs) become ubiquitous in workplace tools and decision-making processes, ensuring explainability and fostering user trust are critical. Although advancements in LLM engineering continue, human-centered design is still catching up, particularly when it comes to embedding transparency and trust into AI interfaces. This study evaluates user experiences with two distinct AI interfaces - node-tree interfaces and chatbot interfaces - to assess their performance in exploratory, follow-up inquiry, decision-making, and problem-solving tasks. Our design-driven approach introduces a node-tree interface that visually structures AI-generated responses into hierarchically organized, interactive nodes, allowing users to navigate, refine, and follow up on complex information. In a comparative study with n=20 business users, we observed that while the chatbot interface effectively supports linear, step-by-step queries, it is the node-tree interface that enhances brainstorming. Quantitative and qualitative findings indicate that node-tree interfaces not only improve task performance and decision-making support but also promote higher levels of user trust by preserving context. Our findings suggest that adaptive AI interfaces capable of switching between structured visualizations and conversational formats based on task requirements can significantly enhance transparency and user confidence in AI-powered systems. This work contributes actionable insights to the fields of human-robot interaction and AI design, particularly for enterprise applications where trust-building is critical for teams.
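The hierarchically organized, interactive nodes described in the abstract can be illustrated with a minimal data structure. The class and method names below are assumptions for illustration only, not the authors' implementation:

```python
from dataclasses import dataclass, field

@dataclass
class ResponseNode:
    """One AI-generated answer fragment in a node-tree interface (illustrative)."""
    title: str
    content: str
    children: list["ResponseNode"] = field(default_factory=list)

    def follow_up(self, title: str, content: str) -> "ResponseNode":
        """Attach a follow-up answer as a child node, preserving context."""
        child = ResponseNode(title, content)
        self.children.append(child)
        return child

    def outline(self, depth: int = 0) -> str:
        """Render the tree as an indented outline for display."""
        lines = ["  " * depth + f"- {self.title}"]
        for c in self.children:
            lines.append(c.outline(depth + 1))
        return "\n".join(lines)

# Hypothetical brainstorming session: follow-ups branch off earlier answers
root = ResponseNode("Market entry strategy", "Initial AI response...")
pricing = root.follow_up("Pricing options", "Refined answer on pricing...")
pricing.follow_up("Freemium trade-offs", "Further detail...")
print(root.outline())
# - Market entry strategy
#   - Pricing options
#     - Freemium trade-offs
```

Because each follow-up is attached to the node that prompted it, the full line of inquiry stays visible, which is the context-preservation property the study links to higher user trust.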
Related papers
- Steering LLMs via Scalable Interactive Oversight [74.12746881843044]
As Large Language Models increasingly automate complex, long-horizon tasks such as "vibe coding", a supervision gap has emerged. This presents a critical challenge in scalable oversight: enabling humans to responsibly steer AI systems on tasks that surpass their own ability to specify or verify.
arXiv Detail & Related papers (2026-02-04T04:52:00Z) - Bridging Gulfs in UI Generation through Semantic Guidance [16.245249868262178]
We develop a system that enables users to specify semantics, visualize relationships, and see how semantics are reflected in generated UIs. A comparative user study suggests that our approach enhances users' perceived control over intent expression and outcome interpretation, and facilitates more predictable, iterative refinement.
arXiv Detail & Related papers (2026-01-27T04:01:53Z) - AI as Teammate or Tool? A Review of Human-AI Interaction in Decision Support [0.514825619161626]
Current AI systems remain largely passive due to an overreliance on explainability-centric designs. Transitioning AI to an active teammate requires adaptive, context-aware interactions.
arXiv Detail & Related papers (2026-01-26T19:18:50Z) - Neural Transparency: Mechanistic Interpretability Interfaces for Anticipating Model Behaviors for Personalized AI [9.383958408772694]
We introduce an interface that enables neural transparency by exposing language model internals during chatbot design. Our approach extracts behavioral trait vectors by computing differences in neural activations between contrastive system prompts that elicit opposing behaviors. This work offers a path for operationalizing interpretability for non-technical users, establishing a foundation for safer, more aligned human-AI interactions.
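The contrastive-activation recipe summarized above (difference of mean activations under opposing prompts) can be sketched generically. The function names and the random stand-in activations are assumptions; the paper's actual extraction pipeline may differ:

```python
import numpy as np

def trait_vector(acts_positive: np.ndarray, acts_negative: np.ndarray) -> np.ndarray:
    """Behavioral trait vector: mean activation under the trait-eliciting
    prompt minus mean activation under the opposing prompt."""
    return acts_positive.mean(axis=0) - acts_negative.mean(axis=0)

def trait_score(hidden_state: np.ndarray, vec: np.ndarray) -> float:
    """Project a hidden state onto the unit trait direction to gauge
    how strongly the behavior is expressed."""
    unit = vec / np.linalg.norm(vec)
    return float(hidden_state @ unit)

# Toy stand-in activations (hidden dim = 8); a real system would record
# these from a model layer under the two contrastive system prompts.
rng = np.random.default_rng(0)
pos = rng.normal(1.0, 0.1, size=(16, 8))   # e.g. "be sycophantic" prompt
neg = rng.normal(-1.0, 0.1, size=(16, 8))  # e.g. opposing prompt
vec = trait_vector(pos, neg)
print(trait_score(pos[0], vec) > trait_score(neg[0], vec))  # → True
```

Monitoring such projections while a chatbot runs is one way an interface could surface "how much of trait X is active" to a non-technical designer.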
arXiv Detail & Related papers (2025-10-31T20:03:52Z) - Generative Interfaces for Language Models [70.25765232527762]
We propose a paradigm in which large language models (LLMs) respond to user queries by proactively generating user interfaces (UIs). Our framework leverages structured interface-specific representations and iterative refinements to translate user queries into task-specific UIs. Results show that generative interfaces consistently outperform conversational ones, with up to a 72% improvement in human preference.
arXiv Detail & Related papers (2025-08-26T17:43:20Z) - Voice CMS: updating the knowledge base of a digital assistant through conversation [0.0]
We propose a solution based on a multi-agent LLM architecture and a voice user interface (VUI) designed to update the knowledge base of a digital assistant. Its usability is evaluated in comparison to a more traditional graphical content management system (CMS). The findings demonstrate that, while the overall usability of the VUI is rated lower than that of the graphical interface, it is already preferred by users for less complex tasks.
arXiv Detail & Related papers (2025-05-28T12:40:37Z) - Interactive Agents to Overcome Ambiguity in Software Engineering [61.40183840499932]
AI agents are increasingly being deployed to automate tasks, often based on ambiguous and underspecified user instructions. Making unwarranted assumptions and failing to ask clarifying questions can lead to suboptimal outcomes. We study the ability of LLM agents to handle ambiguous instructions in interactive code-generation settings by evaluating proprietary and open-weight models.
arXiv Detail & Related papers (2025-02-18T17:12:26Z) - Dynamic User Interface Generation for Enhanced Human-Computer Interaction Using Variational Autoencoders [4.1676654279172265]
This study presents a novel approach for intelligent user interaction interface generation and optimization, grounded in the variational autoencoder (VAE) model. The VAE-based approach significantly enhances the quality and precision of interface generation compared to other methods, including autoencoders (AE), generative adversarial networks (GAN), conditional GANs (cGAN), deep belief networks (DBN), and VAE-GAN.
arXiv Detail & Related papers (2024-12-19T04:37:47Z) - GUI Agents: A Survey [159.7656453000263]
Graphical User Interface (GUI) agents, powered by Large Foundation Models, have emerged as a transformative approach to automating human-computer interaction. Motivated by the growing interest in and fundamental importance of GUI agents, we provide a comprehensive survey that categorizes their benchmarks, evaluation metrics, architectures, and training methods.
arXiv Detail & Related papers (2024-12-18T04:48:28Z) - Collaborative Instance Object Navigation: Leveraging Uncertainty-Awareness to Minimize Human-Agent Dialogues [54.81155589931697]
Collaborative Instance object Navigation (CoIN) is a new task setting in which the agent actively resolves uncertainties about the target instance. We propose a novel training-free method, Agent-user Interaction with UncerTainty Awareness (AIUTA). First, upon object detection, a Self-Questioner model initiates a self-dialogue within the agent to obtain a complete and accurate observation description. An Interaction Trigger module then determines whether to ask the human a question, continue, or halt navigation.
arXiv Detail & Related papers (2024-12-02T08:16:38Z) - Survey of User Interface Design and Interaction Techniques in Generative AI Applications [79.55963742878684]
We aim to create a compendium of different user-interaction patterns that can be used as a reference for designers and developers alike.
We also strive to lower the entry barrier for those attempting to learn more about the design of generative AI applications.
arXiv Detail & Related papers (2024-10-28T23:10:06Z) - Rules Of Engagement: Levelling Up To Combat Unethical CUI Design [23.01296770233131]
We propose a simplified methodology for assessing interfaces along five dimensions drawn from prior research on so-called dark patterns.
Our approach yields a numeric score representing the manipulative nature of evaluated interfaces.
arXiv Detail & Related papers (2022-07-19T14:02:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.