Do It For Me vs. Do It With Me: Investigating User Perceptions of Different Paradigms of Automation in Copilots for Feature-Rich Software
- URL: http://arxiv.org/abs/2504.15549v1
- Date: Tue, 22 Apr 2025 03:11:10 GMT
- Title: Do It For Me vs. Do It With Me: Investigating User Perceptions of Different Paradigms of Automation in Copilots for Feature-Rich Software
- Authors: Anjali Khurana, Xiaotian Su, April Yi Wang, Parmit K. Chilana
- Abstract summary: Large Language Model (LLM)-based in-application assistants, or copilots, can automate software tasks. We investigated two automation paradigms by designing and implementing a fully automated copilot and a semi-automated copilot. GuidedCopilot automates trivial steps while offering step-by-step visual guidance.
- Score: 9.881955481813465
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large Language Model (LLM)-based in-application assistants, or copilots, can automate software tasks, but users often prefer learning by doing, raising questions about the optimal level of automation for an effective user experience. We investigated two automation paradigms by designing and implementing a fully automated copilot (AutoCopilot) and a semi-automated copilot (GuidedCopilot) that automates trivial steps while offering step-by-step visual guidance. In a user study (N=20) across data analysis and visual design tasks, GuidedCopilot outperformed AutoCopilot in user control, software utility, and learnability, especially for exploratory and creative tasks, while AutoCopilot saved time for simpler visual tasks. A follow-up design exploration (N=10) enhanced GuidedCopilot with task-and state-aware features, including in-context preview clips and adaptive instructions. Our findings highlight the critical role of user control and tailored guidance in designing the next generation of copilots that enhance productivity, support diverse skill levels, and foster deeper software engagement.
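The contrast between the two paradigms can be illustrated with a minimal sketch. This is not the authors' implementation; the class and function names (`Step`, `auto_copilot`, `guided_copilot`) and the trivial/non-trivial split are illustrative assumptions based on the abstract's description.

```python
# Hypothetical sketch of the two automation paradigms described in the abstract.
# A fully automated copilot executes every step; a semi-automated one executes
# only trivial steps and turns the rest into step-by-step guidance.
from dataclasses import dataclass

@dataclass
class Step:
    description: str
    trivial: bool  # e.g., opening a menu vs. a creative design choice

def auto_copilot(steps):
    """'Do it for me': the copilot executes every step itself."""
    return [f"EXECUTED: {s.description}" for s in steps]

def guided_copilot(steps):
    """'Do it with me': trivial steps are automated; the rest become
    guidance so the user stays in control and keeps learning."""
    return [
        f"EXECUTED: {s.description}" if s.trivial
        else f"GUIDE USER: {s.description}"
        for s in steps
    ]

task = [
    Step("open the chart-formatting panel", trivial=True),
    Step("choose a color palette for the visualization", trivial=False),
]
```

Under this sketch, the creative step is the one GuidedCopilot would hand back to the user, which is consistent with the study's finding that user control mattered most for exploratory and creative tasks.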
Related papers
- A Human Centric Requirements Engineering Framework for Assessing Github Copilot Output [0.0]
GitHub Copilot introduces new challenges in how these software tools address human needs. I analyzed GitHub Copilot's interaction with users through its chat interface. I established a human-centered requirements framework with clear metrics to evaluate these qualities.
arXiv Detail & Related papers (2025-08-05T21:33:23Z) - Code with Me or for Me? How Increasing AI Automation Transforms Developer Workflows [66.1850490474361]
We conduct the first academic study to explore developer interactions with coding agents. We evaluate two leading copilot and agentic coding assistants, GitHub Copilot and OpenHands. Our results show agents have the potential to assist developers in ways that surpass copilots.
arXiv Detail & Related papers (2025-07-10T20:12:54Z) - ComfyUI-Copilot: An Intelligent Assistant for Automated Workflow Development [45.78818581469798]
ComfyUI-Copilot is a large language model-powered plugin for ComfyUI. It offers intelligent node and model recommendations, along with automated one-click workflow construction. We validate the effectiveness of ComfyUI-Copilot through both offline quantitative evaluations and online user feedback.
arXiv Detail & Related papers (2025-06-05T13:20:50Z) - Automatic Programming: Large Language Models and Beyond [48.34544922560503]
We study concerns around code quality, security and related issues of programmer responsibility.
We discuss how advances in software engineering can enable automatic programming.
We conclude with a forward looking view, focusing on the programming environment of the near future.
arXiv Detail & Related papers (2024-05-03T16:19:24Z) - AutoAct: Automatic Agent Learning from Scratch for QA via Self-Planning [54.47116888545878]
AutoAct is an automatic agent learning framework for QA.
It does not rely on large-scale annotated data and synthetic planning trajectories from closed-source models.
arXiv Detail & Related papers (2024-01-10T16:57:24Z) - VoCopilot: Voice-Activated Tracking of Everyday Interactions [1.0435741631709405]
This paper presents our efforts to design a new vocal tracking system we call VoCopilot.
VoCopilot is an end-to-end system centered around an energy-efficient acoustic hardware and firmware combined with advanced machine learning models.
arXiv Detail & Related papers (2023-12-15T23:46:52Z) - Demystifying Practices, Challenges and Expected Features of Using GitHub Copilot [3.655281304961642]
We conducted an empirical study by collecting and analyzing the data from Stack Overflow (SO) and GitHub Discussions.
We identified the programming languages, technologies used with Copilot, functions implemented, benefits, limitations, and challenges when using Copilot.
Our results suggest that using Copilot is like a double-edged sword, which requires developers to carefully consider various aspects when deciding whether or not to use it.
arXiv Detail & Related papers (2023-09-11T16:39:37Z) - Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow [49.724842920942024]
Industries such as finance, meteorology, and energy generate vast amounts of data daily.
We propose Data-Copilot, a data analysis agent that autonomously performs querying, processing, and visualization of massive data tailored to diverse human requests.
arXiv Detail & Related papers (2023-06-12T16:12:56Z) - SheetCopilot: Bringing Software Productivity to the Next Level through Large Language Models [60.171444066848856]
We propose a SheetCopilot agent that takes a natural language task description and controls the spreadsheet to fulfill the requirements.
We curate a representative dataset containing 221 spreadsheet control tasks and establish a fully automated evaluation pipeline.
Our SheetCopilot correctly completes 44.3% of tasks for a single generation, outperforming the strong code generation baseline by a wide margin.
arXiv Detail & Related papers (2023-05-30T17:59:30Z) - AutoML-GPT: Automatic Machine Learning with GPT [74.30699827690596]
We propose developing task-oriented prompts and automatically utilizing large language models (LLMs) to automate the training pipeline.
We present AutoML-GPT, which employs GPT as the bridge to diverse AI models and dynamically trains models with optimized hyperparameters.
This approach achieves remarkable results in computer vision, natural language processing, and other challenging areas.
arXiv Detail & Related papers (2023-05-04T02:09:43Z) - Visual Exemplar Driven Task-Prompting for Unified Perception in Autonomous Driving [100.3848723827869]
We present an effective multi-task framework, VE-Prompt, which introduces visual exemplars via task-specific prompting.
Specifically, we generate visual exemplars based on bounding boxes and color-based markers, which provide accurate visual appearances of target categories.
We bridge transformer-based encoders and convolutional layers for efficient and accurate unified perception in autonomous driving.
arXiv Detail & Related papers (2023-03-03T08:54:06Z) - Reading Between the Lines: Modeling User Behavior and Costs in AI-Assisted Programming [28.254978977288868]
We studied GitHub Copilot, a code-recommendation system used by millions of programmers daily.
We developed CUPS, a taxonomy of common programmer activities when interacting with Copilot.
Our insights reveal how programmers interact with Copilot and motivate new interface designs and metrics.
arXiv Detail & Related papers (2022-10-25T20:01:15Z) - Building Mental Models through Preview of Autopilot Behaviors [20.664610032249037]
We introduce our framework, called AutoPreview, to enable humans to preview autopilot behaviors prior to direct interaction with the vehicle.
Our results suggest that the AutoPreview framework does, in fact, help users understand autopilot behavior and develop appropriate mental models.
arXiv Detail & Related papers (2021-04-12T13:46:55Z) - AutoPreview: A Framework for Autopilot Behavior Understanding [16.177399201198636]
We propose a simple but effective framework, AutoPreview, to enable consumers to preview a target autopilot's potential actions.
For a given target autopilot, we design a delegate policy that replicates the target autopilot behavior with explainable action representations.
We conduct a pilot study to investigate whether or not AutoPreview provides deeper understanding about autopilot behavior when experiencing a new autopilot policy.
arXiv Detail & Related papers (2021-02-25T17:40:59Z) - Induction and Exploitation of Subgoal Automata for Reinforcement Learning [75.55324974788475]
We present ISA, an approach for learning and exploiting subgoals in episodic reinforcement learning (RL) tasks.
ISA interleaves reinforcement learning with the induction of a subgoal automaton, an automaton whose edges are labeled by the task's subgoals.
A subgoal automaton also consists of two special states: a state indicating the successful completion of the task, and a state indicating that the task has finished without succeeding.
arXiv Detail & Related papers (2020-09-08T16:42:55Z)
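The subgoal automaton described in the ISA entry can be sketched as a small transition table with accepting and rejecting states. This is an illustrative example only (the key/door/lava task and all names are hypothetical); ISA's actual automaton-induction algorithm is not reproduced here.

```python
# Minimal sketch of a subgoal automaton: edges are labeled by subgoals, and two
# special states mark task success and task failure. Example task (hypothetical):
# pick up a key, then open a door; touching lava ends the episode in failure.
ACCEPT, REJECT = "accept", "reject"

# Transitions: (state, observed subgoal) -> next state.
transitions = {
    ("start", "got_key"): "has_key",
    ("start", "touched_lava"): REJECT,
    ("has_key", "opened_door"): ACCEPT,
    ("has_key", "touched_lava"): REJECT,
}

def run(subgoal_sequence, start="start"):
    """Advance through the automaton on observed subgoals; subgoals with no
    outgoing edge from the current state leave the state unchanged."""
    state = start
    for subgoal in subgoal_sequence:
        state = transitions.get((state, subgoal), state)
        if state in (ACCEPT, REJECT):
            break
    return state

# run(["got_key", "opened_door"]) -> "accept"
# run(["touched_lava"])           -> "reject"
```

During learning, such an automaton lets the agent decompose an episodic task into subgoal-conditioned phases rather than treating the reward as a black box.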
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.