Generative AI in Knowledge Work: Perception, Usefulness, and Acceptance of Microsoft 365 Copilot
- URL: http://arxiv.org/abs/2602.18576v1
- Date: Fri, 20 Feb 2026 19:32:58 GMT
- Title: Generative AI in Knowledge Work: Perception, Usefulness, and Acceptance of Microsoft 365 Copilot
- Authors: Carsten F. Schmidt, Sophie Petzolt, Wolfgang Beinhauer, Ingo Weber, Stefan Langer
- Abstract summary: We assess usefulness, ease of use, output quality and reliability, and usefulness for typical knowledge-work activities. Copilot is widely viewed as user-friendly and technically reliable, with greatest added value for clearly structured, text-based tasks.
- Score: 1.3362350462539474
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The study analyzes the introduction of Microsoft 365 Copilot in a non-university research organization using a repeated cross-sectional employee survey. We assess perceived usefulness, ease of use, output quality and reliability, and suitability for typical knowledge-work activities. Administrative staff report higher usefulness and reliability, whereas scientific staff develop more positive assessments over time, especially regarding productivity and workload reduction. Copilot is widely viewed as user-friendly and technically reliable, with the greatest added value for clearly structured, text-based tasks. The findings highlight learning and routinization effects when embedding generative AI into work processes and stress the need for context-sensitive implementation, role-specific training, and governance to foster sustainable acceptance of generative AI in knowledge-intensive organizations.
Related papers
- Boosting Deep Reinforcement Learning with Semantic Knowledge for Robotic Manipulators [2.6913398550088483]
Deep Reinforcement Learning (DRL) is a powerful framework for solving complex sequential decision-making problems. We propose a novel integration of DRL with semantic knowledge in the form of Knowledge Graph Embeddings (KGEs). Our architecture combines KGEs with visual observations, enabling the agent to exploit environmental knowledge during training.
arXiv Detail & Related papers (2026-01-23T16:14:28Z) - A Comprehensive Empirical Evaluation of Agent Frameworks on Code-centric Software Engineering Tasks [14.762911285395047]
We evaluate seven general-purpose agent frameworks across three representative code-centric tasks. Our findings reveal distinct capability patterns and trade-offs among the evaluated frameworks. For overhead, software development incurs the highest monetary cost, while GPTswarm remains the most cost-efficient.
arXiv Detail & Related papers (2025-11-02T09:46:59Z) - Enabling Self-Improving Agents to Learn at Test Time With Human-In-The-Loop Guidance [58.21767225794469]
Large language model (LLM) agents often struggle in environments where rules and required domain knowledge frequently change. We propose the Adaptive Reflective Interactive Agent (ARIA) to continuously learn updated domain knowledge at test time. ARIA is deployed within TikTok Pay, serving over 150 million monthly active users.
arXiv Detail & Related papers (2025-07-23T02:12:32Z) - Active Learning Methods for Efficient Data Utilization and Model Performance Enhancement [5.4044723481768235]
This paper gives a detailed overview of Active Learning (AL), a strategy in machine learning that helps models achieve better performance using fewer labeled examples. It introduces the basic concepts of AL and discusses how it is used in various fields such as computer vision, natural language processing, transfer learning, and real-world applications.
arXiv Detail & Related papers (2025-04-21T20:42:13Z) - Agentic Knowledgeable Self-awareness [79.25908923383776]
KnowSelf is a data-centric approach that equips agents with human-like knowledgeable self-awareness. Our experiments demonstrate that KnowSelf can outperform various strong baselines on different tasks and models with minimal use of external knowledge.
arXiv Detail & Related papers (2025-04-04T16:03:38Z) - Interactive Agents to Overcome Ambiguity in Software Engineering [61.40183840499932]
AI agents are increasingly being deployed to automate tasks, often based on ambiguous and underspecified user instructions. Making unwarranted assumptions and failing to ask clarifying questions can lead to suboptimal outcomes. We study the ability of LLM agents to handle ambiguous instructions in interactive code generation settings by evaluating the performance of proprietary and open-weight models.
arXiv Detail & Related papers (2025-02-18T17:12:26Z) - Towards Decoding Developer Cognition in the Age of AI Assistants [9.887133861477233]
We propose a controlled observational study combining physiological measurements (EEG and eye tracking) with interaction data to examine developers' use of AI-assisted programming tools. We will recruit professional developers to complete programming tasks both with and without AI assistance while measuring their cognitive load and task completion time.
arXiv Detail & Related papers (2025-01-05T23:25:21Z) - KBAlign: Efficient Self Adaptation on Specific Knowledge Bases [73.34893326181046]
We present KBAlign, a self-supervised framework that enhances RAG systems through efficient model adaptation. Our key insight is to leverage the model's intrinsic capabilities for knowledge alignment through two innovative mechanisms. Experiments demonstrate that KBAlign can achieve 90% of the performance gain obtained through GPT-4-supervised adaptation.
arXiv Detail & Related papers (2024-11-22T08:21:03Z) - Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators [78.63553017938911]
Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks.
However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge.
We introduce CONNER, designed to evaluate generated knowledge from six important perspectives.
arXiv Detail & Related papers (2023-10-11T08:22:37Z) - Learning and Retrieval from Prior Data for Skill-based Imitation Learning [47.59794569496233]
We develop a skill-based imitation learning framework that extracts temporally extended sensorimotor skills from prior data.
We identify several key design choices that significantly improve performance on novel tasks.
arXiv Detail & Related papers (2022-10-20T17:34:59Z) - Don't Start From Scratch: Leveraging Prior Data to Automate Robotic Reinforcement Learning [70.70104870417784]
Reinforcement learning (RL) algorithms hold the promise of enabling autonomous skill acquisition for robotic systems.
In practice, real-world robotic RL typically requires time-consuming data collection and frequent human intervention to reset the environment.
In this work, we study how these challenges can be tackled by effective utilization of diverse offline datasets collected from previously seen tasks.
arXiv Detail & Related papers (2022-07-11T08:31:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.