Generative AI in Knowledge Work: Design Implications for Data Navigation and Decision-Making
- URL: http://arxiv.org/abs/2503.18419v1
- Date: Mon, 24 Mar 2025 08:02:44 GMT
- Title: Generative AI in Knowledge Work: Design Implications for Data Navigation and Decision-Making
- Authors: Bhada Yun, Dana Feng, Ace S. Chen, Afshin Nikzad, Niloufar Salehi
- Abstract summary: We developed Yodeai, an AI-enabled system, to explore both the opportunities and limitations of AI in knowledge work. We identified three key requirements for Generative AI in knowledge work: adaptable user control, transparent collaboration mechanisms, and the ability to integrate background knowledge with external information.
- Score: 6.460380734209551
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Our study of 20 knowledge workers revealed a common challenge: the difficulty of synthesizing unstructured information scattered across multiple platforms to make informed decisions. Drawing on their vision of an ideal knowledge synthesis tool, we developed Yodeai, an AI-enabled system, to explore both the opportunities and limitations of AI in knowledge work. Through a user study with 16 product managers, we identified three key requirements for Generative AI in knowledge work: adaptable user control, transparent collaboration mechanisms, and the ability to integrate background knowledge with external information. However, we also found significant limitations, including overreliance on AI, user isolation, and contextual factors outside the AI's reach. As AI tools become increasingly prevalent in professional settings, we propose design principles that emphasize adaptability to diverse workflows, accountability in personal and collaborative contexts, and context-aware interoperability to guide the development of human-centered AI systems for product managers and knowledge workers.
Related papers
- AI Automatons: AI Systems Intended to Imitate Humans [54.19152688545896]
There is a growing proliferation of AI systems designed to mimic people's behavior, work, abilities, likenesses, or humanness.
The research, design, deployment, and availability of such AI systems have prompted growing concerns about a wide range of possible legal, ethical, and other social impacts.
arXiv Detail & Related papers (2025-03-04T03:55:38Z) - Empowering AIOps: Leveraging Large Language Models for IT Operations Management [0.6752538702870792]
We aim to integrate traditional predictive machine learning models with generative AI technologies like Large Language Models (LLMs).
LLMs enable organizations to process and analyze vast amounts of unstructured data, such as system logs, incident reports, and technical documentation.
We propose innovative methods to tackle persistent challenges in AIOps and enhance the capabilities of IT operations management.
arXiv Detail & Related papers (2025-01-21T19:17:46Z) - Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that the shortcomings of current AI systems stem from one overarching failure: they lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z) - Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z) - The AI-DEC: A Card-based Design Method for User-centered AI Explanations [20.658833770179903]
We develop a design method, called AI-DEC, that defines four dimensions of AI explanations.
We evaluate this method through co-design sessions with workers in healthcare, finance, and management industries.
We discuss the implications of using the AI-DEC for the user-centered design of AI explanations in real-world systems.
arXiv Detail & Related papers (2024-05-26T22:18:38Z) - In-IDE Human-AI Experience in the Era of Large Language Models: A Literature Review [2.6703221234079946]
The study of in-IDE Human-AI Experience is critical in understanding how these AI tools are transforming the software development process.
We conducted a literature review to study the current state of in-IDE Human-AI Experience research.
arXiv Detail & Related papers (2024-01-19T14:55:51Z) - Synergizing Human-AI Agency: A Guide of 23 Heuristics for Service Co-Creation with LLM-Based Agents [16.560339524456268]
This study serves as a primer for interested service providers to determine whether and how Large Language Model (LLM) technology can be integrated for their practitioners and the broader community.
We investigate the mutual learning journey of non-AI experts and AI through CoAGent, a service co-creation tool with LLM-based agents.
arXiv Detail & Related papers (2023-10-23T16:11:48Z) - Agency and legibility for artists through Experiential AI [12.941266914933454]
Experiential AI is an emerging research field that addresses the challenge of making AI tangible and explicit.
We report on an empirical case study of an experiential AI system designed for creative data exploration.
We discuss how experiential AI can increase legibility and agency for artists.
arXiv Detail & Related papers (2023-06-04T11:00:07Z) - Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z) - Future Trends for Human-AI Collaboration: A Comprehensive Taxonomy of AI/AGI Using Multiple Intelligences and Learning Styles [95.58955174499371]
We describe various aspects of multiple human intelligences and learning styles, which may affect a variety of AI problem domains.
Future AI systems will be able not only to communicate with human users and each other, but also to efficiently exchange knowledge and wisdom.
arXiv Detail & Related papers (2020-08-07T21:00:13Z)