Factory Operators' Perspectives on Cognitive Assistants for Knowledge Sharing: Challenges, Risks, and Impact on Work
- URL: http://arxiv.org/abs/2409.20192v1
- Date: Mon, 30 Sep 2024 11:08:27 GMT
- Title: Factory Operators' Perspectives on Cognitive Assistants for Knowledge Sharing: Challenges, Risks, and Impact on Work
- Authors: Samuel Kernan Freire, Tianhao He, Chaofan Wang, Evangelos Niforatos, Alessandro Bozzon
- Abstract summary: This study investigates the real-world impact of deploying Cognitive Assistants (CAs) in factories.
Our results indicate that while CAs have the potential to significantly improve efficiency through knowledge sharing, they also introduce concerns around workplace surveillance.
Our findings stress the importance of addressing privacy, knowledge contribution burdens, and tensions between factory operators and their managers.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the shift towards human-centered manufacturing, our two-year longitudinal study investigates the real-world impact of deploying Cognitive Assistants (CAs) in factories. The CAs were designed to facilitate knowledge sharing among factory operators. Our investigation focused on smartphone-based voice assistants and LLM-powered chatbots, examining their usability and utility in a real-world factory setting. Based on the qualitative feedback we collected during the deployments of CAs at the factories, we conducted a thematic analysis to investigate the perceptions, challenges, and overall impact on workflow and knowledge sharing. Our results indicate that while CAs have the potential to significantly improve efficiency through knowledge sharing and quicker resolution of production issues, they also introduce concerns around workplace surveillance, the types of knowledge that can be shared, and shortcomings compared to human-to-human knowledge sharing. Additionally, our findings stress the importance of addressing privacy, knowledge contribution burdens, and tensions between factory operators and their managers.
Related papers
- From Pre-training Corpora to Large Language Models: What Factors Influence LLM Performance in Causal Discovery Tasks?
This study explores the factors influencing the performance of Large Language Models (LLMs) in causal discovery tasks.
A higher frequency of causal mentions correlates with better model performance, suggesting that extensive exposure to causal information during training enhances the models' causal discovery capabilities.
arXiv Detail & Related papers (2024-07-29T01:45:05Z)
- An Empirical Exploration of Trust Dynamics in LLM Supply Chains
We argue for broadening the scope of studies addressing trust in AI by accounting for the complex and dynamic supply chains from which AI systems result.
Our work reveals additional types of trustors and trustees and new factors impacting their trust relationships.
arXiv Detail & Related papers (2024-05-25T17:37:56Z)
- WESE: Weak Exploration to Strong Exploitation for LLM Agents
This paper proposes a novel approach, Weak Exploration to Strong Exploitation (WESE), to enhance LLM agents in solving open-world interactive tasks.
WESE involves decoupling the exploration and exploitation process, employing a cost-effective weak agent to perform exploration tasks for global knowledge.
A knowledge graph-based strategy is then introduced to store the acquired knowledge and extract task-relevant knowledge, enhancing the stronger agent in success rate and efficiency for the exploitation task.
arXiv Detail & Related papers (2024-04-11T03:31:54Z)
- Knowledge Sharing in Manufacturing using Large Language Models: User Evaluation and Model Benchmarking
A Large Language Model (LLM)-based system is designed to retrieve information from factory documentation and knowledge shared by expert operators.
The system aims to efficiently answer operators' queries and facilitate the sharing of new knowledge.
arXiv Detail & Related papers (2024-01-10T14:53:18Z)
- Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators
Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks.
However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge.
We introduce CONNER, designed to evaluate generated knowledge from six important perspectives.
arXiv Detail & Related papers (2023-10-11T08:22:37Z)
- Assessing Trust in Construction AI-Powered Collaborative Robots using Structural Equation Modeling
Safety and reliability are significant factors for the adoption of AI-powered cobots in construction.
Fear of being replaced by cobots can have a substantial effect on the mental health of the affected workers.
A lower error rate in jobs involving cobots, safety measurements, and security of data collected by cobots significantly impact reliability.
The transparency of cobots' inner workings can benefit accuracy, robustness, security, privacy, and communication.
arXiv Detail & Related papers (2023-08-28T16:39:22Z)
- Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation
We show that large language models (LLMs) possess unwavering confidence in their capabilities to respond to questions.
Retrieval augmentation proves to be an effective approach in enhancing LLMs' awareness of knowledge boundaries.
We also find that LLMs have a propensity to rely on the provided retrieval results when formulating answers.
arXiv Detail & Related papers (2023-07-20T16:46:10Z)
- Trustworthy, responsible, ethical AI in manufacturing and supply chains: synthesis and emerging research questions
We explore the applicability of responsible, ethical, and trustworthy AI within the context of manufacturing.
We then use a broadened adaptation of a machine learning lifecycle to discuss, through the use of illustrative examples, how each step may result in a given AI trustworthiness concern.
arXiv Detail & Related papers (2023-05-19T10:43:06Z)
- Pitfalls in Effective Knowledge Management: Insights from an International Information Technology Organization
This study aims to identify hindering factors that prevent individuals from effectively sharing and managing knowledge.
Several hindering factors were identified, grouped into personal social topics, organizational social topics, technical topics, environmental topics, and interrelated social and technical topics.
The presented recommendations for mitigating these hindering factors are focused on improving employees' actions, such as offering training and guidelines to follow.
arXiv Detail & Related papers (2023-04-16T09:45:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.