LLM-based Interaction for Content Generation: A Case Study on the
Perception of Employees in an IT department
- URL: http://arxiv.org/abs/2304.09064v1
- Date: Tue, 18 Apr 2023 15:35:43 GMT
- Title: LLM-based Interaction for Content Generation: A Case Study on the
Perception of Employees in an IT department
- Authors: Alexandre Agossah and Frédérique Krupa and Matthieu Perreira Da
Silva and Patrick Le Callet
- Abstract summary: This paper presents a questionnaire survey to identify the intention to use generative tools by employees of an IT company.
Our results indicate a rather average acceptability of generative tools, although the more useful the tool is perceived to be, the higher the intention seems to be.
Our analyses suggest that the frequency of use of generative tools is likely to be a key factor in understanding how employees perceive these tools in the context of their work.
- Score: 85.1523466539595
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, AI has seen many advances in the field of NLP. This has
led to the emergence of LLMs, such as the now famous GPT-3.5, which
revolutionise the way humans can access or generate content. Current studies on
LLM-based generative tools are mainly interested in the performance of such
tools in generating relevant content (code, text or images). However, ethical
concerns related to the design and use of generative tools seem to be growing,
impacting their public acceptability for specific tasks. This paper presents a
questionnaire survey to identify the intention to use generative tools by
employees of an IT company in the context of their work. This survey is based
on empirical models measuring intention to use (TAM by Davis, 1989, and UTAUT2
by Venkatesh et al., 2008). Our results indicate a rather average
acceptability of generative tools, although the more useful the tool is
perceived to be, the higher the intention to use seems to be. Furthermore, our
analyses suggest that the frequency of use of generative tools is likely to be
a key factor in understanding how employees perceive these tools in the context
of their work. Following on from this work, we plan to investigate the nature
of the requests that may be made to these tools by specific audiences.
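
The survey instruments cited above (TAM, UTAUT2) typically measure each construct, such as perceived usefulness or intention to use, as the mean of several Likert-scale items. The following Python sketch illustrates that scoring step only; the item names and values are invented for illustration and do not come from the paper:

```python
# Hypothetical sketch of TAM/UTAUT2-style construct scoring: each construct
# is the mean of several Likert items (here on a 1-7 scale). All item names
# and responses below are invented, not taken from the survey itself.

from statistics import mean

# One respondent's answers (hypothetical items per construct).
responses = {
    "perceived_usefulness": [5, 6, 5],   # e.g. "The tool improves my work"
    "perceived_ease_of_use": [4, 4, 5],  # e.g. "The tool is easy to use"
    "intention_to_use": [5, 5, 6],       # e.g. "I intend to use the tool"
}

# Score each construct as the mean of its items.
scores = {construct: mean(items) for construct, items in responses.items()}

for construct, score in scores.items():
    print(f"{construct}: {score:.2f}")
```

In practice such construct scores are then fed into regression or structural-equation models to relate, for example, perceived usefulness to intention to use.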
Related papers
- Tool Learning with Large Language Models: A Survey [60.733557487886635]
Tool learning with large language models (LLMs) has emerged as a promising paradigm for augmenting the capabilities of LLMs to tackle highly complex problems.
Despite growing attention and rapid advancements in this field, the existing literature remains fragmented and lacks systematic organization.
arXiv Detail & Related papers (2024-05-28T08:01:26Z)
- COLT: Towards Completeness-Oriented Tool Retrieval for Large Language Models [60.733557487886635]
We propose a novel model-agnostic COllaborative Learning-based Tool Retrieval approach, COLT.
COLT captures semantic similarities between user queries and tool descriptions.
It also takes into account the collaborative information of tools.
arXiv Detail & Related papers (2024-05-25T06:41:23Z)
- What Are Tools Anyway? A Survey from the Language Model Perspective [67.18843218893416]
Language models (LMs) are powerful yet used mostly for text generation tasks.
We provide a unified definition of tools as external programs used by LMs.
We empirically study the efficiency of various tooling methods.
arXiv Detail & Related papers (2024-03-18T17:20:07Z)
- Look Before You Leap: Towards Decision-Aware and Generalizable Tool-Usage for Large Language Models [28.19932548630398]
We propose a decision-aware and generalizable tool-usage framework (DEER)
Specifically, we first construct the tool-usage samples with multiple decision branches via an automatic generation pipeline.
Our proposed DEER is effective and significantly outperforms baselines across various datasets.
arXiv Detail & Related papers (2024-02-26T16:11:03Z)
- Planning, Creation, Usage: Benchmarking LLMs for Comprehensive Tool Utilization in Real-World Complex Scenarios [93.68764280953624]
UltraTool is a novel benchmark designed to improve and evaluate Large Language Models' ability in tool utilization.
It emphasizes real-world complexities, demanding accurate, multi-step planning for effective problem-solving.
A key feature of UltraTool is its independent evaluation of planning with natural language, which happens before tool usage.
arXiv Detail & Related papers (2024-01-30T16:52:56Z)
- AI and Generative AI for Research Discovery and Summarization [3.8601741392210434]
AI and generative AI tools have burst onto the scene this year, creating incredible opportunities to increase work productivity and improve our lives.
One area where these tools can make a substantial impact is research discovery and summarization.
We review the developments in AI and generative AI for research discovery and summarization, and propose directions where these types of tools are likely to head in the future.
arXiv Detail & Related papers (2024-01-08T18:42:55Z)
- LLMs for Science: Usage for Code Generation and Data Analysis [0.07499722271664144]
Large language models (LLMs) have been touted to enable increased productivity in many areas of today's work life.
It is still unclear how the potential of LLMs will materialise in research practice.
arXiv Detail & Related papers (2023-11-28T12:29:33Z)
- MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use [82.24774504584066]
Large language models (LLMs) have garnered significant attention due to their impressive natural language processing (NLP) capabilities.
We introduce MetaTool, a benchmark designed to evaluate whether LLMs have tool usage awareness and can correctly choose tools.
We conduct experiments involving eight popular LLMs and find that the majority of them still struggle to effectively select tools.
arXiv Detail & Related papers (2023-10-04T19:39:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.