HR-Agent: A Task-Oriented Dialogue (TOD) LLM Agent Tailored for HR Applications
- URL: http://arxiv.org/abs/2410.11239v1
- Date: Tue, 15 Oct 2024 03:51:08 GMT
- Title: HR-Agent: A Task-Oriented Dialogue (TOD) LLM Agent Tailored for HR Applications
- Authors: Weijie Xu, Jay Desai, Fanyou Wu, Josef Valvoda, Srinivasan H. Sengamedu
- Abstract summary: We present HR-Agent, an efficient, confidential, and HR-specific LLM-based task-oriented dialogue system tailored for automating repetitive HR processes.
Since conversation data is not sent to an LLM during inference, the system preserves the confidentiality required in HR-related tasks.
- Score: 10.383829270485247
- License:
- Abstract: Recent LLM (Large Language Model) advancements benefit many fields such as education and finance, but HR involves hundreds of repetitive processes, such as access requests, medical claim filing, and time-off submissions, which remain unaddressed. We relate these tasks to LLM agents, which have already addressed tasks such as writing assistance and customer support. We present HR-Agent, an efficient, confidential, and HR-specific LLM-based task-oriented dialogue system tailored for automating repetitive HR processes such as medical claims and access requests. Since conversation data is not sent to an LLM during inference, the system preserves the confidentiality required in HR-related tasks.
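The abstract's key architectural claim is that no conversation text is sent to an LLM at inference time. Below is a minimal, illustrative sketch (not the authors' implementation) of how a confidentiality-preserving slot-filling loop could look, assuming task schemas are authored offline and slots are filled at run time by purely local pattern matching; all names, fields, and patterns are hypothetical.

```python
import re
from dataclasses import dataclass, field

@dataclass
class TaskSchema:
    """Slots for one HR process; the schema could be authored offline with an LLM."""
    name: str
    slots: dict       # slot name -> question to ask the employee
    patterns: dict    # slot name -> regex used locally at inference time
    filled: dict = field(default_factory=dict)

def extract_slot(schema, slot, utterance):
    """Fill a slot with a local regex match; no text leaves the machine."""
    m = re.search(schema.patterns[slot], utterance, flags=re.IGNORECASE)
    if m:
        schema.filled[slot] = m.group(1)
    return slot in schema.filled

def run_dialogue(schema, get_user_input):
    for slot, question in schema.slots.items():
        while slot not in schema.filled:
            answer = get_user_input(question)
            if not extract_slot(schema, slot, answer):
                question = f"Sorry, I couldn't read that. {schema.slots[slot]}"
    return schema.filled

if __name__ == "__main__":
    claim = TaskSchema(
        name="medical_claim",
        slots={"amount": "What is the claim amount?",
               "date": "When did the expense occur (YYYY-MM-DD)?"},
        patterns={"amount": r"\$?([0-9]+(?:\.[0-9]{2})?)",
                  "date": r"([0-9]{4}-[0-9]{2}-[0-9]{2})"},
    )
    scripted = iter(["It was $120.50", "on 2024-10-01"])
    print(run_dialogue(claim, lambda q: next(scripted)))
```

The only point of the sketch is the data flow: every user utterance is parsed locally, so nothing needs to leave the machine during the dialogue.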
Related papers
- Beyond-RAG: Question Identification and Answer Generation in Real-Time Conversations [0.0]
In customer contact centers, human agents often struggle with long average handling times (AHT).
We propose a decision support system that can look beyond RAG by first identifying customer questions in real time.
If the query matches an FAQ, the system retrieves the answer directly from the FAQ database; otherwise, it generates answers via RAG.
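A minimal sketch of the routing logic described above, assuming a toy FAQ store, a string-similarity matcher, and a stand-in RAG callable; none of these names come from the paper.

```python
from difflib import SequenceMatcher

# Hypothetical FAQ store: question -> canned answer.
FAQ = {
    "how do i reset my password": "Use the self-service portal under Account > Reset.",
    "what are your support hours": "Support is available 24/7 via chat.",
}

def faq_lookup(query, threshold=0.8):
    """Return the FAQ answer if the query is close enough to a known question."""
    best_q, best_score = None, 0.0
    for q in FAQ:
        score = SequenceMatcher(None, query.lower(), q).ratio()
        if score > best_score:
            best_q, best_score = q, score
    return FAQ[best_q] if best_score >= threshold else None

def answer(query, rag_pipeline):
    """Route: FAQ hit -> direct answer; otherwise fall back to RAG."""
    hit = faq_lookup(query)
    return hit if hit is not None else rag_pipeline(query)

# The lambda stands in for a real retrieval-augmented generation pipeline.
print(answer("How do I reset my password?", rag_pipeline=lambda q: f"[RAG answer for: {q}]"))
```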
arXiv Detail & Related papers (2024-10-14T04:06:22Z)
- OfficeBench: Benchmarking Language Agents across Multiple Applications for Office Automation [51.27062359412488]
Office automation significantly enhances human productivity by automatically finishing routine tasks in the workflow.
We introduce OfficeBench, one of the first office automation benchmarks for evaluating current LLM agents' capability to address office tasks in realistic office workflows.
Applying our customized evaluation methods on each task, we find that GPT-4 Omni achieves the highest pass rate of 47.00%, demonstrating decent performance in handling office tasks.
arXiv Detail & Related papers (2024-07-26T19:27:17Z)
- Hello Again! LLM-powered Personalized Agent for Long-term Dialogue [63.65128176360345]
We introduce a model-agnostic framework, the Long-term Dialogue Agent (LD-Agent).
It incorporates three independently tunable modules dedicated to event perception, persona extraction, and response generation.
The effectiveness, generality, and cross-domain capabilities of LD-Agent are empirically demonstrated.
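A hedged sketch of what such a three-module split could look like in code; the module boundaries mirror the summary above, but the composition and stub implementations are illustrative, not the paper's.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class LongTermDialogueAgent:
    """Three independently replaceable modules, mirroring the LD-Agent split."""
    perceive_events: Callable[[List[str]], List[str]]   # summarize past sessions
    extract_persona: Callable[[List[str]], dict]        # distill user traits
    generate_response: Callable[[str, List[str], dict], str]
    history: List[str] = field(default_factory=list)

    def respond(self, user_turn: str) -> str:
        events = self.perceive_events(self.history)
        persona = self.extract_persona(self.history)
        reply = self.generate_response(user_turn, events, persona)
        self.history.extend([f"user: {user_turn}", f"agent: {reply}"])
        return reply

# Toy stand-ins; in practice each module could be an independently tuned model.
agent = LongTermDialogueAgent(
    perceive_events=lambda h: h[-4:],
    extract_persona=lambda h: {"likes_hiking": any("hiking" in t for t in h)},
    generate_response=lambda turn, ev, p: f"(persona={p}) You said: {turn}",
)
print(agent.respond("I went hiking last weekend."))
print(agent.respond("Any plans for me this weekend?"))
```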
arXiv Detail & Related papers (2024-06-09T21:58:32Z)
- LLM-based Multi-Agent Reinforcement Learning: Current and Future Directions [8.55917897789612]
We focus on the cooperative tasks of multiple agents with a common goal and communication among them.
We also consider human-in/on-the-loop scenarios enabled by the language component in the framework.
arXiv Detail & Related papers (2024-05-17T22:10:23Z)
- HR-MultiWOZ: A Task Oriented Dialogue (TOD) Dataset for HR LLM Agent [6.764665650605542]
We introduce HR-MultiWOZ, a fully-labeled dataset of 550 conversations spanning 10 HR domains.
It is the first labeled open-sourced conversation dataset in the HR domain for NLP research.
It provides a detailed recipe for the data generation procedure along with data analysis and human evaluations.
arXiv Detail & Related papers (2024-02-01T21:10:44Z)
- EHRAgent: Code Empowers Large Language Models for Few-shot Complex Tabular Reasoning on Electronic Health Records [47.5632532642591]
Large language models (LLMs) have demonstrated exceptional capabilities in planning and tool utilization.
We propose EHRAgent, an LLM agent empowered with a code interface, to autonomously generate and execute code for multi-tabular reasoning.
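A hedged sketch of the generate-then-execute pattern on toy tables; the code-generation step is stubbed with a fixed pandas snippet where a real agent would prompt an LLM, and the table names and question are invented for illustration.

```python
import pandas as pd

# Toy EHR-style tables standing in for a real multi-table record store.
patients = pd.DataFrame({"patient_id": [1, 2], "age": [71, 45]})
labs = pd.DataFrame({"patient_id": [1, 1, 2], "test": ["hba1c", "ldl", "hba1c"],
                     "value": [7.9, 130, 5.4]})

def generate_code(question: str) -> str:
    """Stub for the LLM code-generation step; returns a fixed pandas snippet here."""
    return (
        "merged = labs.merge(patients, on='patient_id')\n"
        "result = merged[(merged.test == 'hba1c') & (merged.age > 60)]['value'].mean()"
    )

def execute(code: str) -> float:
    """Run generated code in a restricted namespace and return `result`."""
    scope = {"patients": patients, "labs": labs, "pd": pd}
    exec(code, scope)   # a real agent would sandbox this and retry on errors
    return scope["result"]

question = "Average HbA1c among patients older than 60?"
print(execute(generate_code(question)))  # -> 7.9
```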
arXiv Detail & Related papers (2024-01-13T18:09:05Z)
- Bootstrapping LLM-based Task-Oriented Dialogue Agents via Self-Talk [11.706292228586332]
Large language models (LLMs) are powerful dialogue agents, but specializing them towards fulfilling a specific function can be challenging.
We propose a more effective method for data collection, in which LLMs engage in conversations while playing various roles.
This approach generates training data via LLM "self-talk" that can be refined and used for supervised fine-tuning.
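A hedged sketch of self-talk data generation with one stub function playing both roles and a toy quality filter; the role prompts, utterances, and filter are illustrative stand-ins, not the paper's setup.

```python
import random

def agent_turn(role: str, history: list) -> str:
    """Stub for an LLM call playing one role; a real system would prompt a model."""
    if role == "client":
        return random.choice(["I need to book a flight to Berlin.",
                              "Next Friday, economy please.",
                              "Yes, that works. Thanks!"])
    return "Got it. " + ("Which date and class?" if len(history) < 2 else "Booking confirmed.")

def self_talk_dialogue(n_turns: int = 6) -> list:
    """Let the two roles converse with each other to synthesize one dialogue."""
    history, roles = [], ["client", "assistant"]
    for i in range(n_turns):
        utterance = agent_turn(roles[i % 2], history)
        history.append((roles[i % 2], utterance))
    return history

def keep(dialogue) -> bool:
    """Toy quality filter: keep only dialogues where the client closes politely."""
    return any("thanks" in u.lower() for _, u in dialogue)

corpus = [d for d in (self_talk_dialogue() for _ in range(20)) if keep(d)]
print(f"{len(corpus)} dialogues kept for supervised fine-tuning")
```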
arXiv Detail & Related papers (2024-01-10T09:49:10Z)
- Synergistic Interplay between Search and Large Language Models for Information Retrieval [141.18083677333848]
InteR allows retrieval models (RMs) to expand knowledge in queries using LLM-generated knowledge collections.
InteR achieves overall superior zero-shot retrieval performance compared to state-of-the-art methods.
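A hedged sketch of the query-expansion idea: an LLM-generated passage (stubbed here) is appended to the query before a simple lexical retrieval step. The documents and the overlap scorer are toy stand-ins, not InteR's actual retrieval model.

```python
from collections import Counter

DOCS = {
    "d1": "HR onboarding covers access requests and badge provisioning for new hires",
    "d2": "Quarterly finance review of medical claim reimbursements",
    "d3": "Time-off submissions are approved by the direct manager in the HR portal",
}

def llm_expand(query: str) -> str:
    """Stub for an LLM-generated knowledge passage about the query."""
    return "requesting vacation or paid leave is handled as a time-off submission"

def score(query_terms: Counter, doc: str) -> int:
    """Very small lexical-overlap score standing in for a real retrieval model."""
    return sum((Counter(doc.lower().split()) & query_terms).values())

def search(query: str, expand: bool = True):
    text = query + (" " + llm_expand(query) if expand else "")
    terms = Counter(text.lower().split())
    return max(DOCS, key=lambda d: score(terms, DOCS[d]))

print(search("how do I request vacation", expand=False))
print(search("how do I request vacation", expand=True))   # expansion steers retrieval to d3
```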
arXiv Detail & Related papers (2023-05-12T11:58:15Z)
- Large Language Models are Strong Zero-Shot Retriever [89.16756291653371]
We propose a simple method that applies a large language model (LLM) to large-scale retrieval in zero-shot scenarios.
Our method, the Language language model as Retriever (LameR), is built upon no other neural models but an LLM.
arXiv Detail & Related papers (2023-04-27T14:45:55Z)
- Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback [127.75419038610455]
Large language models (LLMs) are able to generate human-like, fluent responses for many downstream tasks.
This paper proposes LLM-Augmenter, a system that augments a black-box LLM with a set of plug-and-play modules.
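A hedged sketch of the augmenting loop implied by the title and summary: retrieve external evidence, query a fixed black-box generator, and use an automated utility check as feedback to revise the prompt. All module implementations below are stubs, not the paper's.

```python
def retrieve_evidence(query: str) -> str:
    """Stub knowledge module; a real system would query an external store."""
    return "Policy doc: employees accrue 1.5 vacation days per month."

def black_box_llm(prompt: str) -> str:
    """Stub for a fixed, non-fine-tunable LLM."""
    return ("You accrue 1.5 vacation days per month."
            if "Policy doc" in prompt else "You accrue 2 days per month.")

def utility(response: str, evidence: str) -> bool:
    """Toy automated-feedback check: is the response grounded in the evidence?"""
    return "1.5" in response

def answer(query: str, max_rounds: int = 3) -> str:
    evidence, prompt = retrieve_evidence(query), query
    for _ in range(max_rounds):
        response = black_box_llm(prompt)
        if utility(response, evidence):
            return response
        # Feedback: revise the prompt by injecting the retrieved evidence.
        prompt = f"{evidence}\nQuestion: {query}"
    return response

print(answer("How many vacation days do I accrue each month?"))
```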
arXiv Detail & Related papers (2023-02-24T18:48:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.