MINDECHO: Role-Playing Language Agents for Key Opinion Leaders
- URL: http://arxiv.org/abs/2407.05305v2
- Date: Wed, 9 Oct 2024 07:19:34 GMT
- Title: MINDECHO: Role-Playing Language Agents for Key Opinion Leaders
- Authors: Rui Xu, Dakuan Lu, Xiaoyu Tan, Xintao Wang, Siyu Yuan, Jiangjie Chen, Wei Chu, Yinghui Xu
- Abstract summary: This paper introduces MINDECHO, a framework for the development and evaluation of role-playing language agents for Key Opinion Leaders (KOLs).
MINDECHO collects KOL data from Internet video transcripts across various professional fields and synthesizes their conversations using GPT-4.
Our evaluation covers both general dimensions (i.e., knowledge and tone) and fan-centric dimensions for KOLs.
- Score: 50.43050502970816
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models (LLMs) have demonstrated impressive performance in various applications, among which role-playing language agents (RPLAs) have engaged a broad user base. Now, there is a growing demand for RPLAs that represent Key Opinion Leaders (KOLs), i.e., Internet celebrities who shape the trends and opinions in their domains. However, research in this line remains underexplored. In this paper, we hence introduce MINDECHO, a comprehensive framework for the development and evaluation of KOL RPLAs. MINDECHO collects KOL data from Internet video transcripts in various professional fields, and synthesizes their conversations leveraging GPT-4. Then, the conversations and the transcripts are used for individualized model training and inference-time retrieval, respectively. Our evaluation covers both general dimensions (i.e., knowledge and tones) and fan-centric dimensions for KOLs. Extensive experiments validate the effectiveness of MINDECHO in developing and evaluating KOL RPLAs.
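The abstract's pipeline — retrieve relevant transcript passages at inference time and prepend them to the persona prompt — can be sketched as below. This is a minimal illustration, not the paper's implementation: the bag-of-words cosine retriever and the prompt template are assumptions for demonstration; MINDECHO's actual retriever and prompting details are not specified in this listing.

```python
# Minimal sketch of inference-time retrieval over KOL transcript chunks.
# Retriever: bag-of-words cosine similarity (an assumption; the paper's
# retriever may differ).
import math
from collections import Counter

def bow_vector(text):
    """Lowercased bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two Counter term-count vectors."""
    dot = sum(count * b[term] for term, count in a.items() if term in b)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query, transcript_chunks, k=1):
    """Return the k transcript chunks most similar to the query."""
    q = bow_vector(query)
    ranked = sorted(transcript_chunks,
                    key=lambda c: cosine(q, bow_vector(c)),
                    reverse=True)
    return ranked[:k]

def build_prompt(persona, query, transcript_chunks):
    """Prepend retrieved transcript context to the user query."""
    context = "\n".join(retrieve(query, transcript_chunks, k=1))
    return (f"You are {persona}.\n"
            f"Relevant transcript:\n{context}\n"
            f"User: {query}")
```

For example, given chunks from a tech KOL's videos, a question about camera lenses would pull the lens-review chunk into the prompt, so the trained model can answer in character with grounded detail.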
Related papers
- ERABAL: Enhancing Role-Playing Agents through Boundary-Aware Learning [17.5855800570993]
Role-playing is an emerging application in the field of Human-Computer Interaction (HCI).
Despite significant progress, role-playing agents (RPLAs) still struggle with maintaining role-consistency across conversations.
We present ERABAL, a framework aimed at enhancing RPLAs' role-playing capabilities through boundary-aware learning.
arXiv Detail & Related papers (2024-09-23T05:12:13Z)
- Capturing Minds, Not Just Words: Enhancing Role-Playing Language Models with Personality-Indicative Data [58.92110996840019]
We propose to enhance role-playing language models (RPLMs) via personality-indicative data.
Specifically, we leverage questions from psychological scales and distill advanced RPAs to generate dialogues that grasp the minds of characters.
Experimental results validate that RPLMs trained with our dataset exhibit advanced role-playing capabilities for both general and personality-related evaluations.
arXiv Detail & Related papers (2024-06-27T06:24:00Z)
- Peer Review as A Multi-Turn and Long-Context Dialogue with Role-Based Interactions [62.0123588983514]
Large Language Models (LLMs) have demonstrated wide-ranging applications across various fields.
We reformulate the peer-review process as a multi-turn, long-context dialogue, incorporating distinct roles for authors, reviewers, and decision makers.
We construct a comprehensive dataset containing 26,841 papers with 92,017 reviews collected from multiple sources.
arXiv Detail & Related papers (2024-06-09T08:24:17Z)
- Facilitating Multi-Role and Multi-Behavior Collaboration of Large Language Models for Online Job Seeking and Recruiting [51.54907796704785]
Existing methods rely on modeling the latent semantics of resumes and job descriptions and learning a matching function between them.
Inspired by the powerful role-playing capabilities of Large Language Models (LLMs), we propose to introduce a mock interview process between LLM-played interviewers and candidates.
We propose MockLLM, a novel applicable framework that divides the person-job matching process into two modules: mock interview generation and two-sided evaluation in handshake protocol.
arXiv Detail & Related papers (2024-05-28T12:23:16Z)
- From Persona to Personalization: A Survey on Role-Playing Language Agents [52.783043059715546]
Recent advancements in large language models (LLMs) have boosted the rise of Role-Playing Language Agents (RPLAs).
RPLAs achieve a remarkable sense of human likeness and vivid role-playing performance.
They have catalyzed numerous AI applications, such as emotional companions, interactive video games, personalized assistants and copilots.
arXiv Detail & Related papers (2024-04-28T15:56:41Z)
- RoleEval: A Bilingual Role Evaluation Benchmark for Large Language Models [44.105939096171454]
This paper introduces RoleEval, a benchmark designed to assess the memorization, utilization, and reasoning capabilities of role knowledge.
RoleEval comprises RoleEval-Global and RoleEval-Chinese, with 6,000 Chinese-English parallel multiple-choice questions.
arXiv Detail & Related papers (2023-12-26T17:40:55Z)
- OCRBench: On the Hidden Mystery of OCR in Large Multimodal Models [122.27878464009181]
We conducted a comprehensive evaluation of Large Multimodal Models, such as GPT-4V and Gemini, on various text-related visual tasks.
OCRBench contains 29 datasets, making it the most comprehensive OCR evaluation benchmark available.
arXiv Detail & Related papers (2023-05-13T11:28:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.