How to Enable Effective Cooperation Between Humans and NLP Models: A Survey of Principles, Formalizations, and Beyond
- URL: http://arxiv.org/abs/2501.05714v1
- Date: Fri, 10 Jan 2025 05:15:14 GMT
- Title: How to Enable Effective Cooperation Between Humans and NLP Models: A Survey of Principles, Formalizations, and Beyond
- Authors: Chen Huang, Yang Deng, Wenqiang Lei, Jiancheng Lv, Tat-Seng Chua, Jimmy Xiangji Huang
- Abstract summary: We present a thorough review of human-model cooperation, exploring its principles, formalizations, and open challenges.
We introduce a new taxonomy that provides a unified perspective to summarize existing approaches.
Also, we discuss potential frontier areas and their corresponding challenges.
- Score: 73.5546464126465
- Abstract: With the advancement of large language models (LLMs), intelligent models have evolved from mere tools to autonomous agents with their own goals and strategies for cooperating with humans. This evolution has birthed a novel paradigm in NLP, i.e., human-model cooperation, that has yielded remarkable progress in numerous NLP tasks in recent years. In this paper, we take the first step to present a thorough review of human-model cooperation, exploring its principles, formalizations, and open challenges. In particular, we introduce a new taxonomy that provides a unified perspective to summarize existing approaches. Also, we discuss potential frontier areas and their corresponding challenges. We regard our work as an entry point, paving the way for more breakthrough research in this regard.
Related papers
- Human-Centric Foundation Models: Perception, Generation and Agentic Modeling [79.97999901785772]
Human-centric Foundation Models unify diverse human-centric tasks into a single framework.
We present a comprehensive overview of HcFMs by proposing a taxonomy that categorizes current approaches into four groups.
This survey aims to serve as a roadmap for researchers and practitioners working towards more robust, versatile, and intelligent digital human and embodiment modeling.
arXiv Detail & Related papers (2025-02-12T16:38:40Z)
- Seeing Eye to AI: Human Alignment via Gaze-Based Response Rewards for Large Language Models [46.09562860220433]
We introduce GazeReward, a novel framework that integrates implicit feedback -- specifically eye-tracking (ET) data -- into the Reward Model (RM).
Our approach significantly improves the accuracy of the RM on established human preference datasets.
arXiv Detail & Related papers (2024-10-02T13:24:56Z)
- PersLLM: A Personified Training Approach for Large Language Models [66.16513246245401]
We propose PersLLM, integrating psychology-grounded principles of personality: social practice, consistency, and dynamic development.
We incorporate personality traits directly into the model parameters, enhancing the model's resistance to induction, promoting consistency, and supporting the dynamic evolution of personality.
arXiv Detail & Related papers (2024-07-17T08:13:22Z)
- Towards Human-AI Mutual Learning: A New Research Paradigm [9.182022050832108]
This paper describes a new research paradigm for studying human-AI collaboration, named "human-AI mutual learning".
We describe relevant methodologies, motivations, domain examples, benefits, challenges, and future research agenda under this paradigm.
arXiv Detail & Related papers (2024-05-07T21:59:57Z)
- Large Language Model-based Human-Agent Collaboration for Complex Task Solving [94.3914058341565]
We introduce the problem of Large Language Models (LLMs)-based human-agent collaboration for complex task-solving.
We propose a Reinforcement Learning-based Human-Agent Collaboration method, ReHAC.
This approach includes a policy model designed to determine the most opportune stages for human intervention within the task-solving process.
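The ReHAC summary above describes a policy model that decides at which stages of task-solving to hand control to a human. As a loose, hypothetical sketch of that idea (the names, the confidence-based rule, and the data structures below are illustrative assumptions, not the paper's actual method), the decision can be framed as a per-step routing choice:

```python
# Hypothetical sketch: route each task-solving step to the agent or to a
# human, mimicking the kind of intervention policy ReHAC describes.
# A learned RL policy would replace the hand-written threshold rule here.
from dataclasses import dataclass


@dataclass
class Step:
    description: str
    agent_confidence: float  # assumed to lie in [0, 1]


def intervention_policy(step: Step, threshold: float = 0.5) -> str:
    """Return 'human' when agent confidence falls below the threshold,
    otherwise let the agent act on its own."""
    return "human" if step.agent_confidence < threshold else "agent"


def solve(steps: list[Step]) -> list[tuple[str, str]]:
    """Assign every step of the plan to the agent or a human."""
    return [(s.description, intervention_policy(s)) for s in steps]


plan = [
    Step("retrieve background facts", 0.9),
    Step("resolve an ambiguous user goal", 0.3),
    Step("compose the final answer", 0.8),
]
assignments = solve(plan)
```

In this toy version only the low-confidence step is escalated to the human; ReHAC instead learns when intervention pays off via reinforcement learning.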
arXiv Detail & Related papers (2024-02-20T11:03:36Z)
- A Survey of Reasoning with Foundation Models [235.7288855108172]
Reasoning plays a pivotal role in various real-world settings such as negotiation, medical diagnosis, and criminal investigation.
We introduce seminal foundation models proposed or adaptable for reasoning.
We then delve into the potential future directions behind the emergence of reasoning abilities within foundation models.
arXiv Detail & Related papers (2023-12-17T15:16:13Z)
- Human-AI Symbiosis: A Survey of Current Approaches [18.252264744963394]
We highlight various aspects of works on the human-AI team such as the flow of complementing, task horizon, model representation, knowledge level, and teaming goal.
We hope that the survey will draw a clearer connection between works on human-AI teams and provide guidance to new researchers in this area.
arXiv Detail & Related papers (2021-03-18T02:39:28Z)
- Putting Humans in the Natural Language Processing Loop: A Survey [13.53277201606357]
How can we design Natural Language Processing (NLP) systems that learn from human feedback?
There is a growing body of research on Human-in-the-loop (HITL) NLP frameworks that continuously integrate human feedback to improve the model itself.
We present a survey of HITL NLP work from both Machine Learning (ML) and Human-Computer Interaction (HCI) communities.
arXiv Detail & Related papers (2021-03-06T06:26:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.