A Survey on Human-AI Collaboration with Large Foundation Models
- URL: http://arxiv.org/abs/2403.04931v3
- Date: Tue, 02 Sep 2025 19:24:46 GMT
- Title: A Survey on Human-AI Collaboration with Large Foundation Models
- Authors: Vanshika Vats, Marzia Binta Nizam, Minghao Liu, Ziyuan Wang, Richard Ho, Mohnish Sai Prasad, Vincent Titterton, Sai Venkat Malreddy, Riya Aggarwal, Yanwen Xu, Lei Ding, Jay Mehta, Nathan Grinnell, Li Liu, Sijia Zhong, Devanathan Nallur Gandamani, Xinyi Tang, Rohan Ghosalkar, Celeste Shen, Rachel Shen, Nafisa Hussain, Kesav Ravichandran, James Davis
- Abstract summary: Human-AI (HAI) Collaboration has become pivotal for advancing problem-solving and decision-making processes. The advent of Large Foundation Models (LFMs) has greatly expanded its potential, offering unprecedented capabilities. This paper reviews the crucial integration of LFMs with HAI, highlighting both opportunities and risks.
- Score: 11.837685062760132
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As the capabilities of artificial intelligence (AI) continue to expand rapidly, Human-AI (HAI) Collaboration, combining human intellect and AI systems, has become pivotal for advancing problem-solving and decision-making processes. The advent of Large Foundation Models (LFMs) has greatly expanded its potential, offering unprecedented capabilities by leveraging vast amounts of data to understand and predict complex patterns. At the same time, realizing this potential responsibly requires addressing persistent challenges related to safety, fairness, and control. This paper reviews the crucial integration of LFMs with HAI, highlighting both opportunities and risks. We structure our analysis around four areas: human-guided model development, collaborative design principles, ethical and governance frameworks, and applications in high-stakes domains. Our review shows that successful HAI systems are not the automatic result of stronger models but the product of careful, human-centered design. By identifying key open challenges, this survey aims to give insight into current and future research that turns the raw power of LFMs into partnerships that are reliable, trustworthy, and beneficial to society.
Related papers
- Human-AI Use Patterns for Decision-Making in Disaster Scenarios: A Systematic Review [0.0]
We identify four major categories: Human-AI Decision Support Systems, Task and Resource Coordination, Trust and Transparency, and Simulation and Training. Within these, we analyze sub-patterns such as cognitive-augmented intelligence, multi-agent coordination, explainable AI, and virtual training environments. Our review highlights how AI systems may enhance situational awareness, improve response efficiency, and support complex decision-making, while also surfacing critical limitations in scalability, interpretability, and system interoperability.
arXiv Detail & Related papers (2025-09-15T15:18:49Z) - Graphs Meet AI Agents: Taxonomy, Progress, and Future Opportunities [117.49715661395294]
Data structurization can play a promising role by transforming intricate and disorganized data into well-structured forms. This survey presents a first systematic review of how graphs can empower AI agents.
arXiv Detail & Related papers (2025-06-22T12:59:12Z) - Anomaly Detection and Generation with Diffusion Models: A Survey [51.61574868316922]
Anomaly detection (AD) plays a pivotal role across diverse domains, including cybersecurity, finance, healthcare, and industrial manufacturing. Recent advancements in deep learning, specifically diffusion models (DMs), have sparked significant interest. This survey aims to guide researchers and practitioners in leveraging DMs for innovative AD solutions across diverse applications.
arXiv Detail & Related papers (2025-06-11T03:29:18Z) - When Models Know More Than They Can Explain: Quantifying Knowledge Transfer in Human-AI Collaboration [79.69935257008467]
We introduce Knowledge Integration and Transfer Evaluation (KITE), a conceptual and experimental framework for measuring human-AI knowledge transfer. We conduct the first large-scale human study (N=118) explicitly designed to measure it. In our two-phase setup, humans first ideate with an AI on problem-solving strategies, then independently implement solutions, isolating model explanations' influence on human understanding.
arXiv Detail & Related papers (2025-06-05T20:48:16Z) - Toward a Public and Secure Generative AI: A Comparative Analysis of Open and Closed LLMs [0.0]
This study aims to critically evaluate and compare the characteristics, opportunities, and challenges of open and closed generative AI models. The proposed framework outlines openness, public governance, and security as essential pillars for shaping the future of trustworthy and inclusive Gen AI.
arXiv Detail & Related papers (2025-05-15T15:21:09Z) - LLM-Based Human-Agent Collaboration and Interaction Systems: A Survey [34.275920463375684]
Large language models (LLMs) have sparked growing interest in building fully autonomous agents. LLM-HAS incorporate human-provided information, feedback, or control into the agent system to enhance system performance, reliability, and safety. This paper provides the first comprehensive and structured survey of LLM-HAS.
arXiv Detail & Related papers (2025-05-01T08:29:26Z) - An Overview of Large Language Models for Statisticians [109.38601458831545]
Large Language Models (LLMs) have emerged as transformative tools in artificial intelligence (AI). This paper explores potential areas where statisticians can make important contributions to the development of LLMs. We focus on issues such as uncertainty quantification, interpretability, fairness, privacy, watermarking, and model adaptation.
arXiv Detail & Related papers (2025-02-25T03:40:36Z) - The Value of Information in Human-AI Decision-making [23.353778024330165]
We contribute a decision-theoretic framework for characterizing the value of information. We present a novel explanation technique that adapts SHAP explanations to highlight human-complementing information. We show that our measure of complementary information can be used to identify which AI model will best complement human decisions.
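The idea of attributing a model's value to individual features, and then isolating the features a human decision-maker does not already attend to, can be sketched with exact Shapley values. This is a minimal illustration, not the paper's method: the payoff numbers, feature names, and the "human-known features" set are all hypothetical, and real SHAP tooling approximates these quantities rather than enumerating subsets.

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley values for a set-valued payoff function (small n only)."""
    n = len(features)
    phi = {f: 0.0 for f in features}
    for f in features:
        others = [g for g in features if g != f]
        for k in range(n):
            for subset in combinations(others, k):
                s = set(subset)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[f] += weight * (value_fn(s | {f}) - value_fn(s))
    return phi

# Toy additive payoff: accuracy gain attributed to each feature
# (hypothetical numbers for illustration only).
CONTRIB = {"age": 0.05, "income": 0.10, "zip": 0.02}
value = lambda s: sum(CONTRIB[f] for f in s)

phi = shapley_values(list(CONTRIB), value)

# "Human-complementing" view: drop attributions for features the human
# already uses, so what remains highlights information the AI adds.
human_known = {"age"}
complementary = {f: v for f, v in phi.items() if f not in human_known}
```

For an additive payoff like this, each Shapley value reduces to the feature's own contribution; the complementary set then ranks only the information the AI brings beyond the human.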
arXiv Detail & Related papers (2025-02-10T04:50:42Z) - AI-Driven Human-Autonomy Teaming in Tactical Operations: Proposed Framework, Challenges, and Future Directions [10.16399860867284]
Artificial Intelligence (AI) techniques are transforming tactical operations by augmenting human decision-making capabilities.
This paper explores AI-driven Human-Autonomy Teaming (HAT) as a transformative approach.
We propose a comprehensive framework that addresses the key components of AI-driven HAT.
arXiv Detail & Related papers (2024-10-28T15:05:16Z) - Data Analysis in the Era of Generative AI [56.44807642944589]
This paper explores the potential of AI-powered tools to reshape data analysis, focusing on design considerations and challenges.
We explore how the emergence of large language and multimodal models offers new opportunities to enhance various stages of data analysis workflow.
We then examine human-centered design principles that facilitate intuitive interactions, build user trust, and streamline the AI-assisted analysis workflow across multiple apps.
arXiv Detail & Related papers (2024-09-27T06:31:03Z) - Multimodal Fusion with LLMs for Engagement Prediction in Natural Conversation [70.52558242336988]
We focus on predicting engagement in dyadic interactions by scrutinizing verbal and non-verbal cues, aiming to detect signs of disinterest or confusion.
In this work, we collect a dataset featuring 34 participants engaged in casual dyadic conversations, each providing self-reported engagement ratings at the end of each conversation.
We introduce a novel fusion strategy using Large Language Models (LLMs) to integrate multiple behavior modalities into a "multimodal transcript".
arXiv Detail & Related papers (2024-09-13T18:28:12Z) - Mutual Theory of Mind in Human-AI Collaboration: An Empirical Study with LLM-driven AI Agents in a Real-time Shared Workspace Task [56.92961847155029]
Theory of Mind (ToM) significantly impacts human collaboration and communication as a crucial capability to understand others.
Mutual Theory of Mind (MToM) arises when AI agents with ToM capability collaborate with humans.
We find that the agent's ToM capability does not significantly impact team performance but enhances human understanding of the agent.
arXiv Detail & Related papers (2024-09-13T13:19:48Z) - Explainable Interface for Human-Autonomy Teaming: A Survey [12.26178592621411]
This paper conducts a thoughtful study on the underexplored domain of Explainable Interface (EI) in HAT systems.
We explore the design, development, and evaluation of EI within XAI-enhanced HAT systems.
We contribute to a novel framework for EI, addressing the unique challenges in HAT.
arXiv Detail & Related papers (2024-05-04T06:35:38Z) - On the Challenges and Opportunities in Generative AI [155.030542942979]
We argue that current large-scale generative AI models exhibit several fundamental shortcomings that hinder their widespread adoption across domains. We aim to provide researchers with insights for exploring fruitful research directions, thus fostering the development of more robust and accessible generative AI solutions.
arXiv Detail & Related papers (2024-02-28T15:19:33Z) - Large Language Model-based Human-Agent Collaboration for Complex Task Solving [94.3914058341565]
We introduce the problem of Large Language Models (LLMs)-based human-agent collaboration for complex task-solving.
We propose a Reinforcement Learning-based Human-Agent Collaboration method, ReHAC.
This approach includes a policy model designed to determine the most opportune stages for human intervention within the task-solving process.
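A policy that learns when to hand a task-solving step to the human can be sketched as a tiny REINFORCE-style bandit. This is a hypothetical sketch, not the ReHAC implementation: the scalar "difficulty" feature, the reward scheme, and the logistic policy are all illustrative assumptions.

```python
import math
import random

random.seed(0)

class InterventionPolicy:
    """Logistic policy deciding whether a step goes to the human (action=1)
    or the agent (action=0), trained with a REINFORCE-style update.
    Hypothetical sketch; not the ReHAC method itself."""

    def __init__(self, lr=0.5):
        self.w, self.b, self.lr = 0.0, 0.0, lr

    def p_human(self, difficulty):
        return 1.0 / (1.0 + math.exp(-(self.w * difficulty + self.b)))

    def act(self, difficulty):
        return 1 if random.random() < self.p_human(difficulty) else 0

    def update(self, difficulty, action, reward):
        # Policy-gradient step: grad log pi = (action - p) * [difficulty, 1]
        g = action - self.p_human(difficulty)
        self.w += self.lr * reward * g * difficulty
        self.b += self.lr * reward * g

# Toy environment: human intervention pays off only on hard steps
# (difficulty > 0) and wastes effort on easy ones.
policy = InterventionPolicy()
for _ in range(2000):
    d = random.choice([-1.0, 1.0])          # easy vs. hard step
    a = policy.act(d)
    r = 1.0 if (a == 1) == (d > 0) else -1.0
    policy.update(d, a, r)
```

After training, the policy assigns high intervention probability to hard steps and low probability to easy ones, mirroring the idea of learning the most opportune stages for human intervention.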
arXiv Detail & Related papers (2024-02-20T11:03:36Z) - The Essential Role of Causality in Foundation World Models for Embodied AI [102.75402420915965]
Embodied AI agents will require the ability to perform new tasks in many different real-world environments.
Current foundation models fail to accurately model physical interactions and are therefore insufficient for Embodied AI.
The study of causality lends itself to the construction of veridical world models.
arXiv Detail & Related papers (2024-02-06T17:15:33Z) - Position Paper: Assessing Robustness, Privacy, and Fairness in Federated Learning Integrated with Foundation Models [13.276195878267922]
Integration of Foundation Models (FMs) into Federated Learning (FL) introduces novel challenges for robustness, privacy, and fairness. We analyze the trade-offs involved, uncover the threats and issues introduced by this integration, and propose a set of criteria and strategies for navigating these challenges.
arXiv Detail & Related papers (2024-02-02T19:26:00Z) - Artificial Intelligence for Operations Research: Revolutionizing the Operations Research Process [15.471884798655063]
The rapid advancement of artificial intelligence (AI) techniques has opened up new opportunities to revolutionize various fields, including operations research (OR).
This survey paper explores the integration of AI within the OR process (AI4OR) to enhance its effectiveness and efficiency across multiple stages.
The synergy between AI and OR is poised to drive significant advancements and novel solutions in a multitude of domains.
arXiv Detail & Related papers (2024-01-06T15:55:14Z) - Human-AI Collaboration in Thematic Analysis using ChatGPT: A User Study and Design Recommendations [0.0]
Generative artificial intelligence (GenAI) offers promising potential for advancing human-AI collaboration in qualitative research.
This work delves into researchers' perceptions of their collaboration with GenAI, specifically ChatGPT.
arXiv Detail & Related papers (2023-11-07T13:54:56Z) - Confounding-Robust Policy Improvement with Human-AI Teams [9.823906892919746]
We propose a novel solution to address unobserved confounding in human-AI collaboration by employing the marginal sensitivity model (MSM).
Our approach combines domain expertise with AI-driven statistical modeling to account for potential confounders that may otherwise remain hidden.
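The marginal sensitivity model bounds how far an unobserved confounder can shift the true treatment propensity away from the nominal one, which in turn bounds the inverse-propensity weights used in policy evaluation. A minimal sketch of that bounding step, with illustrative inputs and not the paper's full estimator:

```python
def msm_weight_bounds(e, lam):
    """Bounds on the true inverse-propensity weight 1/pi under the marginal
    sensitivity model: the true propensity odds pi/(1-pi) lie within a
    factor lam >= 1 of the nominal odds e/(1-e). Illustrative sketch only."""
    odds = e / (1 - e)
    lo_odds, hi_odds = odds / lam, odds * lam
    pi_lo = lo_odds / (1 + lo_odds)   # smallest admissible propensity
    pi_hi = hi_odds / (1 + hi_odds)   # largest admissible propensity
    return 1 / pi_hi, 1 / pi_lo       # (lowest weight, highest weight)

# Nominal propensity 0.5 with lam = 2: the admissible weight 1/pi
# ranges from 1.5 to 3, widening the policy-value interval accordingly.
lo_w, hi_w = msm_weight_bounds(0.5, 2.0)
```

With lam = 1 the interval collapses to the nominal weight, recovering the unconfounded case; larger lam widens the interval and yields more conservative policy improvements.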
arXiv Detail & Related papers (2023-10-13T02:39:52Z) - DIME: Fine-grained Interpretations of Multimodal Models via Disentangled Local Explanations [119.1953397679783]
We focus on advancing the state-of-the-art in interpreting multimodal models.
Our proposed approach, DIME, enables accurate and fine-grained analysis of multimodal models.
arXiv Detail & Related papers (2022-03-03T20:52:47Z) - Artificial Intelligence for IT Operations (AIOPS) Workshop White Paper [50.25428141435537]
Artificial Intelligence for IT Operations (AIOps) is an emerging interdisciplinary field arising in the intersection between machine learning, big data, streaming analytics, and the management of IT operations.
The main aim of the AIOps workshop is to bring together researchers from both academia and industry to present their experiences, results, and work in progress in this field.
arXiv Detail & Related papers (2021-01-15T10:43:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.