A Taxonomy of Human and ML Strengths in Decision-Making to Investigate
Human-ML Complementarity
- URL: http://arxiv.org/abs/2204.10806v3
- Date: Sun, 5 Nov 2023 21:39:58 GMT
- Title: A Taxonomy of Human and ML Strengths in Decision-Making to Investigate
Human-ML Complementarity
- Authors: Charvi Rastogi, Liu Leqi, Kenneth Holstein, Hoda Heidari
- Abstract summary: We propose a taxonomy characterizing distinct ways in which human and ML-based decision-making can differ.
We provide a mathematical aggregation framework to examine enabling conditions for complementarity.
- Score: 30.23729174053152
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hybrid human-ML systems increasingly make consequential decisions in a wide
range of domains. These systems are often introduced with the expectation that
the combined human-ML system will achieve complementary performance, that is,
the combined decision-making system will be an improvement compared with either
decision-making agent in isolation. However, empirical results have been mixed,
and existing research rarely articulates the sources and mechanisms by which
complementary performance is expected to arise. Our goal in this work is to
provide conceptual tools to advance the way researchers reason and communicate
about human-ML complementarity. Drawing upon prior literature in human
psychology, machine learning, and human-computer interaction, we propose a
taxonomy characterizing distinct ways in which human and ML-based
decision-making can differ. In doing so, we conceptually map potential
mechanisms by which combining human and ML decision-making may yield
complementary performance, developing a language for the research community to
reason about design of hybrid systems in any decision-making domain. To
illustrate how our taxonomy can be used to investigate complementarity, we
provide a mathematical aggregation framework to examine enabling conditions for
complementarity. Through synthetic simulations, we demonstrate how this
framework can be used to explore specific aspects of our taxonomy and shed
light on the optimal mechanisms for combining human-ML judgments.
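To make the idea of an enabling condition for complementarity concrete, here is a minimal illustrative sketch (not the paper's actual aggregation framework; the noise levels and weighting are hypothetical): when human and ML judgments carry independent errors, a simple convex combination of the two can achieve lower error than either agent alone.

```python
import random

random.seed(0)

def simulate(n=10000, sigma_h=1.0, sigma_m=1.0, w=0.5):
    """Simulate aggregating human and ML judgments of a latent value.

    Each agent observes the truth plus independent Gaussian noise;
    the hybrid decision is a convex combination of the two judgments.
    Returns the mean squared error of the human, the ML model, and
    the combined decision-maker.
    """
    err_h = err_m = err_c = 0.0
    for _ in range(n):
        truth = random.gauss(0.0, 1.0)
        human = truth + random.gauss(0.0, sigma_h)   # human judgment
        model = truth + random.gauss(0.0, sigma_m)   # ML prediction
        combined = w * human + (1 - w) * model       # aggregated judgment
        err_h += (human - truth) ** 2
        err_m += (model - truth) ** 2
        err_c += (combined - truth) ** 2
    return err_h / n, err_m / n, err_c / n

mse_h, mse_m, mse_c = simulate()
print(f"human MSE={mse_h:.3f}, ML MSE={mse_m:.3f}, combined MSE={mse_c:.3f}")
```

With equal, independent noise and an equal weighting, the combined error variance is roughly half that of either agent; when the errors are correlated or one agent dominates, the benefit shrinks or vanishes, which is the kind of condition the framework is meant to characterize.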
Related papers
- Dehumanizing Machines: Mitigating Anthropomorphic Behaviors in Text Generation Systems [55.99010491370177]
How to intervene on such system outputs to mitigate anthropomorphic behaviors and their attendant harmful outcomes remains understudied.
We compile an inventory of interventions grounded both in prior literature and a crowdsourced study where participants edited system outputs to make them less human-like.
arXiv Detail & Related papers (2025-02-19T18:06:37Z)
- Word Synchronization Challenge: A Benchmark for Word Association Responses for LLMs [4.352318127577628]
This paper introduces the Word Synchronization Challenge, a novel benchmark to evaluate large language models (LLMs) in Human-Computer Interaction (HCI).
This benchmark uses a dynamic game-like framework to test LLMs ability to mimic human cognitive processes through word associations.
arXiv Detail & Related papers (2025-02-12T11:30:28Z)
- Emergence of human-like polarization among large language model agents [61.622596148368906]
We simulate a networked system involving thousands of large language model agents, discovering that their social interactions result in human-like polarization.
Similarities between humans and LLM agents raise concerns about their capacity to amplify societal polarization, but also hold the potential to serve as a valuable testbed for identifying plausible strategies to mitigate it.
arXiv Detail & Related papers (2025-01-09T11:45:05Z)
- PersLLM: A Personified Training Approach for Large Language Models [66.16513246245401]
We propose PersLLM, integrating psychology-grounded principles of personality: social practice, consistency, and dynamic development.
We incorporate personality traits directly into the model parameters, enhancing the model's resistance to induction, promoting consistency, and supporting the dynamic evolution of personality.
arXiv Detail & Related papers (2024-07-17T08:13:22Z)
- Predicting and Understanding Human Action Decisions: Insights from Large Language Models and Cognitive Instance-Based Learning [0.0]
Large Language Models (LLMs) have demonstrated their capabilities across various tasks.
This paper exploits the reasoning and generative capabilities of the LLMs to predict human behavior in two sequential decision-making tasks.
We compare the performance of LLMs with a cognitive instance-based learning model, which imitates human experiential decision-making.
arXiv Detail & Related papers (2024-07-12T14:13:06Z)
- Exploring the Potential of Human-LLM Synergy in Advancing Qualitative Analysis: A Case Study on Mental-Illness Stigma [6.593116883521213]
Large language models (LLMs) can perform qualitative coding within existing schemes, but their potential for collaborative human-LLM discovery is still underexplored.
We propose CHALET, a novel methodology that leverages the human-LLM collaboration paradigm to facilitate conceptualization and empower qualitative research.
arXiv Detail & Related papers (2024-05-09T13:27:22Z)
- Computational Experiments Meet Large Language Model Based Agents: A Survey and Perspective [16.08517740276261]
Computational experiments have emerged as a valuable method for studying complex systems.
However, accurately representing real social systems in Agent-based Modeling (ABM) is challenging due to the diverse and intricate characteristics of humans.
The integration of Large Language Models (LLMs) has been proposed, enabling agents to possess anthropomorphic abilities.
arXiv Detail & Related papers (2024-02-01T01:17:46Z)
- Confounding-Robust Policy Improvement with Human-AI Teams [9.823906892919746]
We propose a novel solution to address unobserved confounding in human-AI collaboration by employing the marginal sensitivity model (MSM).
Our approach combines domain expertise with AI-driven statistical modeling to account for potential confounders that may otherwise remain hidden.
arXiv Detail & Related papers (2023-10-13T02:39:52Z)
- Exploring Collaboration Mechanisms for LLM Agents: A Social Psychology View [60.80731090755224]
This paper probes the collaboration mechanisms among contemporary NLP systems by practical experiments with theoretical insights.
We fabricate four unique 'societies' comprised of LLM agents, where each agent is characterized by a specific 'trait' (easy-going or overconfident) and engages in collaboration with a distinct 'thinking pattern' (debate or reflection).
Our results further illustrate that LLM agents manifest human-like social behaviors, such as conformity and consensus reaching, mirroring social psychology theories.
arXiv Detail & Related papers (2023-10-03T15:05:52Z)
- Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z)
- Human-Algorithm Collaboration: Achieving Complementarity and Avoiding Unfairness [92.26039686430204]
We show that even in carefully-designed systems, complementary performance can be elusive.
First, we provide a theoretical framework for modeling simple human-algorithm systems.
Next, we use this model to prove conditions where complementarity is impossible.
arXiv Detail & Related papers (2022-02-17T18:44:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.