Causal Effects with Unobserved Unit Types in Interacting Human-AI Systems
- URL: http://arxiv.org/abs/2603.01339v1
- Date: Mon, 02 Mar 2026 00:31:48 GMT
- Title: Causal Effects with Unobserved Unit Types in Interacting Human-AI Systems
- Authors: William Overman, Sadegh Shirani, Mohsen Bayati
- Abstract summary: We study experiments on interacting populations of humans and AI agents, where both unit types and the interaction network remain unobserved. We assume a human-AI prior that gives each unit a probability of being human. We then model outcome dynamics through a causal message passing framework and analyze sample-mean outcomes across subpopulations.
- Score: 8.500597009666526
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study experiments on interacting populations of humans and AI agents, where both unit types and the interaction network remain unobserved. Although causal effects propagate throughout the system, the goal is to estimate effects on humans. Examples include online platforms where human users interact alongside AI-driven accounts. We assume a human-AI prior that gives each unit a probability of being human. While humans cannot be distinguished at the unit level, the prior allows us to compute the average human composition within large subpopulations. We then model outcome dynamics through a causal message passing (CMP) framework and analyze sample-mean outcomes across subpopulations. We show that by constructing subpopulations that vary in expected human composition and treatment exposure, one can consistently recover human-specific causal effects. Our results characterize when distributional knowledge of population composition (without observing unit types or the interaction network) is sufficient for identification. We validate the approach on a simulated human-AI platform driven by behaviorally differentiated LLM agents. Together, these results provide a theoretical and practical framework for experimentation in emerging human-AI systems.
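The identification idea in the abstract can be illustrated with a deliberately simplified sketch: if each subpopulation's sample-mean outcome is a mixture of an unobserved human mean and AI mean, weighted by that subpopulation's expected human composition (computable from the prior alone), then subpopulations with different compositions yield a solvable linear system. This is our toy illustration only; the paper's CMP framework additionally models treatment exposure and network dynamics, which are omitted here.

```python
import numpy as np

# Toy setup: each unit has a known prior probability of being human;
# its actual type is latent and never observed.
rng = np.random.default_rng(0)
n = 10_000
p_human = rng.uniform(0.2, 0.9, size=n)   # human-AI prior per unit
is_human = rng.random(n) < p_human        # latent type
mu_H, mu_A = 2.0, 0.5                     # ground-truth type-specific means
y = np.where(is_human, mu_H, mu_A) + rng.normal(0.0, 0.1, n)

# Build two subpopulations with different *expected* human composition,
# using only the prior (unit types stay hidden).
lo = p_human < 0.5
hi = ~lo

# Each subpopulation mean mixes mu_H and mu_A by its expected human share.
A = np.array([[p_human[lo].mean(), 1.0 - p_human[lo].mean()],
              [p_human[hi].mean(), 1.0 - p_human[hi].mean()]])
b = np.array([y[lo].mean(), y[hi].mean()])

# Solve the 2x2 system to recover the human-specific and AI-specific means.
mu_H_hat, mu_A_hat = np.linalg.solve(A, b)
print(mu_H_hat, mu_A_hat)  # approximately 2.0 and 0.5
```

The two subpopulations must genuinely differ in expected human composition, otherwise the mixing matrix is singular; this mirrors the paper's requirement that subpopulations vary in composition (and, in the full framework, in treatment exposure).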
Related papers
- Humanlike AI Design Increases Anthropomorphism but Yields Divergent Outcomes on Engagement and Trust Globally [5.379750053447755]
Over a billion users across the globe interact with AI systems engineered with increasing sophistication to mimic human traits. This shift has triggered urgent debate regarding anthropomorphism, the attribution of human characteristics to synthetic agents, and its potential to induce misplaced trust or emotional dependency. Prevailing safety frameworks continue to rely on theoretical assumptions derived from Western populations, overlooking the global diversity of AI users.
arXiv Detail & Related papers (2025-12-19T18:57:53Z) - HumanPCR: Probing MLLM Capabilities in Diverse Human-Centric Scenes [72.26829188852139]
HumanPCR is an evaluation suite for probing MLLMs' capabilities in human-related visual contexts. Its Human-P, Human-C, and Human-R tracks feature over 6,000 human-verified multiple-choice questions. Human-R offers a challenging, manually curated video reasoning test.
arXiv Detail & Related papers (2025-08-19T09:52:04Z) - Co-Creative Learning via Metropolis-Hastings Interaction between Humans and AI [6.712251433139411]
We propose co-creative learning, where humans and AI mutually integrate their partial perceptual information and knowledge to construct shared external representations. We empirically test this framework using a human-AI interaction model based on the Metropolis-Hastings naming game (MHNG). Results show that human-AI pairs with an MH-based agent significantly improved categorization accuracy through interaction, and that human acceptance behavior aligned closely with the MH-derived acceptance probability.
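The MH-derived acceptance probability mentioned above follows the standard Metropolis-Hastings ratio: the listener accepts the speaker's proposed name with probability min(1, p(proposed)/p(current)) under its own perceptual model. The function below is our minimal sketch of that acceptance step, not the paper's implementation; the probability values are assumed inputs.

```python
import random

def mh_accept(p_proposed: float, p_current: float, rng=random) -> bool:
    """Metropolis-Hastings acceptance step for a naming game.

    p_proposed: listener's probability of the speaker's proposed name
                given its own percept.
    p_current:  listener's probability of its current name.
    """
    if p_current == 0.0:
        # Anything improves on a zero-probability current name.
        return True
    # Accept with probability min(1, ratio); ratios >= 1 always accept.
    return rng.random() < min(1.0, p_proposed / p_current)
```

Because proposals that the listener itself finds more probable are always accepted, repeated exchanges drive the pair toward shared names, which is the mechanism the paper compares against observed human acceptance behavior.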
arXiv Detail & Related papers (2025-06-18T13:58:45Z) - Dehumanizing Machines: Mitigating Anthropomorphic Behaviors in Text Generation Systems [55.99010491370177]
How to intervene on such system outputs to mitigate anthropomorphic behaviors and their attendant harmful outcomes remains understudied. We compile an inventory of interventions grounded both in prior literature and in a crowdsourcing study where participants edited system outputs to make them less human-like. We also develop a conceptual framework to characterize the landscape of possible interventions, articulate distinctions between different types of interventions, and provide a theoretical basis for evaluating their effectiveness.
arXiv Detail & Related papers (2025-02-19T18:06:37Z) - Measuring Human Contribution in AI-Assisted Content Generation [66.06040950325969]
This study raises the research question of measuring human contribution in AI-assisted content generation. By calculating the mutual information between human input and AI-assisted output, relative to the self-information of the AI-assisted output, we quantify the proportional information contribution of humans in content generation.
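The measure described reduces to I(X; Y) / H(Y), mutual information between human input X and AI-assisted output Y normalized by the output's entropy. The sketch below illustrates this on toy discrete samples with empirical plug-in estimates; the paper's actual estimators for text are more involved, and the sample strings here are made up for illustration.

```python
import math
from collections import Counter

def entropy(samples) -> float:
    """Empirical Shannon entropy (bits) of a sequence of symbols."""
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in Counter(samples).values())

def human_contribution(xs, ys) -> float:
    """Proportional human contribution: I(X; Y) / H(Y)."""
    joint = list(zip(xs, ys))
    mi = entropy(xs) + entropy(ys) - entropy(joint)  # I(X; Y)
    return mi / entropy(ys)

# Output fully determined by input -> contribution 1;
# output independent of input -> contribution 0.
print(human_contribution("abab", "cdcd"))  # 1.0
print(human_contribution("aabb", "cdcd"))  # 0.0
```

Normalizing by H(Y) rather than H(X) makes the measure answer "what fraction of the output's information came from the human," which is the proportional framing the abstract describes.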
arXiv Detail & Related papers (2024-08-27T05:56:04Z) - Human-AI Coevolution [48.74579595505374]
Coevolution AI is a process in which humans and AI algorithms continuously influence each other.
This paper introduces Coevolution AI as the cornerstone of a new field of study at the intersection of AI and complexity science.
arXiv Detail & Related papers (2023-06-23T18:10:54Z) - A Cognitive Framework for Delegation Between Error-Prone AI and Human Agents [0.0]
We investigate the use of cognitively inspired models of behavior to predict the behavior of both human and AI agents.
The predicted behavior is used to delegate control between humans and AI agents through the use of an intermediary entity.
arXiv Detail & Related papers (2022-04-06T15:15:21Z) - Understanding the Effect of Out-of-distribution Examples and Interactive Explanations on Human-AI Decision Making [19.157591744997355]
We argue that the typical experimental setup limits the potential of human-AI teams.
We develop novel interfaces to support interactive explanations so that humans can actively engage with AI assistance.
arXiv Detail & Related papers (2021-01-13T19:01:32Z) - Learning Models of Individual Behavior in Chess [4.793072503820555]
We develop highly accurate predictive models of individual human behavior in chess.
Our work demonstrates a way to bring AI systems into better alignment with the behavior of individual people.
arXiv Detail & Related papers (2020-08-23T18:24:21Z) - Learning to Complement Humans [67.38348247794949]
A rising vision for AI in the open world centers on the development of systems that can complement humans for perceptual, diagnostic, and reasoning tasks.
We demonstrate how an end-to-end learning strategy can be harnessed to optimize the combined performance of human-machine teams.
arXiv Detail & Related papers (2020-05-01T20:00:23Z) - Joint Inference of States, Robot Knowledge, and Human (False-)Beliefs [90.20235972293801]
Aiming to understand how human (false-)beliefs, a core socio-cognitive ability, affect human interactions with robots, this paper proposes a graphical model to jointly represent object states, robot knowledge, and human (false-)beliefs.
An inference algorithm is derived to fuse the individual parse graphs (pg) from all robots across multiple views into a joint pg, which affords more effective reasoning and helps overcome errors originating from a single view.
arXiv Detail & Related papers (2020-04-25T23:02:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.