My Actions Speak Louder Than Your Words: When User Behavior Predicts
Their Beliefs about Agents' Attributes
- URL: http://arxiv.org/abs/2301.09011v1
- Date: Sat, 21 Jan 2023 21:26:32 GMT
- Title: My Actions Speak Louder Than Your Words: When User Behavior Predicts
Their Beliefs about Agents' Attributes
- Authors: Nikolos Gurney and David Pynadath and Ning Wang
- Abstract summary: Behavioral science suggests that people sometimes use irrelevant information.
We identify an instance of this phenomenon, where users who experienced better outcomes in a human-agent interaction systematically rated the agent as having better abilities, being more benevolent, and exhibiting greater integrity in a post hoc assessment than users who experienced worse outcomes -- which were the result of their own behavior -- with the same agent.
Our analyses suggest the need to augment models so that they account for such biased perceptions, as well as mechanisms that let agents detect and even actively work to correct this and similar user biases.
- Score: 5.893351309010412
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An implicit expectation of asking users to rate agents, such as an AI
decision-aid, is that they will use only relevant information -- ask them about
an agent's benevolence, and they should consider whether or not it was kind.
Behavioral science, however, suggests that people sometimes use irrelevant
information. We identify an instance of this phenomenon, where users who
experienced better outcomes in a human-agent interaction systematically rated
the agent as having better abilities, being more benevolent, and exhibiting
greater integrity in a post hoc assessment than users who experienced worse
outcomes -- which were the result of their own behavior -- with the same agent.
Our analyses suggest the need to augment models so that they account for such
biased perceptions, as well as mechanisms that let agents detect and even
actively work to correct this and similar user biases.
Related papers
- Select to Perfect: Imitating desired behavior from large multi-agent data [28.145889065013687]
Desired characteristics for AI agents can be expressed by assigning desirability scores.
We first assess the effect of each individual agent's behavior on the collective desirability score.
We propose the concept of an agent's Exchange Value, which quantifies an individual agent's contribution to the collective desirability score.
arXiv Detail & Related papers (2024-05-06T15:48:24Z)
- Tell Me More! Towards Implicit User Intention Understanding of Language Model Driven Agents [110.25679611755962]
Current language model-driven agents often lack mechanisms for effective user participation, which is crucial given the vagueness commonly found in user instructions.
We introduce Intention-in-Interaction (IN3), a novel benchmark designed to inspect users' implicit intentions through explicit queries.
We empirically train Mistral-Interact, a powerful model that proactively assesses task vagueness, inquires about user intentions, and refines them into actionable goals.
arXiv Detail & Related papers (2024-02-14T14:36:30Z)
- AgentCF: Collaborative Learning with Autonomous Language Agents for Recommender Systems [112.76941157194544]
We propose AgentCF for simulating user-item interactions in recommender systems through agent-based collaborative filtering.
We creatively consider not only users but also items as agents, and develop a collaborative learning approach that optimizes both kinds of agents together.
Overall, the optimized agents exhibit diverse interaction behaviors within our framework, including user-item, user-user, item-item, and collective interactions.
arXiv Detail & Related papers (2023-10-13T16:37:14Z)
- Explaining Agent Behavior with Large Language Models [7.128139268426959]
We propose an approach to generate natural language explanations for an agent's behavior based only on observations of states and actions.
We show how a compact representation of the agent's behavior can be learned and used to produce plausible explanations.
arXiv Detail & Related papers (2023-09-19T06:13:24Z)
- Understanding How People Rate Their Conversations [73.17730062864314]
We conduct a study to better understand how people rate their interactions with conversational agents.
We focus on agreeableness and extraversion as variables that may explain variation in ratings.
arXiv Detail & Related papers (2022-06-01T00:45:32Z)
- Diversifying Agent's Behaviors in Interactive Decision Models [11.125175635860169]
Modelling other agents' behaviors plays an important role in decision models for interactions among multiple agents.
In this article, we investigate diversifying the behaviors of other agents in the subject agent's decision model prior to their interactions.
arXiv Detail & Related papers (2022-03-06T23:05:00Z)
- Explaining Reinforcement Learning Policies through Counterfactual Trajectories [147.7246109100945]
A human developer must validate that an RL agent will perform well at test-time.
Our method conveys how the agent performs under distribution shifts by showing the agent's behavior across a wider trajectory distribution.
In a user study, we demonstrate that our method enables users to score better than baseline methods on one of two agent validation tasks.
arXiv Detail & Related papers (2022-01-29T00:52:37Z)
- Learning Latent Representations to Influence Multi-Agent Interaction [65.44092264843538]
We propose a reinforcement learning-based framework for learning latent representations of an agent's policy.
We show that our approach outperforms the alternatives and learns to influence the other agent.
arXiv Detail & Related papers (2020-11-12T19:04:26Z)
- Learning to Incentivize Other Learning Agents [73.03133692589532]
We show how to equip RL agents with the ability to give rewards directly to other agents, using a learned incentive function.
Such agents significantly outperform standard RL and opponent-shaping agents in challenging general-sum Markov games.
Our work points toward more opportunities and challenges along the path to ensure the common good in a multi-agent future.
arXiv Detail & Related papers (2020-06-10T20:12:38Z)
- Studying the Effects of Cognitive Biases in Evaluation of Conversational Agents [10.248512149493443]
We conduct a study with 77 crowdsourced workers to understand the role of cognitive biases, specifically anchoring bias, when humans are asked to evaluate the output of conversational agents.
We find that the increased consistency in ratings across two experimental conditions may be a result of anchoring bias.
arXiv Detail & Related papers (2020-02-18T23:52:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.