Sequential Processing of Observations in Human Decision-Making Systems
- URL: http://arxiv.org/abs/2301.07767v1
- Date: Wed, 18 Jan 2023 20:22:05 GMT
- Title: Sequential Processing of Observations in Human Decision-Making Systems
- Authors: Nandan Sriranga, Baocheng Geng, Pramod K. Varshney
- Abstract summary: We consider a binary hypothesis testing problem involving a group of human decision-makers.
The humans use a belief model to accumulate the log-likelihood ratios until they cease observing the phenomenon.
When the global decision-maker is a machine, it fuses the human decisions using the Chair-Varshney rule.
- Score: 27.09995424490989
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this work, we consider a binary hypothesis testing problem involving a
group of human decision-makers. Due to the nature of human behavior, each human
decision-maker observes the phenomenon of interest sequentially up to a random
length of time. The humans use a belief model to accumulate the log-likelihood
ratios until they cease observing the phenomenon. The belief model is used to
characterize how the human decision-maker perceives observations at
different instants of time, i.e., some decision-makers may assign greater
importance to observations made earlier rather than later, and vice
versa. When the global decision-maker is a machine, it fuses the human
decisions using the Chair-Varshney rule with different weights for the
human decisions, where each weight is determined by the number of
observations the corresponding human used to arrive at their decision.
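The pipeline described in the abstract can be sketched in code: each human accumulates belief-weighted log-likelihood ratios up to a stopping time, and a fusion center combines the resulting binary decisions. This is an illustrative sketch only; the Gaussian observation model, the geometric discount belief model, and the classical Chair-Varshney weights based on fixed detection/false-alarm probabilities (rather than the paper's observation-count-dependent weights) are assumptions, not the paper's exact formulation.

```python
import math

def belief_weighted_llr(observations, mu0=0.0, mu1=1.0, sigma=1.0, discount=0.9):
    """Accumulate belief-weighted Gaussian log-likelihood ratios.

    A discount < 1 models a (hypothetical) human who assigns greater
    importance to earlier observations; discount > 1 would favor later ones.
    """
    total = 0.0
    for t, x in enumerate(observations):
        # LLR of H1 vs H0 for a Gaussian shift-in-mean observation
        llr = ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        total += (discount ** t) * llr
    return total

def chair_varshney_fuse(decisions, p_d, p_f, prior_ratio=1.0):
    """Classical Chair-Varshney fusion of local decisions u_i in {+1, -1}.

    p_d and p_f are each decision-maker's detection and false-alarm
    probabilities; decide H1 (+1) if the fused statistic is positive.
    """
    stat = math.log(prior_ratio)
    for u, pd, pf in zip(decisions, p_d, p_f):
        if u == +1:
            stat += math.log(pd / pf)
        else:
            stat += math.log((1 - pd) / (1 - pf))
    return +1 if stat > 0 else -1
```

For example, fusing decisions `[+1, +1, -1]` from three humans with detection probabilities `[0.9, 0.8, 0.7]` and false-alarm probabilities `[0.1, 0.2, 0.3]` yields a global decision of +1, since the two reliable positive votes outweigh the single negative one.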
Related papers
- Online Decision Mediation [72.80902932543474]
Consider learning a decision support assistant to serve as an intermediary between (oracle) expert behavior and (imperfect) human behavior.
In clinical diagnosis, fully-autonomous machine behavior is often beyond ethical affordances.
arXiv Detail & Related papers (2023-10-28T05:59:43Z)
- Learning signatures of decision making from many individuals playing the same game [54.33783158658077]
We design a predictive framework that learns representations to encode an individual's 'behavioral style'.
We apply our method to a large-scale behavioral dataset from 1,000 humans playing a 3-armed bandit task.
arXiv Detail & Related papers (2023-02-21T21:41:53Z)
- Personalized Decision Making -- A Conceptual Introduction [8.008051073614174]
We show that by combining experimental and observational studies we can obtain valuable information about individual behavior.
We conclude that by combining experimental and observational studies we can improve decisions over those obtained from experimental studies alone.
arXiv Detail & Related papers (2022-08-19T22:21:29Z)
- Empirical Estimates on Hand Manipulation are Recoverable: A Step Towards Individualized and Explainable Robotic Support in Everyday Activities [80.37857025201036]
A key challenge for robotic systems is to figure out the behavior of another agent.
Drawing correct inferences is especially challenging when (confounding) factors are not controlled experimentally.
We propose equipping robots with the necessary tools to conduct observational studies on people.
arXiv Detail & Related papers (2022-01-27T22:15:56Z)
- On the Fairness of Machine-Assisted Human Decisions [3.4069627091757178]
We show that the inclusion of a biased human decision-maker can revert common relationships between the structure of the algorithm and the qualities of resulting decisions.
In a lab experiment, we demonstrate how predictions informed by gender-specific information can reduce average gender disparities in decisions.
arXiv Detail & Related papers (2021-10-28T17:24:45Z)
- Cognitive science as a source of forward and inverse models of human decisions for robotics and control [13.502912109138249]
We look at how cognitive science can provide forward models of human decision-making.
We highlight approaches that synthesize blackbox and theory-driven modeling.
We aim to provide readers with a glimpse of the range of frameworks, methodologies, and actionable insights that lie at the intersection of cognitive science and control research.
arXiv Detail & Related papers (2021-09-01T00:28:28Z)
- Probabilistic Human Motion Prediction via A Bayesian Neural Network [71.16277790708529]
We propose a probabilistic model for human motion prediction in this paper.
Our model could generate several future motions when given an observed motion sequence.
We extensively validate our approach on a large scale benchmark dataset Human3.6m.
arXiv Detail & Related papers (2021-07-14T09:05:33Z)
- Learning the Preferences of Uncertain Humans with Inverse Decision Theory [10.926992035470372]
We study the setting of inverse decision theory (IDT), a framework where a human is observed making non-sequential binary decisions under uncertainty.
In IDT, the human's preferences are conveyed through their loss function, which expresses a tradeoff between different types of mistakes.
We show that it is actually easier to identify preferences when the decision problem is more uncertain.
arXiv Detail & Related papers (2021-06-19T00:11:13Z)
- Indecision Modeling [50.00689136829134]
It is important that AI systems act in ways which align with human values.
People are often indecisive, and especially so when their decision has moral implications.
arXiv Detail & Related papers (2020-12-15T18:32:37Z)
- When Does Uncertainty Matter?: Understanding the Impact of Predictive Uncertainty in ML Assisted Decision Making [68.19284302320146]
We carry out user studies to assess how people with differing levels of expertise respond to different types of predictive uncertainty.
We found that showing posterior predictive distributions led to smaller disagreements with the ML model's predictions.
This suggests that posterior predictive distributions can potentially serve as useful decision aids, though they should be used with caution, taking into account the type of distribution and the expertise of the human.
arXiv Detail & Related papers (2020-11-12T02:23:53Z)
- Implications of Human Irrationality for Reinforcement Learning [26.76732313120685]
We argue that human decision making may be a useful source of ideas for constraining how machine learning problems are defined.
One promising idea concerns human decision making that is dependent on apparently irrelevant aspects of the choice context.
We propose a novel POMDP model for contextual choice tasks and show that, despite the apparent irrationalities, a reinforcement learner can take advantage of the way that humans make decisions.
arXiv Detail & Related papers (2020-06-07T07:44:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.