Learning Human-Compatible Representations for Case-Based Decision
Support
- URL: http://arxiv.org/abs/2303.04809v1
- Date: Mon, 6 Mar 2023 19:04:26 GMT
- Title: Learning Human-Compatible Representations for Case-Based Decision
Support
- Authors: Han Liu, Yizhou Tian, Chacha Chen, Shi Feng, Yuxin Chen, Chenhao Tan
- Abstract summary: Algorithmic case-based decision support provides examples to help humans make sense of predicted labels.
Human-compatible representations identify nearest neighbors that are perceived as more similar by humans.
- Score: 36.01560961898229
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Algorithmic case-based decision support provides examples to help
humans make sense of predicted labels and aids them in decision-making tasks. Despite the
promising performance of supervised learning, representations learned by
supervised models may not align well with human intuitions: what models
consider as similar examples can be perceived as distinct by humans. As a
result, they have limited effectiveness in case-based decision support. In this
work, we incorporate ideas from metric learning with supervised learning to
examine the importance of alignment for effective decision support. In addition
to instance-level labels, we use human-provided triplet judgments to learn
human-compatible decision-focused representations. Using both synthetic data
and human subject experiments in multiple classification tasks, we demonstrate
that such representations are better aligned with human perception than
representations optimized solely for classification. Human-compatible
representations identify nearest neighbors that are perceived as more similar
by humans and allow humans to make more accurate predictions, leading to
substantial improvements in human decision accuracy (17.8% in butterfly vs.
moth classification and 13.2% in pneumonia classification).
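
As a rough sketch of the approach described in the abstract (not the authors' implementation), the snippet below jointly optimizes a standard cross-entropy classification loss and a triplet margin loss over human-provided triplet judgments, so the learned embedding respects both instance labels and human similarity perception. The encoder architecture, margin, and weighting coefficient `lam` are illustrative assumptions.

```python
# Minimal sketch, assuming PyTorch; hyperparameters and architecture are illustrative.
import torch
import torch.nn as nn

# Shared embedding network and a classification head on top of it.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 64))
classifier = nn.Linear(64, 2)  # e.g. butterfly vs. moth

ce_loss = nn.CrossEntropyLoss()
triplet_loss = nn.TripletMarginLoss(margin=1.0)  # hinge on embedding distances
lam = 0.5  # assumed trade-off between label fit and human-judged similarity

params = list(encoder.parameters()) + list(classifier.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)

def training_step(x, y, anchor, positive, negative):
    """One update combining instance labels with a human triplet judgment:
    humans judged `positive` as more similar to `anchor` than `negative`."""
    loss_cls = ce_loss(classifier(encoder(x)), y)
    # Pull human-judged-similar pairs together, push dissimilar ones apart.
    loss_tri = triplet_loss(encoder(anchor), encoder(positive), encoder(negative))
    loss = loss_cls + lam * loss_tri
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At decision-support time, nearest neighbors in the learned embedding can be shown as supporting cases; under this joint objective they should track human similarity judgments more closely than neighbors from a purely supervised embedding.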
Related papers
- Learning to Represent Individual Differences for Choice Decision Making [37.97312716637515]
We use representation learning to characterize individual differences in human performance on an economic decision-making task.
We demonstrate that models using representation learning to capture individual differences consistently improve decision predictions.
Our results suggest that representation learning offers a useful and flexible tool for capturing individual differences.
arXiv Detail & Related papers (2025-03-27T17:10:05Z) - Turing Representational Similarity Analysis (RSA): A Flexible Method for Measuring Alignment Between Human and Artificial Intelligence [0.62914438169038]
We developed Turing Representational Similarity Analysis (RSA), a method that uses pairwise similarity ratings to quantify alignment between AIs and humans.
We tested this approach on semantic alignment across text and image modalities, measuring how different Large Language and Vision Language Model (LLM and VLM) similarity judgments aligned with human responses at both group and individual levels.
arXiv Detail & Related papers (2024-11-30T20:24:52Z) - Learning to Assist Humans without Inferring Rewards [65.28156318196397]
We build upon prior work that studies assistance through the lens of empowerment.
An assistive agent aims to maximize the influence of the human's actions.
We prove that these representations estimate a notion of empowerment similar to that studied in prior work.
arXiv Detail & Related papers (2024-11-04T21:31:04Z) - Learning Human-Aligned Representations with Contrastive Learning and Generative Similarity [9.63129238638334]
Humans rely on effective representations to learn from few examples and abstract useful information from sensory data.
We use a Bayesian notion of generative similarity whereby two data points are considered similar if they are likely to have been sampled from the same distribution.
We demonstrate the utility of our approach by showing that it can be used to capture human-like representations of shape regularity, abstract Euclidean geometric concepts, and semantic hierarchies for natural images.
arXiv Detail & Related papers (2024-05-29T18:01:58Z) - Does AI help humans make better decisions? A statistical evaluation framework for experimental and observational studies [0.43981305860983716]
We show how to compare the performance of three alternative decision-making systems: human-alone, human-with-AI, and AI-alone.
We find that the risk assessment recommendations do not improve the classification accuracy of a judge's decision to impose cash bail.
arXiv Detail & Related papers (2024-03-18T01:04:52Z) - Decision Theoretic Foundations for Experiments Evaluating Human Decisions [18.27590643693167]
We argue that to attribute loss in human performance to forms of bias, an experiment must provide participants with the information that a rational agent would need to identify the utility-maximizing decision.
As a demonstration, we evaluate the extent to which recent evaluations of decision-making from the literature on AI-assisted decisions achieve these criteria.
arXiv Detail & Related papers (2024-01-25T16:21:37Z) - Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach [61.04606493712002]
Susceptibility to misinformation describes the degree of belief in unverifiable claims, which is not directly observable.
Existing susceptibility studies heavily rely on self-reported beliefs.
We propose a computational approach to model users' latent susceptibility levels.
arXiv Detail & Related papers (2023-11-16T07:22:56Z) - Ground(less) Truth: A Causal Framework for Proxy Labels in
Human-Algorithm Decision-Making [29.071173441651734]
We identify five sources of target variable bias that can impact the validity of proxy labels in human-AI decision-making tasks.
We develop a causal framework to disentangle the relationship between each bias.
We conclude by discussing opportunities to better address target variable bias in future research.
arXiv Detail & Related papers (2023-02-13T16:29:11Z) - Taking Advice from (Dis)Similar Machines: The Impact of Human-Machine
Similarity on Machine-Assisted Decision-Making [11.143223527623821]
We study how the similarity of human and machine errors influences human perceptions of and interactions with algorithmic decision aids.
We find that (i) people perceive more similar decision aids as more useful, accurate, and predictable, and that (ii) people are more likely to take opposing advice from more similar decision aids.
arXiv Detail & Related papers (2022-09-08T13:50:35Z) - What I Cannot Predict, I Do Not Understand: A Human-Centered Evaluation
Framework for Explainability Methods [6.232071870655069]
We show that theoretical measures used to score explainability methods poorly reflect the practical usefulness of individual attribution methods in real-world scenarios.
Our results suggest a critical need to develop better explainability methods and to deploy human-centered evaluation approaches.
arXiv Detail & Related papers (2021-12-06T18:36:09Z) - Discriminative Attribution from Counterfactuals [64.94009515033984]
We present a method for neural network interpretability by combining feature attribution with counterfactual explanations.
We show that this method can be used to quantitatively evaluate the performance of feature attribution methods in an objective manner.
arXiv Detail & Related papers (2021-09-28T00:53:34Z) - Differentiable Multi-Granularity Human Representation Learning for
Instance-Aware Human Semantic Parsing [131.97475877877608]
A new bottom-up regime is proposed to learn category-level human semantic segmentation and multi-person pose estimation in a joint and end-to-end manner.
It is a compact, efficient and powerful framework that exploits structural information over different human granularities.
Experiments on three instance-aware human datasets show that our model outperforms other bottom-up alternatives with much more efficient inference.
arXiv Detail & Related papers (2021-03-08T06:55:00Z) - Action similarity judgment based on kinematic primitives [48.99831733355487]
We investigate to what extent a computational model based on kinematics can determine action similarity.
The chosen model has its roots in developmental robotics and performs action classification based on learned kinematic primitives.
The results show that both the model and human performance are highly accurate in an action similarity task based on kinematic-level features.
arXiv Detail & Related papers (2020-08-30T13:58:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.