Inclusive Artificial Intelligence
- URL: http://arxiv.org/abs/2212.12633v1
- Date: Sat, 24 Dec 2022 02:13:26 GMT
- Title: Inclusive Artificial Intelligence
- Authors: Dilip Arumugam, Shi Dong, Benjamin Van Roy
- Abstract summary: Methods for assessing and comparing generative AIs incentivize responses that serve a hypothetical representative individual.
We propose an alternative evaluation method that instead prioritizes inclusive AIs.
- Score: 27.09425461169165
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Prevailing methods for assessing and comparing generative AIs incentivize
responses that serve a hypothetical representative individual. Evaluating
models in these terms presumes homogeneous preferences across the population
and engenders selection of agglomerative AIs, which fail to represent the
diverse range of interests across individuals. We propose an alternative
evaluation method that instead prioritizes inclusive AIs, which provably retain
the requisite knowledge not only for subsequent response customization to
particular segments of the population but also for utility-maximizing
decisions.
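To make the contrast concrete, the gap between the two evaluation styles can be sketched numerically. The toy population of preference segments, the per-segment utilities, and the worst-served-segment criterion below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

# Minimal sketch of the evaluation gap described in the abstract; the
# segment/response utilities and the worst-case aggregation are invented.

rng = np.random.default_rng(0)
n_segments, n_responses = 4, 5

# utility[s, r]: how much population segment s values candidate response r
utility = rng.uniform(size=(n_segments, n_responses))
segment_weights = np.full(n_segments, 1 / n_segments)

# Representative-individual evaluation: average preferences first, then
# pick the single response that best serves that hypothetical average.
avg_utility = segment_weights @ utility   # shape: (n_responses,)
agglomerative_score = avg_utility.max()

# Inclusive evaluation: keep segments distinct and credit the model with the
# utility each segment obtains after per-segment response customization.
per_segment_best = utility.max(axis=1)    # best response for each segment
inclusive_score = per_segment_best.min()  # utility of the worst-served segment

print(f"agglomerative score: {agglomerative_score:.3f}")
print(f"inclusive (worst-segment) score: {inclusive_score:.3f}")
```

A model selected to maximize the agglomerative score can still leave some segments poorly served, which is the failure mode the abstract attributes to agglomerative AIs.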
Related papers
- Exploring the Lands Between: A Method for Finding Differences between AI-Decisions and Human Ratings through Generated Samples [45.209635328908746]
We propose a method to find samples in the latent space of a generative model.
By presenting those samples to both the decision-making model and human raters, we can identify areas where the model's decisions align with human intuition.
We apply this method to a face recognition model and collect a dataset of 11,200 human ratings from 100 participants.
arXiv Detail & Related papers (2024-09-19T14:14:08Z)
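A schematic sketch of the sampling procedure from the entry above; `generator`, `decision_model`, and `human_rating` are hypothetical stand-ins for the paper's generative model, the decision model under study, and aggregated rater labels.

```python
import numpy as np

rng = np.random.default_rng(1)

def generator(z):
    # Stand-in for a generative model decoding latent z into a sample.
    return np.tanh(z)

def decision_model(x):
    # Stand-in for the binary decision of the model under study.
    return float(x.mean() > 0)

def human_rating(x):
    # Stand-in for an aggregated human judgment of the same sample.
    return float(x[0] > 0)

disagreements = []
for _ in range(1000):
    z = rng.normal(size=8)       # sample a point in latent space
    x = generator(z)             # decode it into a concrete sample
    if decision_model(x) != human_rating(x):
        disagreements.append(z)  # region where model and humans differ

print(f"found {len(disagreements)} candidate disagreement samples")
```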
- Aligning Large Language Models from Self-Reference AI Feedback with one General Principle [61.105703857868775]
We propose a self-reference-based AI feedback framework that enables a 13B Llama2-Chat to provide high-quality feedback.
Specifically, we allow the AI to first respond to the user's instructions, then generate criticism of other answers based on its own response as a reference.
Finally, we determine which answer better fits human preferences according to the criticism.
arXiv Detail & Related papers (2024-06-17T03:51:46Z)
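The self-reference loop from the entry above reduces to three chained model calls. Here `llm` is a placeholder for any chat-model call (the paper uses a 13B Llama2-Chat), and the prompts are invented for illustration.

```python
def llm(prompt: str) -> str:
    # Placeholder for a chat-model completion call.
    return "..."

def self_reference_feedback(instruction: str, answer_a: str, answer_b: str) -> str:
    # Step 1: the feedback model first answers the instruction itself.
    own_answer = llm(f"Answer the following instruction:\n{instruction}")

    # Step 2: it criticizes each candidate, using its own answer as reference.
    critique_a = llm(
        f"Reference answer:\n{own_answer}\n\n"
        f"Criticize this candidate answer:\n{answer_a}"
    )
    critique_b = llm(
        f"Reference answer:\n{own_answer}\n\n"
        f"Criticize this candidate answer:\n{answer_b}"
    )

    # Step 3: it judges which candidate better fits human preferences,
    # reasoning from the two critiques rather than the raw answers alone.
    return llm(
        f"Critique of A:\n{critique_a}\n\nCritique of B:\n{critique_b}\n\n"
        "Which answer is better, A or B?"
    )
```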
- Understanding and Evaluating Human Preferences for AI Generated Images with Instruction Tuning [58.41087653543607]
We first establish a novel Image Quality Assessment (IQA) database for AIGIs, termed AIGCIQA2023+.
This paper presents a MINT-IQA model to evaluate and explain human preferences for AIGIs from Multi-perspectives with INstruction Tuning.
arXiv Detail & Related papers (2024-05-12T17:45:11Z)
- Towards Human-AI Deliberation: Design and Evaluation of LLM-Empowered Deliberative AI for AI-Assisted Decision-Making [47.33241893184721]
In AI-assisted decision-making, humans often passively review the AI's suggestion and decide whether to accept or reject it as a whole.
We propose Human-AI Deliberation, a novel framework to promote human reflection and discussion on conflicting human-AI opinions in decision-making.
Based on theories in human deliberation, this framework engages humans and AI in dimension-level opinion elicitation, deliberative discussion, and decision updates.
arXiv Detail & Related papers (2024-03-25T14:34:06Z)
- Does AI help humans make better decisions? A statistical evaluation framework for experimental and observational studies [0.43981305860983716]
We show how to compare the performance of three alternative decision-making systems: human-alone, human-with-AI, and AI-alone.
We find that the risk assessment recommendations do not improve the classification accuracy of a judge's decision to impose cash bail.
arXiv Detail & Related papers (2024-03-18T01:04:52Z)
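The three-way comparison from the entry above amounts to estimating classification accuracy for each decision-making system against observed outcomes. The synthetic decisions and error rates below are invented and do not reproduce the paper's statistical framework.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
outcome = rng.integers(0, 2, size=n)  # ground-truth binary outcome

# Simulate each system's decision as the truth flipped with some error rate.
error_rates = {"human-alone": 0.30, "human-with-AI": 0.28, "AI-alone": 0.25}

for system, err in error_rates.items():
    flips = rng.random(n) < err
    decision = np.where(flips, 1 - outcome, outcome)
    accuracy = (decision == outcome).mean()
    print(f"{system:>13}: classification accuracy = {accuracy:.3f}")
```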
- Value Preferences Estimation and Disambiguation in Hybrid Participatory Systems [3.7846812749505134]
We envision a hybrid participatory system where participants make choices and provide motivations for those choices.
We focus on situations where a conflict is detected between participants' choices and motivations.
We propose methods for estimating value preferences while addressing detected inconsistencies by interacting with the participants.
arXiv Detail & Related papers (2024-02-26T17:16:28Z)
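One way to picture the conflict-detection step from the entry above: compare the value implied by a participant's choice with the values expressed in their motivation, and flag mismatches for follow-up interaction. The value annotations and decision rule below are invented for illustration, not the paper's method.

```python
# Assume each option is annotated with the value it most promotes.
option_values = {"expand_park": "nature", "build_housing": "affordability"}

def detect_conflict(choice: str, motivation_values: set) -> bool:
    # A choice implies support for that option's value; if the participant's
    # stated motivation emphasizes different values, flag for follow-up.
    return option_values[choice] not in motivation_values

participant = {"choice": "expand_park", "motivation_values": {"affordability"}}

if detect_conflict(participant["choice"], participant["motivation_values"]):
    print("conflict detected: ask the participant to disambiguate")
```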
- A System's Approach Taxonomy for User-Centred XAI: A Survey [0.6882042556551609]
We propose a unified, inclusive and user-centred taxonomy for XAI based on the principles of General Systems Theory.
This provides a basis for evaluating the appropriateness of XAI approaches for all user types, including both developers and end users.
arXiv Detail & Related papers (2023-03-06T00:50:23Z)
- Reinforcement Learning with Heterogeneous Data: Estimation and Inference [84.72174994749305]
We introduce the K-Heterogeneous Markov Decision Process (K-Hetero MDP) to address sequential decision problems with population heterogeneity.
We propose the Auto-Clustered Policy Evaluation (ACPE) for estimating the value of a given policy, and the Auto-Clustered Policy Iteration (ACPI) for estimating the optimal policy in a given policy class.
We present simulations to support our theoretical findings, and we conduct an empirical study on the standard MIMIC-III dataset.
arXiv Detail & Related papers (2022-01-31T20:58:47Z)
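A rough sketch of the auto-clustered idea from the entry above: first group heterogeneous episodes into inferred subpopulations, then estimate the policy's value within each group. The episode features, Monte Carlo returns, and known number of clusters are simplifying assumptions, not the paper's ACPE estimator.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
n_episodes = 300

# Toy heterogeneous data: two latent subpopulations with different returns.
latent_type = rng.integers(0, 2, size=n_episodes)
features = rng.normal(loc=latent_type[:, None] * 2.0, size=(n_episodes, 4))
returns = rng.normal(loc=np.where(latent_type == 0, 1.0, 3.0), scale=0.5)

# Step 1: infer clusters from episode-level features (K assumed known here).
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

# Step 2: estimate the policy's value separately within each cluster.
for k in range(2):
    value_k = returns[clusters == k].mean()
    print(f"cluster {k}: estimated policy value = {value_k:.2f}")
```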
- Indecision Modeling [50.00689136829134]
It is important that AI systems act in ways which align with human values.
People are often indecisive, and especially so when their decision has moral implications.
arXiv Detail & Related papers (2020-12-15T18:32:37Z)
- Joint Optimization of AI Fairness and Utility: A Human-Centered Approach [45.04980664450894]
We argue that because different fairness criteria sometimes cannot be simultaneously satisfied, it is key to elicit and adhere to human policymakers' preferences on how to trade off these objectives.
We propose a framework and some exemplar methods for eliciting such preferences and for optimizing an AI model according to these preferences.
arXiv Detail & Related papers (2020-02-05T03:31:48Z)
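The tradeoff from the entry above can be pictured as scoring candidate models under an elicited preference weight between utility and a fairness criterion. The candidates, metrics, and linear scalarization below are illustrative assumptions rather than the paper's framework.

```python
# Candidate models with a utility metric (accuracy) and a fairness metric
# (demographic parity gap); all numbers are invented for illustration.
candidates = [
    {"name": "model A", "accuracy": 0.91, "parity_gap": 0.12},
    {"name": "model B", "accuracy": 0.88, "parity_gap": 0.05},
    {"name": "model C", "accuracy": 0.84, "parity_gap": 0.01},
]

# Suppose preference elicitation yields this weight on fairness vs. utility.
fairness_weight = 0.6

def score(model: dict) -> float:
    # Reward accuracy (utility); penalize the parity gap (unfairness).
    return ((1 - fairness_weight) * model["accuracy"]
            - fairness_weight * model["parity_gap"])

best = max(candidates, key=score)
print(f"selected under the elicited preference: {best['name']}")
```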
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.