Being Right for Whose Right Reasons?
- URL: http://arxiv.org/abs/2306.00639v2
- Date: Fri, 13 Oct 2023 14:28:03 GMT
- Title: Being Right for Whose Right Reasons?
- Authors: Terne Sasha Thorn Jakobsen, Laura Cabello, Anders Søgaard
- Abstract summary: This paper presents what we believe is a first-of-its-kind collection of human rationale annotations augmented with the annotators' demographic information.
We cover three datasets spanning sentiment analysis and common-sense reasoning, and six demographic groups.
We find that models are biased towards aligning best with older and/or white annotators.
- Score: 11.120861224127303
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explainability methods are used to benchmark the extent to which model
predictions align with human rationales, i.e., are 'right for the right
reasons'. Previous work has failed to acknowledge, however, that what counts as
a rationale is sometimes subjective. This paper presents what we believe is a
first-of-its-kind collection of human rationale annotations augmented with
the annotators' demographic information. We cover three datasets spanning
sentiment analysis and common-sense reasoning, and six demographic groups
(balanced across age and ethnicity). Such data enables us to ask both what
demographics our predictions align with and whose reasoning patterns our
models' rationales align with. We find systematic inter-group annotator
disagreement and show how 16 Transformer-based models align better with
rationales provided by certain demographic groups: We find that models are
biased towards aligning best with older and/or white annotators. We zoom in on
the effects of model size and model distillation, finding -- contrary to our
expectations -- negative correlations between model size and rationale
agreement as well as no evidence that either model size or model distillation
improves fairness.
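The per-group alignment analysis described in the abstract can be sketched as follows. This is a hedged illustration only, not the authors' released code: it assumes rationales are represented as binary token masks and uses token-level F1 as the agreement measure; the helper names `token_f1` and `group_agreement` are hypothetical.

```python
# Minimal sketch: average rationale agreement (token-level F1) between a
# model's rationale mask and human rationale masks, grouped by annotator
# demographic. Higher F1 for one group means the model's rationales align
# better with that group's reasoning patterns.
from collections import defaultdict

def token_f1(model_mask, human_mask):
    """Token-level F1 between two binary rationale masks of equal length."""
    tp = sum(m and h for m, h in zip(model_mask, human_mask))
    pred, gold = sum(model_mask), sum(human_mask)
    if pred == 0 or gold == 0:
        return 0.0
    precision, recall = tp / pred, tp / gold
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def group_agreement(annotations, model_rationales):
    """Average rationale F1 per demographic group.

    annotations: iterable of (example_id, group, human_mask) tuples.
    model_rationales: dict mapping example_id -> binary mask from the model.
    """
    scores = defaultdict(list)
    for ex_id, group, human_mask in annotations:
        scores[group].append(token_f1(model_rationales[ex_id], human_mask))
    return {g: sum(v) / len(v) for g, v in scores.items()}

# Toy example: two annotators from different groups mark different tokens
# of the same example as the rationale; the model agrees with one of them.
annotations = [
    (0, "age_60+", [1, 1, 0, 0]),
    (0, "age_18-25", [0, 0, 1, 1]),
]
model_rationales = {0: [1, 1, 0, 0]}
print(group_agreement(annotations, model_rationales))
# → {'age_60+': 1.0, 'age_18-25': 0.0}
```

Averaging such scores over many examples and annotators is one way to make the paper's question concrete: a systematic gap between groups would indicate the model's rationales align better with some demographics than others.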
Related papers
- Reasoning Towards Fairness: Mitigating Bias in Language Models through Reasoning-Guided Fine-Tuning [12.559028963968247]
We investigate the crucial relationship between a model's reasoning ability and fairness.
We find that larger models with stronger reasoning abilities exhibit substantially lower stereotypical bias.
We introduce ReGiFT, a novel approach that extracts structured reasoning traces from advanced reasoning models and infuses them into models that lack such capabilities.
arXiv Detail & Related papers (2025-04-08T03:21:51Z)
- Exploring Bias in over 100 Text-to-Image Generative Models [49.60774626839712]
We investigate bias trends in text-to-image generative models over time, focusing on the increasing availability of models through open platforms like Hugging Face.
We assess bias across three key dimensions: (i) distribution bias, (ii) generative hallucination, and (iii) generative miss-rate.
Our findings indicate that artistic and style-transferred models exhibit significant bias, whereas foundation models, benefiting from broader training distributions, are becoming progressively less biased.
arXiv Detail & Related papers (2025-03-11T03:40:44Z)
- Fact-or-Fair: A Checklist for Behavioral Testing of AI Models on Fairness-Related Queries [85.909363478929]
In this study, we focus on 19 real-world statistics collected from authoritative sources.
We develop a checklist comprising objective and subjective queries to analyze behavior of large language models.
We propose metrics to assess factuality and fairness, and formally prove the inherent trade-off between these two aspects.
arXiv Detail & Related papers (2025-02-09T10:54:11Z)
- "Patriarchy Hurts Men Too." Does Your Model Agree? A Discussion on Fairness Assumptions [3.706222947143855]
In the context of group fairness, this approach often obscures implicit assumptions about how bias is introduced into the data.
Implicitly, we assume that the biasing process is a monotonic function of the fair scores, dependent solely on the sensitive attribute.
If the biasing process is more complex than mere monotonicity, we need to identify and reject our implicit assumptions.
arXiv Detail & Related papers (2024-08-01T07:06:30Z)
- Less can be more: representational vs. stereotypical gender bias in facial expression recognition [3.9698529891342207]
Machine learning models can inherit biases from their training data, leading to discriminatory or inaccurate predictions.
This paper investigates the propagation of demographic biases from datasets into machine learning models.
We focus on the gender demographic component, analyzing two types of bias: representational and stereotypical.
arXiv Detail & Related papers (2024-06-25T09:26:49Z)
- Quantifying Bias in Text-to-Image Generative Models [49.60774626839712]
Bias in text-to-image (T2I) models can propagate unfair social representations and may be used to aggressively market ideas or push controversial agendas.
Existing T2I model bias evaluation methods focus only on social biases.
We propose an evaluation methodology to quantify general biases in T2I generative models, without any preconceived notions.
arXiv Detail & Related papers (2023-12-20T14:26:54Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- Achieving Counterfactual Fairness with Imperfect Structural Causal Model [11.108866104714627]
We propose a novel minimax game-theoretic model for counterfactual fairness.
We also theoretically prove the error bound of the proposed minimax model.
Empirical experiments on multiple real-world datasets illustrate our superior performance in both accuracy and fairness.
arXiv Detail & Related papers (2023-03-26T09:37:29Z)
- Debiasing Vision-Language Models via Biased Prompts [79.04467131711775]
We propose a general approach for debiasing vision-language foundation models by projecting out biased directions in the text embedding.
We show that debiasing only the text embedding with a calibrated projection matrix suffices to yield robust classifiers and fair generative models.
arXiv Detail & Related papers (2023-01-31T20:09:33Z)
- Fairness-aware Summarization for Justified Decision-Making [16.47665757950391]
We focus on the problem of (un)fairness in the justification of the text-based neural models.
We propose a fairness-aware summarization mechanism to detect and counteract the bias in such models.
arXiv Detail & Related papers (2021-07-13T17:04:10Z)
- Why do classifier accuracies show linear trends under distribution shift? [58.40438263312526]
Accuracies of models on one data distribution are approximately linear functions of their accuracies on another distribution.
We assume the probability that two models agree in their predictions is higher than what can be inferred from their accuracy levels alone.
We show that a linear trend must occur when evaluating models on two distributions unless the size of the distribution shift is large.
arXiv Detail & Related papers (2020-12-31T07:24:30Z)
- To what extent do human explanations of model behavior align with actual model behavior? [91.67905128825402]
We investigated the extent to which human-generated explanations of models' inference decisions align with how models actually make these decisions.
We defined two alignment metrics that quantify how well natural language human explanations align with model sensitivity to input words.
We find that a model's alignment with human explanations is not predicted by the model's accuracy on NLI.
arXiv Detail & Related papers (2020-12-24T17:40:06Z)
- Are Visual Explanations Useful? A Case Study in Model-in-the-Loop Prediction [49.254162397086006]
We study explanations based on visual saliency in an image-based age prediction task.
We find that presenting model predictions improves human accuracy.
However, explanations of various kinds fail to significantly alter human accuracy or trust in the model.
arXiv Detail & Related papers (2020-07-23T20:39:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.