Leveraging Contextual Counterfactuals Toward Belief Calibration
- URL: http://arxiv.org/abs/2307.06513v1
- Date: Thu, 13 Jul 2023 01:22:18 GMT
- Title: Leveraging Contextual Counterfactuals Toward Belief Calibration
- Authors: Qiuyi (Richard) Zhang, Michael S. Lee, Sherol Chen
- Abstract summary: The meta-alignment problem is that human beliefs are diverse and not aligned across populations.
In high regret situations, we observe that contextual counterfactuals and recourse costs are important in updating a decision maker's beliefs and the strengths to which such beliefs are held.
We introduce the 'belief calibration cycle' framework to more holistically calibrate this diversity of beliefs with context-driven counterfactual reasoning.
- Score: 1.418033127602866
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Beliefs and values are increasingly being incorporated into our AI systems
through alignment processes, such as carefully curating data collection
principles or regularizing the loss function used for training. However, the
meta-alignment problem is that these human beliefs are diverse and not aligned
across populations; furthermore, the implicit strength of each belief may not
be well calibrated even among humans, especially when trying to generalize
across contexts. Specifically, in high regret situations, we observe that
contextual counterfactuals and recourse costs are particularly important in
updating a decision maker's beliefs and the strengths to which such beliefs are
held. Therefore, we argue that including counterfactuals is key to an accurate
calibration of beliefs during alignment. To do this, we first segment belief
diversity into two categories: subjectivity (across individuals within a
population) and epistemic uncertainty (within an individual across different
contexts). By leveraging our notion of epistemic uncertainty, we introduce `the
belief calibration cycle' framework to more holistically calibrate this
diversity of beliefs with context-driven counterfactual reasoning by using a
multi-objective optimization. We empirically apply our framework for finding a
Pareto frontier of clustered optimal belief strengths that generalize across
different contexts, demonstrating its efficacy on a toy dataset for credit
decisions.
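The abstract only outlines the optimization; as a rough, hypothetical illustration of the kind of multi-objective search it describes (a Pareto frontier of belief strengths that trade off regret across contexts), the toy Python below scores candidate weight vectors for a simple credit-style approval rule under two synthetic contexts and keeps the non-dominated ones. The data, the linear rule, and all names are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): enumerate candidate belief-strength
# vectors for a toy credit-style decision rule, score their regret in several
# contexts, and keep the Pareto-optimal set.
import numpy as np

rng = np.random.default_rng(0)

# Toy applicants with two features (e.g., income, debt), plus a repayment
# outcome that differs by context (e.g., normal vs. downturn economy).
applicants = rng.normal(size=(200, 2))
outcomes = {
    "normal": (applicants[:, 0] - 0.3 * applicants[:, 1] + rng.normal(0, 0.5, 200)) > 0,
    "downturn": (applicants[:, 0] - 0.9 * applicants[:, 1] + rng.normal(0, 0.5, 200)) > 0,
}

def regret(weights, context):
    """Misclassification rate of a linear 'belief-weighted' approval rule in one context."""
    approve = applicants @ np.array(weights) > 0
    return float(np.mean(approve != outcomes[context]))

# Candidate belief-strength vectors (weight on income, weight on debt).
candidates = [(w0, w1) for w0 in np.linspace(0, 1, 11) for w1 in np.linspace(-1, 0, 11)]
scores = [(c, tuple(regret(c, ctx) for ctx in outcomes)) for c in candidates]

def dominated(s, others):
    # s is dominated if some other score is at least as good in every context and differs.
    return any(all(o <= si for o, si in zip(other, s)) and other != s for _, other in others)

pareto = [(c, s) for c, s in scores if not dominated(s, scores)]
print(f"{len(pareto)} Pareto-optimal belief-strength vectors across contexts")
```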
Related papers
- Cocoon: Robust Multi-Modal Perception with Uncertainty-Aware Sensor Fusion [26.979291099052194]
We introduce Cocoon, an object- and feature-level uncertainty-aware fusion framework.
Its key innovation lies in uncertainty quantification for heterogeneous representations.
Cocoon consistently outperforms existing static and adaptive methods in both normal and challenging conditions.
arXiv Detail & Related papers (2024-10-16T14:10:53Z) - Navigating Conflicting Views: Harnessing Trust for Learning [5.4486293124577125]
We develop a computational trust-based discounting method to enhance the existing trustworthy framework.
We evaluate our method on six real-world datasets, using Top-1 Accuracy, AUC-ROC for Uncertainty-Aware Prediction, Fleiss' Kappa, and a new metric called Multi-View Agreement with Ground Truth.
arXiv Detail & Related papers (2024-06-03T03:22:18Z) - Scalarisation-based risk concepts for robust multi-objective optimisation [4.12484724941528]
We study the multi-objective case of this problem.
We identify that the majority of robust multi-objective algorithms rely on two key operations: robustification and scalarisation.
As these operations are not necessarily commutative, the order that they are performed in has an impact on the resulting solutions.
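The non-commutativity noted above can be seen on a two-objective, two-scenario toy example; the sketch below (an illustration assuming worst-case robustification and weighted-sum scalarisation, not code from the paper) shows that the two orders give different values.

```python
# Illustrative only: worst-case robustification vs. weighted-sum scalarisation
# applied in the two possible orders, showing they need not agree.
objective_values = {            # objective values of one design under two scenarios
    "scenario_a": (1.0, 4.0),   # (f1, f2), lower is better
    "scenario_b": (3.0, 1.0),
}
weights = (0.5, 0.5)

# Order 1: robustify each objective (worst case over scenarios), then scalarise.
worst_per_objective = tuple(max(v[i] for v in objective_values.values()) for i in range(2))
robustify_then_scalarise = sum(w * f for w, f in zip(weights, worst_per_objective))  # 0.5*3 + 0.5*4 = 3.5

# Order 2: scalarise within each scenario, then robustify over scenarios.
scalarise_then_robustify = max(
    sum(w * f for w, f in zip(weights, v)) for v in objective_values.values()
)  # max(2.5, 2.0) = 2.5

print(robustify_then_scalarise, scalarise_then_robustify)  # 3.5 vs 2.5: the order matters
```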
arXiv Detail & Related papers (2024-05-16T16:11:00Z) - Longitudinal Counterfactuals: Constraints and Opportunities [59.11233767208572]
We propose using longitudinal data to assess and improve plausibility in counterfactuals.
We develop a metric that compares longitudinal differences to counterfactual differences, allowing us to evaluate how similar a counterfactual is to prior observed changes.
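Only the idea of the metric is summarized here; as a minimal sketch under the assumption that plausibility is measured by distance to previously observed longitudinal changes (the paper's exact definition may differ), one could score counterfactuals as follows.

```python
# Illustrative assumption (not the paper's exact metric): score a counterfactual's
# feature change by its distance to the nearest change actually observed
# longitudinally, so smaller scores mean more plausible counterfactuals.
import numpy as np

def longitudinal_plausibility(x, x_cf, before, after):
    """Distance from the counterfactual delta (x_cf - x) to the closest observed
    longitudinal delta (after[i] - before[i]) over individuals i."""
    cf_delta = np.asarray(x_cf) - np.asarray(x)
    observed_deltas = np.asarray(after) - np.asarray(before)
    return float(np.min(np.linalg.norm(observed_deltas - cf_delta, axis=1)))

# Toy usage: two features (e.g., income, debt) tracked at two time points.
before = np.array([[30.0, 10.0], [50.0, 5.0]])
after = np.array([[33.0, 9.0], [51.0, 4.0]])
score = longitudinal_plausibility([40.0, 8.0], [43.0, 7.0], before, after)
print(score)  # 0.0 here: the proposed change matches an observed change exactly
```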
arXiv Detail & Related papers (2024-02-29T20:17:08Z) - Generalizable Heterogeneous Federated Cross-Correlation and Instance Similarity Learning [60.058083574671834]
This paper presents a novel FCCL+, federated correlation and similarity learning with non-target distillation.
For the heterogeneity issue, we leverage irrelevant unlabeled public data for communication.
For catastrophic forgetting in the local updating stage, FCCL+ introduces Federated Non Target Distillation.
arXiv Detail & Related papers (2023-09-28T09:32:27Z) - Reconciling Predictive and Statistical Parity: A Causal Approach [68.59381759875734]
We propose a new causal decomposition formula for the fairness measures associated with predictive parity.
We show that the notions of statistical and predictive parity are not mutually exclusive but complementary, spanning a spectrum of fairness notions.
arXiv Detail & Related papers (2023-06-08T09:23:22Z) - Zero-shot Faithful Factual Error Correction [53.121642212060536]
Faithfully correcting factual errors is critical for maintaining the integrity of textual knowledge bases and preventing hallucinations in sequence-to-sequence models.
We present a zero-shot framework that formulates questions about input claims, looks for correct answers in the given evidence, and assesses the faithfulness of each correction based on its consistency with the evidence.
arXiv Detail & Related papers (2023-05-13T18:55:20Z) - Fairness and robustness in anti-causal prediction [73.693135253335]
Robustness to distribution shift and fairness have independently emerged as two important desiderata required of machine learning models.
While these two desiderata seem related, the connection between them is often unclear in practice.
By taking this perspective, we draw explicit connections between a common fairness criterion - separation - and a common notion of robustness.
arXiv Detail & Related papers (2022-09-20T02:41:17Z) - Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization [73.89239820192894]
We argue that automated counterfactual generation should regard several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z) - Fairness Through Counterfactual Utilities [0.0]
Group fairness definitions such as Demographic Parity and Equal Opportunity make assumptions about the underlying decision problem that restrict them to classification problems.
We provide a generalized set of group fairness definitions that unambiguously extend to all machine learning environments.
arXiv Detail & Related papers (2021-08-11T16:51:27Z)