An Information-Flow Perspective on Algorithmic Fairness
- URL: http://arxiv.org/abs/2312.10128v1
- Date: Fri, 15 Dec 2023 14:46:36 GMT
- Title: An Information-Flow Perspective on Algorithmic Fairness
- Authors: Samuel Teuber and Bernhard Beckert
- Abstract summary: This work presents insights gained by investigating the relationship between algorithmic fairness and the concept of secure information flow.
We derive a new quantitative notion of fairness called fairness spread, which can be easily analyzed using quantitative information flow.
- Score: 0.951828574518325
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work presents insights gained by investigating the relationship between
algorithmic fairness and the concept of secure information flow. The problem of
enforcing secure information flow is well-studied in the context of information
security: If secret information may "flow" through an algorithm or program in
such a way that it can influence the program's output, then that is considered
insecure information flow as attackers could potentially observe (parts of) the
secret.
There is a strong correspondence between secure information flow and
algorithmic fairness: if protected attributes such as race, gender, or age are
treated as secret program inputs, then secure information flow means that these
"secret" attributes cannot influence the result of a program. While most
research in algorithmic fairness evaluation concentrates on studying the impact
of algorithms (often treating the algorithm as a black-box), the concepts
derived from information flow can be used both for the analysis of disparate
treatment as well as disparate impact w.r.t. a structural causal model.
In this paper, we examine the relationship between quantitative as well as
qualitative information-flow properties and fairness. Moreover, based on this
duality, we derive a new quantitative notion of fairness called fairness
spread, which can be easily analyzed using quantitative information flow and
which strongly relates to counterfactual fairness. We demonstrate that
off-the-shelf tools for information-flow properties can be used in order to
formally analyze a program's algorithmic fairness properties, including the new
notion of fairness spread as well as established notions such as demographic
parity.
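The correspondence described in the abstract can be made concrete for small programs by exhaustive enumeration: treat the protected attribute as the "secret" input, check qualitatively whether it ever influences the output (non-interference, i.e. absence of disparate treatment), and measure the demographic parity gap across protected-attribute values. The sketch below is illustrative only and is not the paper's tooling; `loan_decision` and the uniform input distribution are assumptions made for the example.

```python
def loan_decision(income_bracket, age_group):
    # Toy scoring program; age_group plays the role of the
    # "secret" protected input (an assumption for this sketch).
    score = income_bracket * 2 + (1 if age_group == 1 else 0)
    return 1 if score >= 3 else 0

def influences_output(program, protected_values, other_inputs):
    # Qualitative non-interference check: if varying only the protected
    # input never changes the output, no information flows from the
    # protected attribute to the result (no disparate treatment).
    return any(
        len({program(x, a) for a in protected_values}) > 1
        for x in other_inputs
    )

def demographic_parity_gap(program, protected_values, other_inputs):
    # P(output = 1 | protected = a) under a uniform distribution over the
    # remaining inputs, compared across protected-attribute values.
    rates = []
    for a in protected_values:
        outcomes = [program(x, a) for x in other_inputs]
        rates.append(sum(outcomes) / len(outcomes))
    return max(rates) - min(rates)
```

For the toy program above, `influences_output` returns `True` (age flips the decision at `income_bracket = 1`), and the demographic parity gap is 0.25. Real analyses of this kind use quantitative information-flow tools rather than enumeration, which scales exponentially in the number of inputs.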
Related papers
- Sequential Classification of Misinformation [4.557963624437785]
A social media platform may want to distinguish between "true", "partly-true", and "false" information.
In this paper, we consider the problem of online multiclass classification of information flow.
We propose two detection algorithms; the first is based on the well-known multiple sequential probability ratio test, while the second is a novel graph neural network based sequential decision algorithm.
arXiv Detail & Related papers (2024-09-07T15:43:19Z)
- Mathematical Algorithm Design for Deep Learning under Societal and Judicial Constraints: The Algorithmic Transparency Requirement [65.26723285209853]
We derive a framework to analyze whether a transparent implementation in a computing model is feasible.
Based on previous results, we find that Blum-Shub-Smale Machines have the potential to establish trustworthy solvers for inverse problems.
arXiv Detail & Related papers (2024-01-18T15:32:38Z)
- When Fair Classification Meets Noisy Protected Attributes [8.362098382773265]
This study is the first head-to-head study of fair classification algorithms to compare attribute-reliant, noise-tolerant and attribute-blind algorithms.
Our study reveals that attribute-blind and noise-tolerant fair classifiers can potentially achieve a similar level of performance as attribute-reliant algorithms.
arXiv Detail & Related papers (2023-07-06T21:38:18Z)
- dugMatting: Decomposed-Uncertainty-Guided Matting [83.71273621169404]
We propose a decomposed-uncertainty-guided matting algorithm, which explores the explicitly decomposed uncertainties to efficiently and effectively improve the results.
The proposed matting framework relieves the requirement for users to determine the interaction areas by using simple and efficient labeling.
arXiv Detail & Related papers (2023-06-02T11:19:50Z)
- Justice in Misinformation Detection Systems: An Analysis of Algorithms, Stakeholders, and Potential Harms [2.5372245630249632]
We show how injustices materialize for stakeholders across three algorithmic stages in the misinformation detection pipeline.
This framework should help researchers, policymakers, and practitioners reason about potential harms or risks associated with algorithmic misinformation detection.
arXiv Detail & Related papers (2022-04-28T15:31:13Z)
- Fine-Grained Neural Network Explanation by Identifying Input Features with Predictive Information [53.28701922632817]
We propose a method to identify features with predictive information in the input domain.
The core idea of our method is leveraging a bottleneck on the input that only lets input features associated with predictive latent features pass through.
arXiv Detail & Related papers (2021-10-04T14:13:42Z)
- A Bayesian Framework for Information-Theoretic Probing [51.98576673620385]
We argue that probing should be seen as approximating a mutual information.
This led to the rather unintuitive conclusion that representations encode exactly the same information about a target task as the original sentences.
This paper proposes a new framework to measure what we term Bayesian mutual information.
arXiv Detail & Related papers (2021-09-08T18:08:36Z)
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
- Fairness Perception from a Network-Centric Perspective [12.261689483681147]
We introduce a novel yet intuitive function known as network-centric fairness perception.
We show how the function can be extended to a group fairness metric known as fairness visibility.
We illustrate a potential pitfall of the fairness visibility measure that can be exploited to mislead individuals into perceiving that the algorithmic decisions are fair.
arXiv Detail & Related papers (2020-10-07T06:35:03Z)
- Uncertainty Quantification for Deep Context-Aware Mobile Activity Recognition and Unknown Context Discovery [85.36948722680822]
We develop a context-aware mixture of deep models termed the alpha-beta network.
We improve accuracy and F score by 10% by identifying high-level contexts.
To ensure training stability, we use clustering-based pre-training on both public and in-house datasets.
arXiv Detail & Related papers (2020-03-03T19:35:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.