Proxy Discrimination After Students for Fair Admissions
- URL: http://arxiv.org/abs/2501.03946v2
- Date: Mon, 21 Apr 2025 12:23:54 GMT
- Title: Proxy Discrimination After Students for Fair Admissions
- Authors: Frank Fagan
- Abstract summary: The Article develops a test for regulating the use of variables that proxy for race and other protected classes and classifications. It suggests that lawmakers can develop caps on permissible proxy power over time, as courts and algorithm builders learn more about the power of variables.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Today, there is no clear legal test for regulating the use of variables that proxy for race and other protected classes and classifications. This Article develops such a test. Decision tools that use proxies are narrowly tailored when they exhibit the weakest total proxy power. The test is necessarily comparative. Thus, if two algorithms predict loan repayment or university academic performance with identical accuracy rates, but one uses zip code and the other does not, then the second algorithm can be said to have deployed a more equitable means for achieving the same result as the first algorithm. Scenarios in which two algorithms produce comparable but non-identical results present a greater challenge. This Article suggests that lawmakers can develop caps on permissible proxy power over time, as courts and algorithm builders learn more about the power of variables. Finally, the Article considers who should bear the burden of producing less discriminatory alternatives and suggests plaintiffs remain in the best position to keep defendants honest - so long as testing data is made available.
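The Article's comparative test can be sketched in code. The Article does not prescribe a formula for "total proxy power," so the measure below - the squared Pearson correlation between each input feature and a protected-class indicator, summed over features - is purely an illustrative assumption, as are the `income` and `zip_code` variables and their values.

```python
# Illustrative sketch of the comparative "total proxy power" test.
# Assumption: proxy power of a feature = squared Pearson correlation
# with a protected-class indicator; the Article leaves the exact
# measure to courts and algorithm builders.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    if vx == 0 or vy == 0:
        return 0.0
    return cov / (vx * vy) ** 0.5

def total_proxy_power(features, protected):
    """Sum of squared correlations between each feature and protected status."""
    return sum(pearson_r(col, protected) ** 2 for col in features)

# Hypothetical data: two models with identical accuracy, where model A
# uses zip code (a strong proxy) and model B does not.
protected = [1, 1, 0, 0, 1, 0]           # protected-class indicator
income    = [40, 42, 55, 60, 41, 58]     # weaker proxy
zip_code  = [1, 1, 0, 0, 1, 0]           # strong proxy (tracks protected class)

power_a = total_proxy_power([income, zip_code], protected)  # model A's features
power_b = total_proxy_power([income], protected)            # model B's features
assert power_b < power_a  # model B exhibits weaker total proxy power
```

Under the Article's test, model B is the more narrowly tailored of the two, since it achieves the same (stipulated) accuracy with the weaker total proxy power.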
Related papers
- Simple and Provable Scaling Laws for the Test-Time Compute of Large Language Models [70.07661254213181]
We propose two principled algorithms for the test-time compute of large language models.
We prove theoretically that the failure probability of one algorithm decays to zero exponentially as its test-time compute grows.
arXiv Detail & Related papers (2024-11-29T05:29:47Z) - Bidirectional Decoding: Improving Action Chunking via Guided Test-Time Sampling [51.38330727868982]
We show how action chunking impacts the divergence between a learner and a demonstrator.
We propose Bidirectional Decoding (BID), a test-time inference algorithm that bridges action chunking with closed-loop adaptation.
Our method boosts the performance of two state-of-the-art generative policies across seven simulation benchmarks and two real-world tasks.
arXiv Detail & Related papers (2024-08-30T15:39:34Z) - Bi-discriminator Domain Adversarial Neural Networks with Class-Level Gradient Alignment [87.8301166955305]
We propose a novel bi-discriminator domain adversarial neural network with class-level gradient alignment.
BACG resorts to gradient signals and second-order probability estimation for better alignment of domain distributions.
In addition, inspired by contrastive learning, we develop a memory bank-based variant, i.e. Fast-BACG, which can greatly shorten the training process.
arXiv Detail & Related papers (2023-10-21T09:53:17Z) - Designing Equitable Algorithms [1.9006392177894293]
Predictive algorithms are now used to help distribute a large share of our society's resources and sanctions.
These algorithms can improve the efficiency and equity of decision-making.
But they could entrench and exacerbate disparities, particularly along racial, ethnic, and gender lines.
arXiv Detail & Related papers (2023-02-17T22:00:44Z) - Exploiting Contrastive Learning and Numerical Evidence for Confusing Legal Judgment Prediction [46.71918729837462]
Given the fact description text of a legal case, legal judgment prediction aims to predict the case's charge, law article and penalty term.
Previous studies fail to distinguish different classification errors with a standard cross-entropy classification loss.
We propose a moco-based supervised contrastive learning to learn distinguishable representations.
We further enhance the representation of the fact description with extracted crime amounts which are encoded by a pre-trained numeracy model.
arXiv Detail & Related papers (2022-11-15T15:53:56Z) - Stochastic Differentially Private and Fair Learning [7.971065005161566]
We provide the first differentially private algorithm for fair learning that is guaranteed to converge.
Our framework is flexible enough to permit different fairness notions, including demographic parity and equalized odds.
Our algorithm can be applied to non-binary classification tasks with multiple (non-binary) sensitive attributes.
arXiv Detail & Related papers (2022-10-17T06:54:57Z) - Weak Proxies are Sufficient and Preferable for Fairness with Missing Sensitive Attributes [25.730297492625507]
We develop an algorithm that is able to measure fairness (provably) accurately with only three properly identified proxies.
Our results imply a set of practical guidelines for practitioners on how to use proxies properly.
arXiv Detail & Related papers (2022-10-06T19:25:29Z) - Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.
For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping a space- and time-complexity independent of the time horizon.
arXiv Detail & Related papers (2021-09-13T18:10:52Z) - Efficient First-Order Contextual Bandits: Prediction, Allocation, and Triangular Discrimination [82.52105963476703]
A recurring theme in statistical learning, online learning, and beyond is that faster convergence rates are possible for problems with low noise.
First-order guarantees are relatively well understood in statistical and online learning.
We show that the logarithmic loss and an information-theoretic quantity called the triangular discrimination play a fundamental role in obtaining first-order guarantees.
arXiv Detail & Related papers (2021-07-05T19:20:34Z) - Affirmative Algorithms: The Legal Grounds for Fairness as Awareness [0.0]
We discuss how such approaches will likely be deemed "algorithmic affirmative action."
We argue that the government-contracting cases offer an alternative grounding for algorithmic fairness.
We call for more research at the intersection of algorithmic fairness and causal inference to ensure that bias mitigation is tailored to specific causes and mechanisms of bias.
arXiv Detail & Related papers (2020-12-18T22:53:20Z) - Pursuing Open-Source Development of Predictive Algorithms: The Case of Criminal Sentencing Algorithms [0.0]
We argue that open-source algorithm development should be the standard in highly consequential contexts.
We suggest these issues are exacerbated by the proprietary and expensive nature of virtually all widely used criminal sentencing algorithms.
arXiv Detail & Related papers (2020-11-12T14:53:43Z) - Fewer is More: A Deep Graph Metric Learning Perspective Using Fewer Proxies [65.92826041406802]
We propose a Proxy-based deep Graph Metric Learning approach from the perspective of graph classification.
Multiple global proxies are leveraged to collectively approximate the original data points for each class.
We design a novel reverse label propagation algorithm, by which the neighbor relationships are adjusted according to ground-truth labels.
arXiv Detail & Related papers (2020-10-26T14:52:42Z) - Transparency Tools for Fairness in AI (Luskin) [12.158766675246337]
We propose new tools for assessing and correcting fairness and bias in AI algorithms.
The three tools are: - A new definition of fairness called "controlled fairness" with respect to choices of protected features and filters.
These tools are useful for understanding various dimensions of bias, and in practice the algorithms are effective at starkly reducing a given observed bias when tested on new data.
arXiv Detail & Related papers (2020-07-09T00:21:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.