Algorithmic neutrality
- URL: http://arxiv.org/abs/2303.05103v3
- Date: Sun, 9 Jun 2024 12:14:12 GMT
- Title: Algorithmic neutrality
- Authors: Milo Phillips-Brown
- Abstract summary: I investigate algorithmic neutrality, tackling three questions: What is algorithmic neutrality? Is it possible? And when we have it in mind, what can we learn about algorithmic bias?
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Algorithms wield increasing control over our lives: over the jobs we get, the loans we're granted, the information we see online. Algorithms can and often do wield their power in a biased way, and much work has been devoted to algorithmic bias. In contrast, algorithmic neutrality has been largely neglected. I investigate algorithmic neutrality, tackling three questions: What is algorithmic neutrality? Is it possible? And when we have it in mind, what can we learn about algorithmic bias?
Related papers
- Arbitrary state creation via controlled measurement [49.494595696663524]
The algorithm creates an arbitrary $n$-qubit pure quantum superposition state to a precision of $m$ decimals.
The algorithm uses one-qubit rotations, Hadamard transformations and C-NOT operations with multi-qubit controls.
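As a loose illustration of those building blocks (not the paper's construction), here is a minimal NumPy sketch that prepares a target one-qubit superposition with a Y-rotation and entangles a second qubit with a CNOT:

```python
import numpy as np

# The gate set named above: one-qubit Y-rotation, Hadamard, and CNOT.
def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

ket0 = np.array([1.0, 0.0])
print(H @ ket0)                             # equal superposition (|0>+|1>)/sqrt(2)

# Prepare sqrt(0.3)|0> + sqrt(0.7)|1>, then entangle a second qubit.
theta = 2 * np.arccos(np.sqrt(0.3))
state = CNOT @ np.kron(ry(theta) @ ket0, ket0)
print(np.round(state, 4))                   # amplitudes on |00>, |01>, |10>, |11>
```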
arXiv Detail & Related papers (2025-04-13T07:23:50Z)
- Why Algorithms Remain Unjust: Power Structures Surrounding Algorithmic Activity [0.0]
Reformists have failed to curtail algorithmic injustice because they ignore the power structure surrounding algorithms.
I argue that the reason Algorithmic Activity is unequal, undemocratic, and unsustainable is that the power structure shaping it is one of economic empowerment rather than social empowerment.
arXiv Detail & Related papers (2024-05-28T17:49:24Z)
- Generalized quantum Arimoto-Blahut algorithm and its application to quantum information bottleneck [55.22418739014892]
We generalize the quantum Arimoto-Blahut algorithm by Ramakrishnan et al.
We apply our algorithm to the quantum information bottleneck with three quantum systems.
Our numerical analysis shows that our algorithm outperforms the algorithm of Ramakrishnan et al.
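For orientation, the classical Arimoto-Blahut iteration that the paper generalizes can be sketched in a few lines of NumPy (classical channel capacity only; the quantum generalization is not shown):

```python
import numpy as np

def blahut_arimoto(W, iters=500, eps=1e-12):
    """Classical Blahut-Arimoto iteration for the capacity of a discrete
    memoryless channel W, where W[x, y] = P(y | x)."""
    n_inputs = W.shape[0]
    p = np.full(n_inputs, 1.0 / n_inputs)            # input distribution
    for _ in range(iters):
        q = p @ W                                    # induced output distribution
        d = np.sum(W * np.log((W + eps) / (q + eps)), axis=1)  # D(W_x || q)
        p = p * np.exp(d)                            # multiplicative update
        p /= p.sum()
    return float(np.sum(p * d) / np.log(2))          # capacity estimate in bits

# Binary symmetric channel with crossover 0.1: capacity ~ 1 - H(0.1) ~ 0.531 bits
bsc = np.array([[0.9, 0.1], [0.1, 0.9]])
print(blahut_arimoto(bsc))
```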
arXiv Detail & Related papers (2023-11-19T00:06:11Z)
- Algorithms, Incentives, and Democracy [0.0]
We show how optimal classification by an algorithm designer can affect the distribution of behavior in a population.
We then look at the effect of democratizing the rewards and punishments, or stakes, attached to algorithmic classification, to consider how a society can potentially stem (or facilitate!) predatory classification.
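A hypothetical toy model (not the paper's game) of how a classification threshold and the stakes attached to it can reshape a population's observed features:

```python
import numpy as np

rng = np.random.default_rng(3)
q = rng.normal(0, 1, 10000)        # true feature distribution in the population
theta = 1.0                        # designer's classification threshold
stakes = 2.0                       # reward for being classified positive
cost_per_unit = 1.5                # agents' cost of inflating their feature

# Agents just below the threshold inflate their feature exactly when the
# stakes exceed the cost of the needed inflation.
gap = theta - q
games = (gap > 0) & (gap * cost_per_unit < stakes)
x = np.where(games, theta, q)      # observed, possibly manipulated feature
print("positive rate before:", np.mean(q >= theta))
print("positive rate after :", np.mean(x >= theta))
# Raising or lowering `stakes` shifts how much of the population ends up
# pooled exactly at the threshold.
```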
arXiv Detail & Related papers (2023-07-05T14:22:01Z)
- Nearly Optimal Algorithms for Linear Contextual Bandits with Adversarial Corruptions [98.75618795470524]
We study the linear contextual bandit problem in the presence of adversarial corruption.
We propose a new algorithm based on the principle of optimism in the face of uncertainty.
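The optimism principle itself can be illustrated with a plain LinUCB-style sketch; the paper's corruption-robust weighting is not reproduced here:

```python
import numpy as np

class LinUCB:
    """Minimal optimism-in-the-face-of-uncertainty sketch for linear
    contextual bandits (plain LinUCB, not the corruption-robust variant)."""
    def __init__(self, dim, alpha=1.0, reg=1.0):
        self.alpha = alpha                  # width of the confidence bonus
        self.A = reg * np.eye(dim)          # regularized Gram matrix
        self.b = np.zeros(dim)              # reward-weighted feature sum

    def choose(self, contexts):
        """contexts: (n_arms, dim) array; returns the optimistic arm."""
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b              # ridge estimate of the parameter
        ucb = contexts @ theta + self.alpha * np.sqrt(
            np.einsum('ad,dk,ak->a', contexts, A_inv, contexts))
        return int(np.argmax(ucb))          # act as if the optimistic model holds

    def update(self, x, reward):
        self.A += np.outer(x, x)
        self.b += reward * x
```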
arXiv Detail & Related papers (2022-05-13T17:58:58Z)
- How to transfer algorithmic reasoning knowledge to learn new algorithms? [23.335939830754747]
We investigate how we can use algorithms for which we have access to the execution trace to learn to solve similar tasks for which we do not.
We create a dataset including 9 algorithms and 3 different graph types.
We validate this empirically and show how multi-task learning, rather than direct transfer, can be used to achieve the transfer of algorithmic reasoning knowledge.
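A minimal sketch of such a multi-task setup, assuming a shared encoder with one head per algorithm (the architecture is illustrative, not the paper's model):

```python
import torch
import torch.nn as nn

class MultiTaskReasoner(nn.Module):
    """Shared encoder over execution-trace states, one head per algorithm,
    so reasoning learned on traced algorithms can transfer to the others."""
    def __init__(self, in_dim, hidden, n_algorithms, out_dim):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.heads = nn.ModuleList(
            nn.Linear(hidden, out_dim) for _ in range(n_algorithms))

    def forward(self, x, task_id):
        return self.heads[task_id](self.encoder(x))

model = MultiTaskReasoner(in_dim=16, hidden=64, n_algorithms=9, out_dim=4)
# Training sums losses across tasks so the shared encoder sees all of them:
# loss = sum(criterion(model(x_t, t), y_t) for t, (x_t, y_t) in enumerate(batches))
```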
arXiv Detail & Related papers (2021-10-26T22:14:47Z)
- Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.
For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping a space- and time-complexity independent of the time horizon.
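A sketch of the bandit view with censored runtime feedback, assuming a fixed cutoff C (illustrative, not the paper's adapted algorithms):

```python
import numpy as np

class CensoredUCB:
    """Bandit-style online algorithm selection: pick the solver with the
    optimistic (lowest) runtime estimate; observed runtimes are censored
    at the cutoff C, so per-round state is O(#solvers), independent of T."""
    def __init__(self, n_solvers, cutoff):
        self.C = cutoff
        self.n = np.zeros(n_solvers)        # runs per solver
        self.s = np.zeros(n_solvers)        # summed (censored) runtimes

    def select(self, t):
        means = self.s / np.maximum(self.n, 1)
        bonus = self.C * np.sqrt(2 * np.log(t + 1) / np.maximum(self.n, 1))
        return int(np.argmin(means - bonus))  # optimism for a loss: lower bound

    def observe(self, k, runtime):
        self.n[k] += 1
        self.s[k] += min(runtime, self.C)     # censoring at the cutoff
```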
arXiv Detail & Related papers (2021-09-13T18:10:52Z)
- Algorithms are not neutral: Bias in collaborative filtering [0.0]
Discussions of algorithmic bias tend to focus on examples where either the data or the people building the algorithms are biased.
However, the algorithms themselves can also be a source of bias. This is illustrated with the example of collaborative filtering, which is known to suffer from popularity and homogenizing biases.
Popularity and homogenizing biases have the effect of further marginalizing the already marginal.
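A toy example of popularity bias in user-based collaborative filtering: co-interaction similarity pulls recommendations toward the item nearly everyone has interacted with (the data and scoring rule here are assumptions for illustration):

```python
import numpy as np

# Toy implicit-feedback matrix (users x items). Item 0 is "popular":
# almost everyone has interacted with it.
R = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [1, 0, 0, 1],
              [1, 1, 1, 0],
              [0, 0, 0, 1]])

def recommend(R, user):
    """User-based CF sketch: score items via interactions of similar users."""
    sims = R @ R[user]                      # co-interaction counts as similarity
    sims[user] = 0                          # exclude the user themself
    scores = (sims @ R).astype(float)       # similarity-weighted item scores
    scores[R[user] > 0] = -np.inf           # don't re-recommend seen items
    return int(np.argmax(scores))

# User 4 only liked the niche item 3, yet the popular item 0 wins:
print(recommend(R, 4))
```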
arXiv Detail & Related papers (2021-05-03T17:28:43Z)
- Towards Meta-Algorithm Selection [78.13985819417974]
Instance-specific algorithm selection (AS) deals with the automatic selection of an algorithm from a fixed set of candidates.
We show that meta-algorithm selection can indeed prove beneficial in some cases.
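One hypothetical way to frame meta-algorithm selection: per instance, delegate to whichever base selector performed best on nearby validation instances (a sketch, not the paper's method; `selectors`, `runtimes`, and the 10-neighbour rule are assumptions):

```python
import numpy as np

def meta_select(selectors, runtimes, X_val, X_new):
    """Meta-AS sketch. selectors: callables mapping instance features to an
    algorithm index; runtimes[i, a]: cost of algorithm a on validation
    instance i; returns one algorithm choice per new instance."""
    # Cost each selector would have incurred on each validation instance.
    sel_cost = np.array([[runtimes[i, s(x)] for s in selectors]
                         for i, x in enumerate(X_val)])
    picks = []
    for x in X_new:
        near = np.argsort(np.linalg.norm(X_val - x, axis=1))[:10]
        best = int(np.argmin(sel_cost[near].mean(axis=0)))  # best selector locally
        picks.append(selectors[best](x))                    # delegate to it
    return picks
```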
arXiv Detail & Related papers (2020-11-17T17:27:33Z)
- Corralling Stochastic Bandit Algorithms [54.10645564702416]
We show that the regret of the corralling algorithms is no worse than that of the best algorithm containing the arm with the highest reward.
We also show that the regret depends on the gap between the highest reward and the rewards of the other arms.
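A simplified corralling sketch: an exponential-weights master samples one base bandit per round and reweights it by importance-weighted reward (the paper's master update is more refined; the toy environment and base learners are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
means = np.array([0.3, 0.7])                 # toy Bernoulli arms

class EpsGreedy:
    """Toy base bandit corralled by the master."""
    def __init__(self, n_arms, eps):
        self.eps = eps
        self.n, self.s = np.zeros(n_arms), np.zeros(n_arms)
    def select(self):
        if rng.random() < self.eps:
            return int(rng.integers(len(self.n)))
        return int(np.argmax(self.s / np.maximum(self.n, 1)))
    def update(self, a, r):
        self.n[a] += 1; self.s[a] += r

base = [EpsGreedy(2, 0.1), EpsGreedy(2, 0.5)]
w, lr = np.zeros(len(base)), 0.05            # master log-weights, step size
for t in range(2000):
    p = np.exp(w - w.max()); p /= p.sum()
    i = rng.choice(len(base), p=p)           # master picks a base algorithm
    a = base[i].select()                     # that algorithm picks an arm
    r = float(rng.random() < means[a])       # Bernoulli reward
    base[i].update(a, r)
    w[i] += lr * r / p[i]                    # importance-weighted boost
print(np.round(p, 3))                        # master's final preference
```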
arXiv Detail & Related papers (2020-06-16T15:33:12Z)
- Underestimation Bias and Underfitting in Machine Learning [2.639737913330821]
Algorithmic bias in machine learning is often due to historic bias in the training data.
Sometimes the bias may be introduced (or at least exacerbated) by the algorithm itself.
In this paper we report on initial research to understand the factors that contribute to bias in classification algorithms.
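A small NumPy experiment in this spirit: an over-regularized (underfit) logistic regression predicts the minority class far less often than it occurs (the data and model here are assumptions, not the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
y = (rng.random(n) < 0.2).astype(float)       # 20% minority positives
x = y * 1.2 + rng.normal(0, 1.0, n)           # one weakly informative feature

def fit_predict(l2, steps=5000, lr=0.5):
    """Logistic regression by gradient descent; L2 on the slope only."""
    w0, w1 = 0.0, 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(w0 + w1 * x)))
        g0 = np.mean(p - y)
        g1 = np.mean((p - y) * x) + l2 * w1
        w0, w1 = w0 - lr * g0, w1 - lr * g1
    return 1 / (1 + np.exp(-(w0 + w1 * x)))

for l2 in (0.0, 5.0):
    rate = np.mean(fit_predict(l2) > 0.5)     # predicted-positive rate
    print(f"l2={l2}: predicted positives {rate:.3f} vs actual {y.mean():.3f}")
# The underfit (l2=5.0) model predicts the minority class far less often
# than it occurs: underestimation bias.
```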
arXiv Detail & Related papers (2020-05-18T20:01:56Z)
- Adversarial Online Learning with Changing Action Sets: Efficient Algorithms with Approximate Regret Bounds [48.312484940846]
We revisit the problem of online learning with sleeping experts/bandits.
In each time step, only a subset of the actions are available for the algorithm to choose from.
We give an algorithm that provides a no-approximate-regret guarantee for general sleeping expert/bandit problems.
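The availability-restricted setting can be sketched with exponential weights over the awake actions only (the shape of the problem, not the paper's no-approximate-regret algorithm; the loss model is a placeholder):

```python
import numpy as np

rng = np.random.default_rng(2)
base_loss = np.array([0.9, 0.7, 0.5, 0.3, 0.1])   # action 4 is best when awake
w, lr = np.zeros(5), 0.05                         # log-weights, learning rate

for t in range(5000):
    awake = rng.random(5) < 0.6                   # per-round availability mask
    if not awake.any():
        continue
    p = np.exp(w - w.max()) * awake               # exponential weights,
    p /= p.sum()                                  # restricted to awake actions
    a = rng.choice(5, p=p)
    loss = base_loss[a] * rng.random()            # bandit feedback for a only
    w[a] -= lr * loss / p[a]                      # importance-weighted update

final = np.exp(w - w.max()); final /= final.sum()
print(np.round(final, 3))                         # mass concentrates on action 4
```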
arXiv Detail & Related papers (2020-03-07T02:13:21Z)