Algorithmic Transparency and Manipulation
- URL: http://arxiv.org/abs/2311.13286v1
- Date: Wed, 22 Nov 2023 10:09:06 GMT
- Title: Algorithmic Transparency and Manipulation
- Authors: Michael Klenk
- Abstract summary: A series of recent papers raises worries about the manipulative potential of algorithmic transparency.
This paper draws attention to the indifference view of manipulation, which explains the manipulative potential of algorithmic transparency better than the vulnerability view does.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A series of recent papers raises worries about the manipulative potential of
algorithmic transparency. But while the concern is apt and relevant, it is
based on a fraught understanding of manipulation. Therefore, this paper draws
attention to the indifference view of manipulation, which explains better than
the vulnerability view why algorithmic transparency has manipulative potential.
The paper also raises pertinent research questions for future studies of
manipulation in the context of algorithmic transparency.
Related papers
- Establishing a leader in a pairwise comparisons method [0.2678472239880052]
We show two algorithms that can be used to launch a manipulation attack.
They allow for equating the weights of two selected alternatives in the pairwise comparison method and, consequently, choosing a leader.
arXiv Detail & Related papers (2024-03-21T23:42:00Z)
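The paper's two attack algorithms are not reproduced here, but a minimal sketch can illustrate the setting they target: priority weights derived from a reciprocal pairwise comparison matrix (computed here with the standard geometric mean method), where rescaling the comparisons involving one alternative equates its weight with the current leader's. The matrix values and the scaling factor are hypothetical.

```python
import numpy as np

def geometric_mean_weights(C):
    """Priority weights of a reciprocal pairwise comparison matrix
    via the geometric mean (row) method."""
    w = np.prod(C, axis=1) ** (1.0 / C.shape[0])
    return w / w.sum()

# Hypothetical 3-alternative comparison matrix: C[i, j] ~ w_i / w_j.
C = np.array([[1.0, 2.0, 4.0],
              [0.5, 1.0, 2.0],
              [0.25, 0.5, 1.0]])
print(geometric_mean_weights(C))  # [0.571, 0.286, 0.143]: alternative 0 leads

# Naive manipulation: rescale every comparison in favour of alternative 2
# (keeping the matrix reciprocal) so that its weight equals the leader's.
k = 2
C[k, :] *= 4.0
C[:, k] /= 4.0
C[k, k] = 1.0
print(geometric_mean_weights(C))  # [0.4, 0.2, 0.4]: alternatives 0 and 2 tie
```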
- Mathematical Algorithm Design for Deep Learning under Societal and Judicial Constraints: The Algorithmic Transparency Requirement [65.26723285209853]
We derive a framework to analyze whether a transparent implementation in a computing model is feasible.
Based on previous results, we find that Blum-Shub-Smale Machines have the potential to establish trustworthy solvers for inverse problems.
arXiv Detail & Related papers (2024-01-18T15:32:38Z)
- Adversarial Attacks on the Interpretation of Neuron Activation Maximization [70.5472799454224]
Activation-maximization approaches are used to interpret and analyze trained deep-learning models.
In this work, we consider the concept of an adversary manipulating a model for the purpose of deceiving the interpretation.
arXiv Detail & Related papers (2023-06-12T19:54:33Z)
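For context, a minimal sketch of vanilla activation maximization, the interpretation technique such an adversary would aim to deceive: gradient ascent on the input until it strongly activates a chosen unit. The toy model, unit index, and hyperparameters are illustrative assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

# Toy stand-in; in practice this is a trained model under inspection.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
for p in model.parameters():
    p.requires_grad_(False)  # only the input is optimized

def activation_maximization(model, unit, steps=200, lr=0.1):
    """Gradient ascent on a random input to maximize one output unit."""
    x = torch.randn(1, 3, 32, 32, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        (-model(x)[0, unit]).backward()  # ascend the unit's activation
        opt.step()
    return x.detach()

prototype = activation_maximization(model, unit=3)
```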
- Disagreement amongst counterfactual explanations: How transparency can be deceptive [0.0]
Counterfactual explanations are increasingly used as an Explainable Artificial Intelligence technique.
Not every algorithm creates uniform explanations for the same instance.
Ethical issues arise when malicious agents use this diversity to fairwash an unfair machine learning model.
arXiv Detail & Related papers (2023-04-25T09:15:37Z)
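A minimal sketch of the disagreement problem on assumed toy data: two counterfactual generators that differ only in their distance metric can return different, equally valid explanations for the same rejected instance. The decision rule and candidate pool below are hypothetical.

```python
import numpy as np

# Hypothetical classifier: accept if feature_0 + 2 * feature_1 > 10.
predict = lambda X: (X[:, 0] + 2 * X[:, 1] > 10).astype(int)

def nearest_counterfactual(x, candidates, norm):
    """Closest candidate with the opposite prediction; different norms
    stand in for different counterfactual-generation algorithms."""
    flipped = candidates[predict(candidates) != predict(x[None])[0]]
    dists = np.linalg.norm(flipped - x, ord=norm, axis=1)
    return flipped[np.argmin(dists)]

rng = np.random.default_rng(0)
pool = rng.uniform(0, 10, size=(500, 2))  # candidate feature vectors
x = np.array([2.0, 3.0])                  # rejected instance

cf_l1 = nearest_counterfactual(x, pool, norm=1)
cf_l2 = nearest_counterfactual(x, pool, norm=2)
print(cf_l1, cf_l2)  # two different explanations for the same instance
```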
- What and How of Machine Learning Transparency: Building Bespoke Explainability Tools with Interoperable Algorithmic Components [77.87794937143511]
This paper introduces a collection of hands-on training materials for explaining data-driven predictive models.
These resources cover the three core building blocks of this technique: interpretable representation composition, data sampling and explanation generation.
arXiv Detail & Related papers (2022-09-08T13:33:25Z)
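A minimal surrogate-explanation sketch of those three building blocks, under simplifying assumptions: the raw features serve as the interpretable representation, and the locality kernel and sample count are arbitrary choices rather than the materials' defaults.

```python
import numpy as np

def explain(black_box, x, n_samples=1000, scale=0.5, seed=0):
    """Sketch of the three building blocks: interpretable representation,
    data sampling, and explanation generation via a local linear model."""
    rng = np.random.default_rng(seed)
    # 1) interpretable representation: here simply the raw features
    # 2) data sampling: perturb the instance locally and query the model
    Z = x + rng.normal(scale=scale, size=(n_samples, x.size))
    y = black_box(Z)
    # 3) explanation generation: locality-weighted least squares
    w = np.exp(-np.sum((Z - x) ** 2, axis=1))
    A = np.hstack([Z, np.ones((n_samples, 1))])
    coef, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)
    return coef[:-1]  # per-feature local importance

f = lambda X: 3 * X[:, 0] - 2 * X[:, 1] + X[:, 0] * X[:, 1]
print(explain(f, np.array([1.0, 2.0])))  # roughly [5, -1], the local gradient
```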
- Learning Losses for Strategic Classification [5.812499828391904]
We take a learning theoretic perspective, focusing on the sample complexity needed to learn a good decision rule.
We analyse the sample complexity for a known graph of possible manipulations in terms of the complexity of the function class and the manipulation graph.
Using techniques from transfer learning theory, we define a similarity measure for manipulation graphs and show that learning outcomes are robust with respect to small changes in the manipulation graph.
arXiv Detail & Related papers (2022-03-25T02:26:16Z)
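The sample-complexity results themselves are not reproduced here, but a minimal sketch of the underlying setting may help: an agent responds strategically by moving along an edge of a known manipulation graph when that flips the decision in its favour. The feature tuples, graph, and acceptance rule below are hypothetical.

```python
def best_response(x, classifier, manipulation_graph):
    """An agent with true features x takes one edge of the manipulation
    graph if doing so flips the decision to 'accept'."""
    if classifier(x):
        return x                  # already accepted, no manipulation
    for x_prime in manipulation_graph.get(x, []):
        if classifier(x_prime):
            return x_prime        # a reachable, successful manipulation
    return x                      # no useful manipulation exists

# Hypothetical feature vectors and reachable manipulations.
graph = {
    ("low_income", "short_history"): [("low_income", "long_history")],
    ("low_income", "long_history"): [("high_income", "long_history")],
}
accept = lambda x: x == ("high_income", "long_history")

print(best_response(("low_income", "short_history"), accept, graph))
# -> unchanged: the accepting node is two hops away, out of reach
```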
- Towards Generalizable and Robust Face Manipulation Detection via Bag-of-local-feature [55.47546606878931]
We propose a novel bag-of-local-feature method for face manipulation detection, which improves generalization ability and robustness.
Specifically, we extend Transformers with a bag-of-features approach to encode inter-patch relationships, allowing the model to learn local forgery features without explicit supervision.
arXiv Detail & Related papers (2021-03-14T12:50:48Z)
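This is not the paper's architecture, only a generic sketch of the idea under stated assumptions: image patches become tokens, a Transformer encoder models inter-patch relationships, and a per-patch head scores local features alongside an image-level decision. All dimensions and the head design are illustrative.

```python
import torch
import torch.nn as nn

class LocalFeatureDetector(nn.Module):
    """Generic sketch (not the paper's model): patch tokens, a Transformer
    encoder for inter-patch relationships, local and global heads."""
    def __init__(self, patch=16, dim=256, heads=4, layers=4):
        super().__init__()
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)  # patchify
        block = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(block, num_layers=layers)
        self.local_head = nn.Linear(dim, 2)   # per-patch real/fake logits
        self.global_head = nn.Linear(dim, 2)  # image-level decision

    def forward(self, x):                                  # x: (B, 3, H, W)
        tokens = self.embed(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        tokens = self.encoder(tokens)                      # inter-patch relations
        return self.global_head(tokens.mean(dim=1)), self.local_head(tokens)

global_logits, local_logits = LocalFeatureDetector()(torch.randn(2, 3, 64, 64))
```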
- Dimensions of Transparency in NLP Applications [64.16277166331298]
Broader transparency in descriptions of and communication regarding AI systems is widely considered desirable.
Previous work has suggested a trade-off: greater system transparency can increase user confusion.
arXiv Detail & Related papers (2021-01-02T11:46:17Z)
- MailLeak: Obfuscation-Robust Character Extraction Using Transfer Learning [10.097647847497116]
The presented method is an example of a potential threat to current postal services.
This paper both analyzes the efficiency of the given algorithm and suggests countermeasures to prevent such threats from occurring.
arXiv Detail & Related papers (2020-12-22T01:14:28Z)
- Algorithmic Transparency with Strategic Users [9.289838852590732]
We show that, in some cases, even the predictive power of machine learning algorithms may increase if the firm makes them transparent.
arXiv Detail & Related papers (2020-08-21T03:10:42Z)
- Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.