Politics of Adversarial Machine Learning
- URL: http://arxiv.org/abs/2002.05648v3
- Date: Sun, 26 Apr 2020 04:59:52 GMT
- Title: Politics of Adversarial Machine Learning
- Authors: Kendra Albert, Jonathon Penney, Bruce Schneier, Ram Shankar Siva Kumar
- Abstract summary: Adversarial machine-learning attacks and defenses have political dimensions.
They enable or foreclose certain options for both the subjects of the machine learning systems and for those who deploy them.
We show how defenses against adversarial attacks can be used to suppress dissent and limit attempts to investigate machine learning systems.
- Score: 0.7837881800517111
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In addition to their security properties, adversarial machine-learning
attacks and defenses have political dimensions. They enable or foreclose
certain options for both the subjects of the machine learning systems and for
those who deploy them, creating risks for civil liberties and human rights. In
this paper, we draw on insights from science and technology studies,
anthropology, and human rights literature, to inform how defenses against
adversarial attacks can be used to suppress dissent and limit attempts to
investigate machine learning systems. To make this concrete, we use real-world
examples of how attacks such as perturbation, model inversion, or membership
inference can be used for socially desirable ends. Although the predictions of
this analysis may seem dire, there is hope. Efforts to address human rights
concerns in the commercial spyware industry provide guidance for similar
measures to ensure ML systems serve democratic, not authoritarian, ends.
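To ground the term "perturbation attack" used above, the following is a minimal sketch of an untargeted FGSM-style perturbation (Goodfellow et al.), assuming a PyTorch image classifier; the function name, epsilon value, and pixel range are illustrative choices, not details from the paper.

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, epsilon=0.03):
        """Return a copy of x with a small FGSM-style adversarial perturbation added."""
        # Illustrative sketch: `model`, `x`, `y`, and `epsilon` are assumptions, not the paper's setup.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)         # loss the attacker wants to increase
        grad, = torch.autograd.grad(loss, x_adv)        # gradient of the loss w.r.t. the input
        x_adv = x_adv.detach() + epsilon * grad.sign()  # one signed-gradient step
        return x_adv.clamp(0.0, 1.0)                    # keep inputs in a valid pixel range

In the paper's framing, the same mechanics can cut both ways: a protester might use such a perturbation to evade automated face recognition, while a system operator might treat any such probing as an attack to be suppressed.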
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It draws on insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Evaluating the Vulnerabilities in ML systems in terms of adversarial attacks [0.0]
New adversarial attack methods may pose challenges to current deep learning cyber defense systems.
The authors explore the consequences of vulnerabilities in AI systems.
It is important to train AI systems appropriately during the testing phase to get them ready for broader use.
arXiv Detail & Related papers (2023-08-24T16:46:01Z)
- Learning to Defend by Attacking (and Vice-Versa): Transfer of Learning in Cybersecurity Games [1.14219428942199]
We present a novel model of human decision-making inspired by Instance-Based Learning Theory, Theory of Mind, and Transfer of Learning.
The model learns from both roles in a security scenario, defender and attacker, and predicts the opponent's beliefs, intentions, and actions.
Results from simulation experiments demonstrate the potential usefulness of cognitively inspired models of agents trained in attack and defense roles.
arXiv Detail & Related papers (2023-06-03T17:51:04Z)
- Attacks in Adversarial Machine Learning: A Systematic Survey from the Life-cycle Perspective [69.25513235556635]
Adversarial machine learning (AML) studies the adversarial phenomenon of machine learning, in which models can be made to produce predictions that are inconsistent with, or unexpected by, humans.
Several paradigms have recently been developed to explore this adversarial phenomenon at different stages of a machine learning system's life cycle.
We propose a unified mathematical framework covering existing attack paradigms.
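For reference, the evasion (test-time) attack that such frameworks typically generalize is often written as the constrained optimization below; this is the standard formulation from the literature, not necessarily the survey's own notation.

    \max_{\|\delta\|_p \le \epsilon} \; L\bigl(f_\theta(x + \delta),\, y\bigr)

Here x is an input with label y, f_theta is the model, L is a loss function, and epsilon bounds the size of the perturbation delta under an l_p norm.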
arXiv Detail & Related papers (2023-02-19T02:12:21Z)
- Measuring Equality in Machine Learning Security Defenses: A Case Study in Speech Recognition [56.69875958980474]
This work considers approaches to defending learned systems and how security defenses result in performance inequities across different sub-populations.
We find that many proposed methods can cause direct harm, such as false rejections and unequal benefits from robustness training.
We present a comparison of equality between two rejection-based defenses: randomized smoothing and neural rejection, finding randomized smoothing more equitable due to the sampling mechanism for minority groups.
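As a rough illustration of one of the two defenses being compared, here is a minimal sketch of prediction-with-rejection via randomized smoothing (in the spirit of Cohen et al., 2019), assuming a PyTorch classifier; the sample count, noise level, and abstention threshold are illustrative, not the settings evaluated in the paper.

    import torch

    def smoothed_predict(model, x, n_samples=100, sigma=0.25, abstain_margin=0.6):
        """Majority vote over Gaussian-noised copies of x; return -1 (abstain) if no class dominates."""
        # Illustrative sketch: all parameters are assumptions, not the paper's evaluation settings.
        with torch.no_grad():
            noisy = x.unsqueeze(0) + sigma * torch.randn(n_samples, *x.shape)
            votes = model(noisy).argmax(dim=1)       # predicted class for each noisy copy
            counts = torch.bincount(votes)
            top_class = int(counts.argmax())
            if counts[top_class].item() / n_samples < abstain_margin:
                return -1                            # reject: the vote is not decisive enough
            return top_class

The rejection step is where the equity question arises, since abstention rates can differ across sub-populations.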
arXiv Detail & Related papers (2023-02-17T16:19:26Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Machine Learning Featurizations for AI Hacking of Political Systems [0.0]
In the recent essay "The Coming AI Hackers," Schneier proposed a future application of artificial intelligences to discover, manipulate, and exploit vulnerabilities of social, economic, and political systems.
This work advances the concept by applying machine learning theory to it, hypothesizing some possible "featurization" frameworks for AI hacking.
We develop graph and sequence data representations that would enable the application of a range of deep learning models to predict attributes and outcomes of political systems.
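To make "graph data representations" concrete, below is a hypothetical sketch of encoding a tiny political system as an attributed graph using networkx; all node names, attributes, and relations are invented for illustration and are not taken from the paper.

    import networkx as nx
    import numpy as np

    # Hypothetical actors and relationships in a toy political system.
    G = nx.DiGraph()
    G.add_node("legislator_A", role="legislator", seniority=3)
    G.add_node("committee_X", role="committee", seniority=0)
    G.add_node("donor_Z", role="donor", seniority=0)
    G.add_edge("legislator_A", "committee_X", relation="member_of")
    G.add_edge("donor_Z", "legislator_A", relation="contributes_to")

    # One simple featurization: adjacency structure plus per-node attributes,
    # which a graph neural network or sequence model could then consume.
    adjacency = nx.to_numpy_array(G)
    node_features = np.array([G.nodes[n]["seniority"] for n in G.nodes])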
arXiv Detail & Related papers (2021-10-08T16:51:31Z)
- Automating Privilege Escalation with Deep Reinforcement Learning [71.87228372303453]
In this work, we exemplify the potential threat of malicious actors using deep reinforcement learning to train automated agents.
We present an agent that uses a state-of-the-art reinforcement learning algorithm to perform local privilege escalation.
Our agent is usable for generating realistic attack sensor data for training and evaluating intrusion detection systems.
arXiv Detail & Related papers (2021-10-04T12:20:46Z)
- Adversarial for Good? How the Adversarial ML Community's Values Impede Socially Beneficial Uses of Attacks [1.2664869982542892]
Adversarial machine learning (ML) attacks have the potential to be used "for good".
However, most research on adversarial ML has not engaged in developing tools for resistance against ML systems.
arXiv Detail & Related papers (2021-07-11T13:51:52Z)
- Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)
- State of the Art in Fair ML: From Moral Philosophy and Legislation to Fair Classifiers [1.3300455020806103]
Machine learning is becoming an ever-present part of our lives, as many decisions are made by machine learning algorithms.
These decisions are often unfair and discriminate against protected groups on the basis of race or gender.
This work gives an introduction to discrimination, the legislative foundations to counter it, and strategies to detect and prevent such behavior in machine learning algorithms.
arXiv Detail & Related papers (2018-11-20T12:03:55Z)