Machine Learning Featurizations for AI Hacking of Political Systems
- URL: http://arxiv.org/abs/2110.09231v1
- Date: Fri, 8 Oct 2021 16:51:31 GMT
- Title: Machine Learning Featurizations for AI Hacking of Political Systems
- Authors: Nathan E Sanders, Bruce Schneier
- Abstract summary: In the recent essay "The Coming AI Hackers," Schneier proposed a future application of artificial intelligences to discover, manipulate, and exploit vulnerabilities of social, economic, and political systems.
This work advances the concept by applying machine learning theory to it, hypothesizing some possible "featurization" frameworks for AI hacking.
We develop graph and sequence data representations that would enable the application of a range of deep learning models to predict attributes and outcomes of political systems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: What would the inputs be to a machine whose output is the destabilization of
a robust democracy, or whose emanations could disrupt the political power of
nations? In the recent essay "The Coming AI Hackers," Schneier (2021) proposed
a future application of artificial intelligences to discover, manipulate, and
exploit vulnerabilities of social, economic, and political systems at speeds
far greater than humans' ability to recognize and respond to such threats. This
work advances the concept by applying machine learning theory to it,
hypothesizing some possible "featurization" (input specification and
transformation) frameworks for AI hacking. Focusing on the political domain, we
develop graph and sequence data representations that would enable the
application of a range of deep learning models to predict attributes and
outcomes of political systems. We explore possible data models, datasets,
predictive tasks, and actionable applications associated with each framework.
We speculate about the likely practical impact and feasibility of such models,
and conclude by discussing their ethical implications.
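The listing stops at the abstract, so as a rough illustration only: the sketch below shows what a graph featurization of a political system might look like in Python, using networkx. Every node type, edge relation, and feature field here is a hypothetical stand-in; the paper's actual data models are not reproduced on this page.
```python
# A minimal, hypothetical sketch of the "graph featurization" idea: encode a
# toy legislative system as an attributed directed graph, then flatten each
# node into a fixed-length numeric vector that a downstream model (e.g. a
# graph neural network or a plain classifier) could consume. Node types,
# edge relations, and feature fields are illustrative assumptions, not the
# paper's actual data model.
import networkx as nx

G = nx.DiGraph()

# Nodes: legislators and a bill, each with a small attribute dictionary.
G.add_node("rep_1", kind="legislator", party=0, seniority=4)
G.add_node("rep_2", kind="legislator", party=1, seniority=12)
G.add_node("bill_A", kind="bill", topic=3, stage=1)

# Edges: relations between actors and legislation, with signed weights
# for support (+1.0) versus opposition (-1.0).
G.add_edge("rep_1", "bill_A", relation="sponsor", weight=1.0)
G.add_edge("rep_2", "bill_A", relation="vote", weight=-1.0)

def node_features(g: nx.DiGraph, node: str) -> list[float]:
    """Flatten a node's attributes plus simple structural statistics
    into a fixed-length numeric feature vector."""
    attrs = g.nodes[node]
    return [
        float(attrs.get("party", -1)),  # -1 marks non-legislator nodes
        float(attrs.get("seniority", 0)),
        float(g.in_degree(node)),       # how much attention a node receives
        float(g.out_degree(node)),      # how active a node is
    ]

features = {n: node_features(G, n) for n in G.nodes}
print(features)  # per-node vectors for a downstream predictive model
```
A sequence framing would instead order events such as bill introductions, amendments, and votes as tokens for a recurrent or transformer-style model; the same hedge applies there.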
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z) - AI and Social Theory [0.0]
We sketch a programme for AI-driven social theory, starting by defining what we mean by artificial intelligence (AI).
We then lay out our model for how AI based models can draw on the growing availability of digital data to help test the validity of different social theories based on their predictive power.
arXiv Detail & Related papers (2024-07-07T12:26:16Z) - Predictable Artificial Intelligence [77.1127726638209]
This paper introduces the ideas and challenges of Predictable AI.
It explores the ways in which we can anticipate key validity indicators of present and future AI ecosystems.
We argue that achieving predictability is crucial for fostering trust, liability, control, alignment and safety of AI ecosystems.
arXiv Detail & Related papers (2023-10-09T21:36:21Z) - AI-Generated Images as Data Source: The Dawn of Synthetic Era [61.879821573066216]
Generative AI has unlocked the potential to create synthetic images that closely resemble real-world photographs.
This paper explores the innovative concept of harnessing these AI-generated images as new data sources.
In contrast to real data, AI-generated data exhibit remarkable advantages, including unmatched abundance and scalability.
arXiv Detail & Related papers (2023-10-03T06:55:19Z) - Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - Aligning Artificial Intelligence with Humans through Public Policy [0.0]
This essay outlines research on AI systems that learn structures in policy data that can be leveraged for downstream tasks.
We believe this represents the "comprehension" phase of AI and policy, but leveraging policy as a key source of human values to align AI requires "understanding" policy.
arXiv Detail & Related papers (2022-06-25T21:31:14Z) - Machines and Influence [0.0]
This paper surveys AI capabilities and tackles the issue of machine influence.
We introduce a Matrix of Machine Influence to frame and navigate the adversarial applications of AI.
We suggest that better regulation and management of information systems can more optimally offset the risks of AI.
arXiv Detail & Related papers (2021-11-26T08:58:09Z) - The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z) - Next Wave Artificial Intelligence: Robust, Explainable, Adaptable, Ethical, and Accountable [5.4138734778206]
Deep neural networks have led to many successes and new capabilities in computer vision, speech recognition, language processing, game-playing, and robotics.
A concerning limitation is that even the most successful of today's AI systems suffer from brittleness.
AI systems can also absorb biases (based on gender, race, or other factors) from their training data and further magnify these biases in their subsequent decision-making.
arXiv Detail & Related papers (2020-12-11T00:50:09Z) - Politics of Adversarial Machine Learning [0.7837881800517111]
Adversarial machine-learning attacks and defenses have political dimensions.
They enable or foreclose certain options for both the subjects of the machine learning systems and for those who deploy them.
We show how defenses against adversarial attacks can be used to suppress dissent and limit attempts to investigate machine learning systems.
arXiv Detail & Related papers (2020-02-01T01:15:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.