Machines and Influence
- URL: http://arxiv.org/abs/2111.13365v1
- Date: Fri, 26 Nov 2021 08:58:09 GMT
- Title: Machines and Influence
- Authors: Shashank Yadav
- Abstract summary: This paper surveys AI capabilities and examines where society stands with respect to them, in the context of political security in digital societies.
We introduce a Matrix of Machine Influence to frame and navigate the adversarial applications of AI.
We suggest that better regulation and management of information systems can more optimally offset the risks of AI.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Policymakers face the broad challenge of how to view AI capabilities today and where society stands in terms of those capabilities. This paper surveys AI capabilities and tackles this very issue, exploring it in the context of political security in digital societies. We introduce a Matrix of Machine Influence to frame and navigate the adversarial applications of AI, and further extend the ideas of Information Management to better understand the deployment of contemporary AI systems as part of a complex information system. Providing a comprehensive review of man-machine interactions in our networked society and political systems, we suggest that better regulation and management of information systems can more effectively offset the risks of AI and utilise the emerging capabilities which these systems offer to policymakers and political institutions across the world. Hopefully this long essay will stimulate further debates and discussions over these ideas, and prove to be a useful contribution towards governing the future of AI.
Related papers
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Sociotechnical Implications of Generative Artificial Intelligence for Information Access [4.3867169221012645]
Generative AI technologies may enable new ways to access information and improve the effectiveness of existing information retrieval systems.
We present an overview of some of the systemic consequences and risks of employing generative AI in the context of information access.
arXiv Detail & Related papers (2024-05-19T17:04:39Z)
- Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Multi-AI Complex Systems in Humanitarian Response [0.0]
We describe how multi-AI systems can arise, even in relatively simple real-world humanitarian response scenarios, and lead to potentially emergent, erratic, and erroneous behavior.
This paper is designed to be a first exposition on this topic in the field of humanitarian response, raising awareness, exploring the possible landscape of this domain, and providing a starting point for future work within the wider community.
arXiv Detail & Related papers (2022-08-24T03:01:21Z)
- Aligning Artificial Intelligence with Humans through Public Policy [0.0]
This essay outlines research on AI systems that learn structures in policy data that can be leveraged for downstream tasks.
We believe this represents the "comprehension" phase of AI and policy, but leveraging policy as a key source of human values to align AI requires "understanding" policy.
arXiv Detail & Related papers (2022-06-25T21:31:14Z)
- Why and How Governments Should Monitor AI Development [0.22395966459254707]
We outline a proposal for improving the governance of artificial intelligence (AI) by investing in government capacity to systematically measure and monitor the capabilities and impacts of AI systems.
It would also create infrastructure that could rapidly identify potential threats or harms that could occur as a consequence of changes in the AI ecosystem.
We discuss this proposal in detail, outlining what specific things governments could focus on measuring and monitoring, and the kinds of benefits this would generate for policymaking.
arXiv Detail & Related papers (2021-08-28T19:41:22Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- AI Governance for Businesses [2.072259480917207]
AI governance aims at leveraging AI through the effective use of data and the minimization of AI-related cost and risk.
This work views AI products as systems, where key functionality is delivered by machine learning (ML) models leveraging (training) data.
Our framework decomposes AI governance into governance of data, (ML) models and (AI) systems along four dimensions.
arXiv Detail & Related papers (2020-11-20T22:31:37Z)
- The Short Anthropological Guide to the Study of Ethical AI [91.3755431537592]
This short guide serves as an introduction both to AI ethics and to social science and anthropological perspectives on the development of AI.
It aims to provide those unfamiliar with the field with an insight into the societal impact of AI systems and how, in turn, these systems can lead us to rethink how our world operates.
arXiv Detail & Related papers (2020-10-07T12:25:03Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)