Spotlight Session on Autonomous Weapons Systems at ICRC 34th International Conference
- URL: http://arxiv.org/abs/2411.08890v1
- Date: Mon, 28 Oct 2024 05:36:41 GMT
- Title: Spotlight Session on Autonomous Weapons Systems at ICRC 34th International Conference
- Authors: Susannah Kate Conroy
- Abstract summary: Governments are responsible for setting requirements for weapons systems.
They are responsible for driving ethicality as well as lethality.
The UN can advocate for compliance with IHL, human rights, and the human-centred use of weapons systems.
- Score: 0.0
- License:
- Abstract: Autonomous weapons systems (AWS) change the way humans make decisions, the effects of those decisions, and who is accountable for the decisions made. We must remain vigilant, informed and human-centred as we deliberate on norms for their development, use and justification. Ways to enhance compliance with international humanitarian law (IHL) include: training weapons decision makers in IHL; developing best practice in weapons reviews, including requirements for industry to ensure that any new weapon, means or method of warfare is capable of being used lawfully; developing human-centred test and evaluation methods; investing in digital infrastructure to increase knowledge of the civilian environment in a conflict and its dynamics; investing in research on the real effects and consequences of civilian harms for the achievement of military and political objectives; improving secure communications between stakeholders in a conflict; and, finally, upskilling governments and NGOs in what is technically achievable with emerging technologies, so that they can contribute to system requirements, test and evaluation protocols, and operational rules of use and engagement. Governments are responsible for setting requirements for weapons systems; they are responsible for driving ethicality as well as lethality. Governments can require systems to be made and used to better protect civilians and protected objects. The UN can advocate for compliance with IHL, human rights, and the human-centred use of weapons systems, and for improved mechanisms to monitor and trace military decision making, including decisions affected by autonomous functionality.
Related papers
- Balancing Power and Ethics: A Framework for Addressing Human Rights Concerns in Military AI [0.0]
We propose a three-stage framework for evaluating human rights concerns in the design, deployment, and use of military AI.
By this framework, we aim to balance the advantages of AI in military operations with the need to protect human rights.
arXiv Detail & Related papers (2024-11-10T02:27:01Z)
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Mind the Gap: Foundation Models and the Covert Proliferation of Military Intelligence, Surveillance, and Targeting [0.0]
We show that the inability to prevent personally identifiable information from contributing to ISTAR capabilities may lead to the use and proliferation of military AI technologies by adversaries.
We conclude that in order to secure military systems and limit the proliferation of AI armaments, it may be necessary to insulate military AI systems and personal data from commercial foundation models.
arXiv Detail & Related papers (2024-10-18T19:04:30Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Commercial AI, Conflict, and Moral Responsibility: A theoretical analysis and practical approach to the moral responsibilities associated with dual-use AI technology [2.050345881732981]
We argue that stakeholders involved in the AI system lifecycle are morally responsible for uses of their systems that are reasonably foreseeable.
We present three technically feasible actions that developers of civilian AIs can take to potentially mitigate their moral responsibility.
arXiv Detail & Related papers (2024-01-30T18:09:45Z)
- Meaningful human command: Advance control directives as a method to enable moral and legal responsibility for autonomous weapons systems [0.0]
This chapter considers whether humans can authorise actions for autonomous systems by the prior establishment of a contract.
The medico-legal precedent of 'advance care directives' suggests how this time-consuming deliberative process may be achievable outside real time.
The chapter proposes 'autonomy command', scaffolded and legitimised through the construction of advance control directives (ACDs) ahead of the deployment of autonomous systems.
arXiv Detail & Related papers (2023-03-13T02:12:51Z)
- Bad, mad, and cooked: Moral responsibility for civilian harms in human-AI military teams [0.0]
This chapter explores moral responsibility for civilian harms by human-artificial intelligence (AI) teams.
Increasingly, militaries may 'cook' their good apples by putting them in untenable decision-making environments.
This chapter offers new mechanisms to map out conditions for moral responsibility in human-AI teams.
arXiv Detail & Related papers (2022-10-31T10:18:20Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Trustworthy AI Inference Systems: An Industry Research View [58.000323504158054]
We provide an industry research view for approaching the design, deployment, and operation of trustworthy AI inference systems.
We highlight opportunities and challenges in AI systems using trusted execution environments.
We outline areas of further development that require the global collective attention of industry, academia, and government researchers.
arXiv Detail & Related papers (2020-08-10T23:05:55Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
- Hacia los Comités de Ética en Inteligencia Artificial (Towards Ethics Committees in Artificial Intelligence) [68.8204255655161]
It is a priority to create such rules and the specialized organizations that can oversee compliance with them.
This work proposes the creation, at universities, of ethics committees or commissions specialized in Artificial Intelligence.
arXiv Detail & Related papers (2020-02-11T23:48:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality or accuracy of the information presented and is not responsible for any consequences arising from its use.