A Call to Arms: AI Should be Critical for Social Media Analysis of Conflict Zones
- URL: http://arxiv.org/abs/2311.00810v1
- Date: Wed, 1 Nov 2023 19:49:32 GMT
- Title: A Call to Arms: AI Should be Critical for Social Media Analysis of Conflict Zones
- Authors: Afia Abedin, Abdul Bais, Cody Buntain, Laura Courchesne, Brian McQuinn, Matthew E. Taylor, Muhib Ullah
- Abstract summary: This paper presents preliminary, transdisciplinary work using computer vision to identify specific weapon systems and the insignias of the armed groups using them.
There is potential to not only track how weapons are distributed through networks of armed units but also to track which types of weapons are being used by the different types of state and non-state military actors in Ukraine.
Such a system could ultimately be used to understand conflicts in real-time, including where humanitarian and medical aid is most needed.
- Score: 5.479613761646247
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The massive proliferation of social media data represents a transformative
moment in conflict studies. This data can provide unique insights into the
spread and use of weaponry, but the scale and types of data are problematic for
traditional open-source intelligence. This paper presents preliminary,
transdisciplinary work using computer vision to identify specific weapon
systems and the insignias of the armed groups using them. There is potential to
not only track how weapons are distributed through networks of armed units but
also to track which types of weapons are being used by the different types of
state and non-state military actors in Ukraine. Such a system could ultimately
be used to understand conflicts in real-time, including where humanitarian and
medical aid is most needed. We believe that using AI to help automate such
processes should be a high-priority goal for our community, with near-term
real-world payoffs.
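The paper does not include an implementation, but a minimal sketch of the kind of pipeline it describes might look as follows: a COCO-pretrained detector whose head is resized to a weapon-and-insignia label set. The class names and checkpoint path here are hypothetical placeholders, not the authors' taxonomy or model.

```python
# Minimal sketch (not the authors' system): a fine-tuned object detector scanning
# social-media images for weapon systems and unit insignias. CLASSES and the
# checkpoint path are hypothetical placeholders.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.transforms.functional import to_tensor
from PIL import Image

CLASSES = ["background", "small_arm", "artillery", "armored_vehicle", "unit_insignia"]

def load_detector(checkpoint_path=None):
    # Start from a COCO-pretrained Faster R-CNN and resize its head to our label set.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, len(CLASSES))
    if checkpoint_path:  # fine-tuned weights for the weapon/insignia task
        model.load_state_dict(torch.load(checkpoint_path, map_location="cpu"))
    return model.eval()

@torch.no_grad()
def detect(model, image_path, score_threshold=0.7):
    image = to_tensor(Image.open(image_path).convert("RGB"))
    out = model([image])[0]  # torchvision detectors take a list of images
    return [
        {"label": CLASSES[int(l)], "score": float(s), "box": [float(v) for v in b]}
        for l, s, b in zip(out["labels"], out["scores"], out["boxes"])
        if float(s) >= score_threshold
    ]
```

Detections aggregated over many posts could then be joined with post metadata (time, and geolocation where available) to trace how weapons move between armed units, as the abstract envisions.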
Related papers
- Mind the Gap: Foundation Models and the Covert Proliferation of Military Intelligence, Surveillance, and Targeting [0.0]
We show that the inability to prevent personally identifiable information from contributing to ISTAR capabilities may lead to the use and proliferation of military AI technologies by adversaries.
We conclude that in order to secure military systems and limit the proliferation of AI armaments, it may be necessary to insulate military AI systems and personal data from commercial foundation models.
arXiv Detail & Related papers (2024-10-18T19:04:30Z)
- The GPT Dilemma: Foundation Models and the Shadow of Dual-Use [0.0]
This paper examines the dual-use challenges of foundation models and the risks they pose for international security.
The paper analyzes four critical factors in the development cycle of foundation models: model inputs, capabilities, system use cases, and system deployment.
Using the Intermediate-Range Nuclear Forces (INF) Treaty as a case study, this paper proposes several strategies to mitigate the associated risks.
arXiv Detail & Related papers (2024-07-29T22:36:27Z)
- A Technological Perspective on Misuse of Available AI [41.94295877935867]
Potential malicious misuse of civilian artificial intelligence (AI) poses serious threats to security on a national and international level.
We show how existing, openly available AI technology could be misused.
We develop three exemplary use cases of potentially misused AI that threaten political, digital and physical security.
arXiv Detail & Related papers (2024-03-22T16:30:58Z)
- Killer Apps: Low-Speed, Large-Scale AI Weapons [2.2899177316144943]
Artificial Intelligence (AI) and Machine Learning (ML) advancements present new challenges and opportunities in warfare and security.
This paper explores the concept of AI weapons, their deployment, detection, and potential countermeasures.
arXiv Detail & Related papers (2024-01-14T12:09:40Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is broad consensus on the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Artificial Intelligence and Arms Control [0.0]
The idea of AI-enabled military systems has motivated some activists to call for restrictions or bans on some weapon systems.
This paper argues that while a ban on all military applications of AI is likely infeasible, there may be specific cases where arms control is possible.
arXiv Detail & Related papers (2022-10-22T16:09:41Z)
- Automating Privilege Escalation with Deep Reinforcement Learning [71.87228372303453]
In this work, we exemplify the potential threat of malicious actors using deep reinforcement learning to train automated agents.
We present an agent that uses a state-of-the-art reinforcement learning algorithm to perform local privilege escalation.
Our agent can be used to generate realistic attack sensor data for training and evaluating intrusion detection systems; a schematic sketch of such a training loop follows this entry.
arXiv Detail & Related papers (2021-10-04T12:20:46Z)
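As a rough illustration of how reinforcement learning applies to this task: the sketch below runs a tabular Q-learning loop against a dummy, fully simulated host, with no actual exploitation logic. The environment, its actions, and the reward are invented placeholders; the paper itself uses a state-of-the-art deep RL algorithm, not this toy.

```python
# Schematic only: tabular Q-learning over a dummy simulated host. The
# state/action space and reward are invented placeholders for illustration.
import random
from collections import defaultdict

ACTIONS = ["enumerate_host", "try_misconfig", "try_known_cve", "noop"]

class DummyHostEnv:
    """Toy simulator: escalation 'succeeds' only after enumeration, then the right action."""
    def reset(self):
        self.enumerated = False
        return "start"
    def step(self, action):
        if action == "enumerate_host":
            self.enumerated = True
            return "enumerated", 0.0, False
        if self.enumerated and action == "try_misconfig":
            return "root", 1.0, True  # simulated success, episode ends
        return ("enumerated" if self.enumerated else "start"), -0.01, False

def train(episodes=500, eps=0.1, alpha=0.5, gamma=0.9):
    q = defaultdict(float)
    env = DummyHostEnv()
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # epsilon-greedy action selection
            action = (random.choice(ACTIONS) if random.random() < eps
                      else max(ACTIONS, key=lambda a: q[(state, a)]))
            nxt, reward, done = env.step(action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
    return q
```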
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
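As a toy illustration of the "perturb the system, not the data" idea (a simplification, not the paper's formal construction): one extra neuron can be grafted onto a trained classifier so that it fires only near a chosen trigger input, flipping the prediction there while leaving behavior elsewhere nearly unchanged.

```python
# Toy illustration of a stealth attack: the *model* is modified, not the inputs.
# A single grafted ReLU neuron fires only when the input projects near a chosen
# trigger and adds a large logit to an attacker-chosen class. This simplifies
# the idea; it is not the paper's construction.
import torch
import torch.nn as nn

class StealthWrapped(nn.Module):
    def __init__(self, base, trigger, target_class, width=0.01, boost=50.0):
        super().__init__()
        self.base, self.target = base, target_class
        t = trigger.flatten()
        self.w = t / t.norm()           # neuron axis points at the trigger
        self.b = -(self.w @ t) + width  # activation is ~0 except near the trigger
        self.gain = boost / width       # at the trigger, adds exactly `boost` to the logit

    def forward(self, x):
        logits = self.base(x).clone()
        act = torch.relu(x.flatten(1) @ self.w + self.b)  # ~0 away from the trigger
        logits[:, self.target] += self.gain * act
        return logits
```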
- On Adversarial Examples and Stealth Attacks in Artificial Intelligence Systems [62.997667081978825]
We present a formal framework for assessing and analyzing two classes of malevolent action towards generic Artificial Intelligence (AI) systems.
The first class involves adversarial examples and concerns the introduction of small perturbations of the input data that cause misclassification.
The second class, introduced here for the first time and named stealth attacks, involves small perturbations to the AI system itself.
arXiv Detail & Related papers (2020-04-09T10:56:53Z)
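The first class is conventionally illustrated by the fast gradient sign method; the sketch below is that textbook example, not this paper's formal framework.

```python
# Standard FGSM illustration of adversarial examples: a small, bounded input
# perturbation that increases the classifier's loss and can flip the prediction.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon=0.03):
    """Return x perturbed by at most epsilon per pixel (L-infinity bound)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()    # step in the loss-increasing direction
    return x_adv.clamp(0.0, 1.0).detach()  # stay in the valid image range
```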
- Firearm Detection and Segmentation Using an Ensemble of Semantic Neural Networks [62.997667081978825]
We present a weapon detection system based on an ensemble of semantic Convolutional Neural Networks.
A set of simpler neural networks dedicated to specific tasks requires fewer computational resources and can be trained in parallel.
The system's overall output, obtained by aggregating the outputs of the individual networks, can be tuned by the user to trade off false positives against false negatives; a minimal sketch of this aggregation appears below.
arXiv Detail & Related papers (2020-02-11T13:58:16Z)
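This sketch averages per-network firearm scores and compares the result against a user-chosen threshold; the per-network callable interface is invented for illustration, not the paper's API.

```python
# Minimal sketch of tunable ensemble aggregation: average the member networks'
# firearm scores and compare against a user-chosen threshold. Lowering the
# threshold trades more false positives for fewer false negatives.
from typing import Callable, List

Network = Callable[[object], float]  # image -> firearm probability in [0, 1]

def ensemble_score(image, networks: List[Network]) -> float:
    return sum(net(image) for net in networks) / len(networks)

def flag_firearm(image, networks: List[Network], threshold: float = 0.5) -> bool:
    # `threshold` is the user-facing knob for the false-positive/false-negative trade-off
    return ensemble_score(image, networks) >= threshold
```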
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.