Spatio-temporal extreme event modeling of terror insurgencies
- URL: http://arxiv.org/abs/2110.08363v1
- Date: Fri, 15 Oct 2021 20:50:24 GMT
- Title: Spatio-temporal extreme event modeling of terror insurgencies
- Authors: Lekha Patel, Lyndsay Shand, J. Derek Tucker, Gabriel Huerta
- Abstract summary: This paper introduces a self-exciting marked spatio-temporal model for attacks, with an inhomogeneous baseline intensity written as a function of covariates and a triggering intensity modeled with a Gaussian Process prior.
By inferring the parameters of this model, we highlight specific space-time areas in which attacks are likely to occur.
We show that our model is able to predict the intensity of future attacks for 2019-2021.
- Score: 0.7874708385247353
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Extreme events with potentially deadly outcomes, such as those organized by
terror groups, are highly unpredictable in nature and an imminent threat to
society. In particular, quantifying the likelihood of a terror attack occurring
in an arbitrary space-time region, together with its relative societal risk, would
facilitate informed measures that strengthen national security. This
paper introduces a novel self-exciting marked spatio-temporal model for attacks
whose inhomogeneous baseline intensity is written as a function of covariates.
Its triggering intensity is succinctly modeled with a Gaussian Process prior
distribution to flexibly capture intricate spatio-temporal dependencies between
an arbitrary attack and previous terror events. By inferring the parameters of
this model, we highlight specific space-time areas in which attacks are likely
to occur. Furthermore, by measuring the outcome of an attack in terms of the
number of casualties it produces, we introduce a novel mixture distribution for
the number of casualties. This distribution flexibly handles low and high
numbers of casualties and the discrete nature of the data through a
Generalized Zipf distribution. We rely on a customized Markov chain Monte
Carlo (MCMC) method to estimate the model parameters. We illustrate the
methodology with data from the open source Global Terrorism Database (GTD) that
correspond to attacks in Afghanistan from 2013-2018. We show that our model is
able to predict the intensity of future attacks for 2019-2021 while considering
various covariates of interest such as population density, number of regional
languages spoken, and the density of population supporting the opposing
government.
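As a point of reference, the sketch below shows the generic form that a marked self-exciting (Hawkes-type) spatio-temporal model takes, with a covariate-driven baseline and a triggered component. The notation, the log-GP parameterization of the triggering function, and the two-component casualty mixture are illustrative assumptions, not the paper's exact specification.
```latex
% Schematic only: generic marked self-exciting intensity with a
% covariate-driven baseline and a GP-modeled triggering function.
% All symbols are assumed for illustration, not copied from the paper.
\lambda(s, t \mid \mathcal{H}_t)
  = \underbrace{\mu\big(s, t;\, \mathbf{x}(s, t)\big)}_{\text{baseline from covariates}}
  + \underbrace{\sum_{i \,:\, t_i < t} g\big(s - s_i,\, t - t_i\big)}_{\text{triggering by past attacks}},
\qquad
\log g \sim \mathcal{GP}\big(m(\cdot),\, k(\cdot, \cdot)\big).

% One plausible form of the casualty-count mixture: a component for small
% counts plus a heavy-tailed Generalized Zipf component for mass-casualty
% events (the paper's exact components may differ).
P(Y = y) = \pi\, f_{\mathrm{low}}(y) + (1 - \pi)\, f_{\mathrm{GZipf}}(y),
\qquad y \in \{0, 1, 2, \dots\}.
```
Here $\mathbf{x}(s,t)$ stands in for covariates such as population density, and placing a GP prior on $\log g$ is one common way to keep the triggering function positive while letting its spatio-temporal decay be learned from the data.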
Related papers
- Metaheuristic approaches to the placement of suicide bomber detectors [0.0]
Suicide bombing is an infamous form of terrorism that is becoming increasingly prevalent in the current era of global terror warfare.
We consider the case of targeted attacks of this kind, and the use of detectors distributed over the area under threat as a protective countermeasure.
To this end, different metaheuristic approaches based on local search and on population-based search are considered and benchmarked against a powerful greedy algorithm.
arXiv Detail & Related papers (2024-05-28T21:14:01Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- Understanding the Vulnerability of Skeleton-based Human Activity Recognition via Black-box Attack [53.032801921915436]
Human Activity Recognition (HAR) has been employed in a wide range of applications, e.g. self-driving cars.
Recently, the robustness of skeleton-based HAR methods has been questioned due to their vulnerability to adversarial attacks.
We show such threats exist, even when the attacker only has access to the input/output of the model.
We propose the very first black-box adversarial attack approach in skeleton-based HAR called BASAR.
arXiv Detail & Related papers (2022-11-21T09:51:28Z)
- An Ordinal Latent Variable Model of Conflict Intensity [59.49424978353101]
The Goldstein scale is a widely-used expert-based measure that scores events on a conflictual-cooperative scale.
This paper takes a latent variable-based approach to measuring conflict intensity.
arXiv Detail & Related papers (2022-10-08T08:59:17Z)
- Formulating Robustness Against Unforeseen Attacks [34.302333899025044]
This paper focuses on the scenario where there is a mismatch in the threat model assumed by the defense during training.
We ask the question: if the learner trains against a specific "source" threat model, when can we expect robustness to generalize to a stronger unknown "target" threat model during test-time?
We propose adversarial training with variation regularization (AT-VR) which reduces variation of the feature extractor across the source threat model during training.
arXiv Detail & Related papers (2022-04-28T21:03:36Z)
- Predicting Terrorist Attacks in the United States using Localized News Data [13.164412455321907]
Terrorism is a major problem worldwide, causing thousands of fatalities and billions of dollars in damage every year.
We present a set of machine learning models that learn from localized news data in order to predict whether a terrorist attack will occur on a given calendar date and in a given state.
The best model, a Random Forest that learns from a novel variable-length moving average representation of the feature space, scores above 0.667 on four of the five states that were impacted most by terrorism between 2015 and 2018.
arXiv Detail & Related papers (2022-01-12T03:56:15Z)
- Formalizing and Estimating Distribution Inference Risks [11.650381752104298]
We propose a formal and general definition of property inference attacks.
Our results show that inexpensive attacks are as effective as expensive meta-classifier attacks.
We extend the state-of-the-art property inference attack to work on convolutional neural networks.
arXiv Detail & Related papers (2021-09-13T14:54:39Z)
- Adversarial Robustness through the Lens of Causality [105.51753064807014]
The adversarial vulnerability of deep neural networks has attracted significant attention in machine learning.
We propose to incorporate causality into mitigating adversarial vulnerability.
Our method can be seen as the first attempt to leverage causality for mitigating adversarial vulnerability.
arXiv Detail & Related papers (2021-06-11T06:55:02Z)
- Towards Adversarial Patch Analysis and Certified Defense against Crowd Counting [61.99564267735242]
Crowd counting has drawn much attention due to its importance in safety-critical surveillance systems.
Recent studies have demonstrated that deep neural network (DNN) methods are vulnerable to adversarial attacks.
We propose a robust attack strategy called Adversarial Patch Attack with Momentum to evaluate the robustness of crowd counting models.
arXiv Detail & Related papers (2021-04-22T05:10:55Z)
- Learning to Attack: Towards Textual Adversarial Attacking in Real-world Situations [81.82518920087175]
Adversarial attacking aims to fool deep neural networks with adversarial examples.
We propose a reinforcement learning based attack model, which can learn from attack history and launch attacks more efficiently.
arXiv Detail & Related papers (2020-09-19T09:12:24Z)
- Extending Adversarial Attacks to Produce Adversarial Class Probability Distributions [1.439518478021091]
Our results demonstrate that we can closely approximate any probability distribution for the classes while maintaining a high fooling rate.
arXiv Detail & Related papers (2020-04-14T09:39:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.