Predicting Terrorist Attacks in the United States using Localized News
Data
- URL: http://arxiv.org/abs/2201.04292v2
- Date: Fri, 14 Jan 2022 01:49:31 GMT
- Title: Predicting Terrorist Attacks in the United States using Localized News
Data
- Authors: Steven J. Krieg, Christian W. Smith, Rusha Chatterjee, Nitesh V.
Chawla
- Abstract summary: Terrorism is a major problem worldwide, causing thousands of fatalities and billions of dollars in damage every year.
We present a set of machine learning models that learn from localized news data in order to predict whether a terrorist attack will occur on a given calendar date and in a given state.
The best model--a Random Forest that learns from a novel variable-length moving average representation of the feature space--achieves area under the receiver operating characteristic (AUROC) scores $>.667$ on four of the five states that were impacted most by terrorism between 2015 and 2018.
- Score: 13.164412455321907
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Terrorism is a major problem worldwide, causing thousands of fatalities and
billions of dollars in damage every year. Toward the end of better
understanding and mitigating these attacks, we present a set of machine
learning models that learn from localized news data in order to predict whether
a terrorist attack will occur on a given calendar date and in a given state.
The best model--a Random Forest that learns from a novel variable-length moving
average representation of the feature space--achieves area under the receiver
operating characteristic scores $> .667$ on four of the five states that were
impacted most by terrorism between 2015 and 2018. Our key findings include that
modeling terrorism as a set of independent events, rather than as a continuous
process, is a fruitful approach--especially when the events are sparse and
dissimilar. Additionally, our results highlight the need for localized models
that account for differences between locations. From a machine learning
perspective, we found that the Random Forest model outperformed several deep
models on our multimodal, noisy, and imbalanced data set, thus demonstrating
the efficacy of our novel feature representation method in such a context. We
also show that its predictions are relatively robust to time gaps between
attacks and observed characteristics of the attacks. Finally, we analyze
factors that limit model performance, which include a noisy feature space and
small amount of available data. These contributions provide an important
foundation for the use of machine learning in efforts against terrorism in the
United States and beyond.
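The paper itself does not include code here, so the following is only a minimal sketch, under assumed inputs, of the kind of pipeline the abstract describes for a single state: news-derived daily feature vectors are expanded with trailing moving averages over several window lengths (a rough stand-in for the variable-length moving average representation), and a Random Forest is evaluated by AUROC on a chronologically held-out period. All names and values (X_daily, the window lengths, the synthetic data) are hypothetical, and scikit-learn is an assumed implementation choice rather than the authors' code.
```python
# Minimal sketch (not the authors' code): moving-average features + Random Forest,
# evaluated by AUROC for a single state. Inputs below are hypothetical:
#   X_daily: (n_days, n_features) array of news-derived counts for one state
#   y:       (n_days,) binary labels, 1 if an attack occurred on that calendar date
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

def moving_average_features(X_daily, windows=(3, 7, 14, 30)):
    """Stack trailing moving averages of several window lengths.

    A rough stand-in for the paper's variable-length moving average
    representation: each day is described by its raw features plus the
    average over the preceding w days, for each window length w.
    """
    n_days = X_daily.shape[0]
    blocks = [X_daily]
    for w in windows:
        avg = np.zeros_like(X_daily, dtype=float)
        for t in range(n_days):
            lo = max(0, t - w + 1)
            avg[t] = X_daily[lo:t + 1].mean(axis=0)
        blocks.append(avg)
    return np.hstack(blocks)  # shape: (n_days, n_features * (1 + len(windows)))

# Hypothetical data: four years of daily features and sparse attack labels.
rng = np.random.default_rng(0)
X_daily = rng.poisson(2.0, size=(1461, 20)).astype(float)
y = (rng.random(1461) < 0.02).astype(int)  # sparse positive class, as in the paper

X = moving_average_features(X_daily)

# Chronological split: train on the first three years, test on the last year.
split = 1096
clf = RandomForestClassifier(n_estimators=500, class_weight="balanced", random_state=0)
clf.fit(X[:split], y[:split])
scores = clf.predict_proba(X[split:])[:, 1]
print("AUROC:", roc_auc_score(y[split:], scores))
```
In practice the feature matrix would come from localized news data for each state and the labels from recorded attacks; the chronological split mirrors the forward-looking prediction task the abstract describes.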
Related papers
- Targeted Attacks on Timeseries Forecasting [0.6719751155411076]
We propose a novel formulation of Directional, Amplitudinal, and Temporal targeted adversarial attacks on time series forecasting models.
These targeted attacks create a specific impact on the amplitude and direction of the output prediction.
Our experimental results show how targeted attacks on time series models are viable and are more powerful in terms of statistical similarity.
arXiv Detail & Related papers (2023-01-27T06:09:42Z) - Holistic risk assessment of inference attacks in machine learning [4.493526120297708]
This paper performs a holistic risk assessment of different inference attacks against Machine Learning models.
A total of 12 target models using three model architectures, including AlexNet, ResNet18 and Simple CNN, are trained on four datasets.
arXiv Detail & Related papers (2022-12-15T08:14:18Z) - Membership-Doctor: Comprehensive Assessment of Membership Inference
Against Machine Learning Models [11.842337448801066]
We present a large-scale measurement of different membership inference attacks and defenses.
We find that some assumptions of the threat model, such as same-architecture and same-distribution between shadow and target models, are unnecessary.
We are also the first to execute attacks on real-world data collected from the Internet, rather than laboratory datasets.
arXiv Detail & Related papers (2022-08-22T17:00:53Z) - Spatio-temporal extreme event modeling of terror insurgencies [0.7874708385247353]
This paper introduces a self-exciting model for attacks whose inhomogeneous intensity is written as a triggering function.
By inferring the parameters of this model, we highlight specific space-time areas in which attacks are likely to occur.
We show that our model is able to predict the intensity of future attacks for 2019-2021 (a minimal sketch of this kind of self-exciting intensity appears after this list).
arXiv Detail & Related papers (2021-10-15T20:50:24Z) - Towards Adversarial Patch Analysis and Certified Defense against Crowd
Counting [61.99564267735242]
Crowd counting has drawn much attention due to its importance in safety-critical surveillance systems.
Recent studies have demonstrated that deep neural network (DNN) methods are vulnerable to adversarial attacks.
We propose a robust attack strategy called Adversarial Patch Attack with Momentum to evaluate the robustness of crowd counting models.
arXiv Detail & Related papers (2021-04-22T05:10:55Z) - ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine
Learning Models [64.03398193325572]
Inference attacks against Machine Learning (ML) models allow adversaries to learn about training data, model parameters, etc.
We concentrate on four attacks - namely, membership inference, model inversion, attribute inference, and model stealing.
Our analysis relies on modular, re-usable software, ML-Doctor, which enables ML model owners to assess the risks of deploying their models.
arXiv Detail & Related papers (2021-02-04T11:35:13Z) - Knowledge-Enriched Distributional Model Inversion Attacks [49.43828150561947]
Model inversion (MI) attacks are aimed at reconstructing training data from model parameters.
We present a novel inversion-specific GAN that can better distill knowledge useful for performing attacks on private models from public data.
Our experiments show that the combination of these techniques can significantly boost the success rate of the state-of-the-art MI attacks by 150%.
arXiv Detail & Related papers (2020-10-08T16:20:48Z) - Learning to Attack: Towards Textual Adversarial Attacking in Real-world
Situations [81.82518920087175]
Adversarial attacking aims to fool deep neural networks with adversarial examples.
We propose a reinforcement learning based attack model, which can learn from attack history and launch attacks more efficiently.
arXiv Detail & Related papers (2020-09-19T09:12:24Z) - Sampling Attacks: Amplification of Membership Inference Attacks by
Repeated Queries [74.59376038272661]
We introduce the sampling attack, a novel membership inference technique that, unlike other standard membership adversaries, works under the severe restriction of having no access to the scores of the victim model.
We show that a victim model that only publishes the labels is still susceptible to sampling attacks and the adversary can recover up to 100% of its performance.
For defense, we choose differential privacy in the form of gradient perturbation during the training of the victim model as well as output perturbation at prediction time.
arXiv Detail & Related papers (2020-09-01T12:54:54Z) - Adversarial Fooling Beyond "Flipping the Label" [54.23547006072598]
CNNs show near-human or better-than-human performance in many critical tasks.
However, they remain vulnerable to adversarial attacks, which are potentially dangerous in real-life deployments.
We present a comprehensive analysis of several important adversarial attacks over a set of distinct CNN architectures.
arXiv Detail & Related papers (2020-04-27T13:21:03Z)
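For context on the self-exciting model mentioned in the spatio-temporal extreme event entry above, here is a minimal, temporal-only sketch of a Hawkes-style conditional intensity with an exponential triggering kernel. The parameters (mu, alpha, beta) and event times are illustrative assumptions, not values from that paper, which models space and time jointly.
```python
# Minimal sketch of a self-exciting (Hawkes-style) conditional intensity with an
# exponential triggering kernel; temporal only, illustrative parameters:
#   lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))
import numpy as np

def hawkes_intensity(t, event_times, mu=0.05, alpha=0.3, beta=0.8):
    """Conditional intensity at time t given past event times (all in days)."""
    past = event_times[event_times < t]
    return mu + alpha * np.exp(-beta * (t - past)).sum()

# Hypothetical attack times (days); intensity rises right after clustered events.
events = np.array([10.0, 12.0, 13.0, 40.0])
for t in (9.0, 13.5, 20.0, 41.0):
    print(f"lambda({t:5.1f}) = {hawkes_intensity(t, events):.3f}")
```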