Enhancing Building Safety Design for Active Shooter Incidents: Exploration of Building Exit Parameters using Reinforcement Learning-Based Simulations
- URL: http://arxiv.org/abs/2407.10441v1
- Date: Mon, 15 Jul 2024 05:08:38 GMT
- Title: Enhancing Building Safety Design for Active Shooter Incidents: Exploration of Building Exit Parameters using Reinforcement Learning-Based Simulations
- Authors: Ruying Liu, Wanjing Wu, Burcin Becerik-Gerber, Gale M. Lucas
- Abstract summary: This study proposes a reinforcement learning-based simulation approach addressing gaps in existing research.
We developed an autonomous agent to simulate an active shooter within a realistic office environment.
- Score: 1.3374504717801061
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: With the alarming rise in active shooter incidents (ASIs) in the United States, enhancing public safety through building design has become a pressing need. This study proposes a reinforcement learning-based simulation approach addressing gaps in existing research that has neglected the dynamic behaviours of shooters. We developed an autonomous agent to simulate an active shooter within a realistic office environment, aiming to offer insights into the interactions between building design parameters and ASI outcomes. A case study is conducted to quantitatively investigate the impact of building exit numbers (total count of accessible exits) and configuration (arrangement of which exits are available or not) on evacuation and harm rates. Findings demonstrate that greater exit availability significantly improves evacuation outcomes and reduces harm. Exits nearer to the shooter's initial position hold greater importance for accessibility than those farther away. By encompassing dynamic shooter behaviours, this study offers preliminary insights into effective building safety design against evolving threats.
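The listing does not include the study's environment or agent code. Purely as a hedged illustration of the core idea, that an RL agent is trained inside a building whose exit configuration is a controllable design parameter, and that outcomes are then compared across exit counts and configurations, the sketch below uses a toy grid office and a tabular Q-learning occupant. Every name, parameter, and reward choice in it is a hypothetical assumption for illustration, not the authors' implementation, which models a shooter agent in a realistic office and also measures harm rates.

```python
# Minimal, hypothetical sketch: a toy grid "office" with a configurable set of
# exits and a tabular Q-learning occupant agent. All values below are
# illustrative assumptions, not the paper's actual environment or agent design.
import random
from itertools import combinations

SIZE = 8                                             # 8x8 grid "office"
EXIT_CANDIDATES = [(0, 3), (3, 7), (7, 4), (4, 0)]   # candidate exit cells on the border
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]         # up, down, left, right


def step(state, action, exits):
    """Move one cell, clamped to the grid; the episode ends when an open exit is reached."""
    r = min(max(state[0] + action[0], 0), SIZE - 1)
    c = min(max(state[1] + action[1], 0), SIZE - 1)
    nxt = (r, c)
    done = nxt in exits
    reward = 1.0 if done else -0.01                  # small step cost encourages fast egress
    return nxt, reward, done


def train_and_evaluate(exits, episodes=2000, alpha=0.3, gamma=0.95, eps=0.1):
    """Q-learning for one exit configuration; returns the greedy-policy evacuation rate."""
    Q = {}

    def q(s, a):
        return Q.get((s, a), 0.0)

    for _ in range(episodes):
        s = (random.randrange(1, SIZE - 1), random.randrange(1, SIZE - 1))  # interior start
        for _ in range(4 * SIZE):                    # step budget per episode
            a = (random.randrange(4) if random.random() < eps
                 else max(range(4), key=lambda i: q(s, ACTIONS[i])))
            s2, rew, done = step(s, ACTIONS[a], exits)
            target = rew if done else rew + gamma * max(q(s2, ACTIONS[i]) for i in range(4))
            Q[(s, ACTIONS[a])] = q(s, ACTIONS[a]) + alpha * (target - q(s, ACTIONS[a]))
            s = s2
            if done:
                break

    # Evaluate the greedy policy: fraction of random interior starts that escape in time.
    trials, evacuated = 200, 0
    for _ in range(trials):
        s = (random.randrange(1, SIZE - 1), random.randrange(1, SIZE - 1))
        for _ in range(4 * SIZE):
            a = max(range(4), key=lambda i: q(s, ACTIONS[i]))
            s, _, done = step(s, ACTIONS[a], exits)
            if done:
                evacuated += 1
                break
    return evacuated / trials


# Sweep exit count and configuration, mirroring the case-study design of varying
# how many exits are accessible and which ones.
for k in range(1, len(EXIT_CANDIDATES) + 1):
    for exits in combinations(EXIT_CANDIDATES, k):
        rate = train_and_evaluate(set(exits))
        print(f"{k} exit(s) open {exits}: evacuation rate {rate:.2f}")
```

In such a sweep, the exit count corresponds to the paper's "total count of accessible exits" and each combination corresponds to one "configuration"; the printed rates stand in for the evacuation and harm metrics the study actually reports.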
Related papers
- A Multi-agent Simulation for the Mass School Shootings [8.46557643646903]
The increasing frequency of mass school shootings in the United States is a critical concern.
This study aims to address the challenge of simulating and assessing potential mitigation measures by developing a multi-agent simulation model.
arXiv Detail & Related papers (2024-12-05T05:29:57Z) - Variable-Agnostic Causal Exploration for Reinforcement Learning [56.52768265734155]
We introduce a novel framework, Variable-Agnostic Causal Exploration for Reinforcement Learning (VACERL).
Our approach automatically identifies crucial observation-action steps associated with key variables using attention mechanisms.
It constructs the causal graph connecting these steps, which guides the agent towards observation-action pairs with greater causal influence on task completion.
arXiv Detail & Related papers (2024-07-17T09:45:27Z) - Unlearning Backdoor Threats: Enhancing Backdoor Defense in Multimodal Contrastive Learning via Local Token Unlearning [49.242828934501986]
Multimodal contrastive learning has emerged as a powerful paradigm for building high-quality features.
However, backdoor attacks subtly embed malicious behaviors within the model during training.
We introduce an innovative token-based localized forgetting training regime.
arXiv Detail & Related papers (2024-03-24T18:33:15Z) - HAZARD Challenge: Embodied Decision Making in Dynamically Changing Environments [93.94020724735199]
HAZARD consists of three unexpected disaster scenarios, including fire, flood, and wind.
This benchmark enables us to evaluate autonomous agents' decision-making capabilities across various pipelines.
arXiv Detail & Related papers (2024-01-23T18:59:43Z) - Safeguarded Progress in Reinforcement Learning: Safe Bayesian Exploration for Control Policy Synthesis [63.532413807686524]
This paper addresses the problem of maintaining safety during training in Reinforcement Learning (RL).
We propose a new architecture that handles the trade-off between efficient progress and safety during exploration.
arXiv Detail & Related papers (2023-12-18T16:09:43Z) - Adversarial Robustness Unhardening via Backdoor Attacks in Federated Learning [13.12397828096428]
Adversarial Robustness Unhardening (ARU) is employed by a subset of adversaries to intentionally undermine model robustness during decentralized training.
We present empirical experiments evaluating ARU's impact on adversarial training and existing robust aggregation defenses against poisoning and backdoor attacks.
arXiv Detail & Related papers (2023-10-17T21:38:41Z) - BARReL: Bottleneck Attention for Adversarial Robustness in Vision-Based Reinforcement Learning [20.468991996052953]
We investigate the susceptibility of vision-based reinforcement learning agents to gradient-based adversarial attacks.
We show how learned attention maps can be used to recover activations of a convolutional layer.
Across a number of RL environments, BAM-enhanced architectures show increased robustness during inference.
arXiv Detail & Related papers (2022-08-22T17:54:34Z) - Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study blackbox adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z) - Where Did You Learn That From? Surprising Effectiveness of Membership Inference Attacks Against Temporally Correlated Data in Deep Reinforcement Learning [114.9857000195174]
A major challenge to widespread industrial adoption of deep reinforcement learning is the potential vulnerability to privacy breaches.
We propose an adversarial attack framework tailored for testing the vulnerability of deep reinforcement learning algorithms to membership inference attacks.
arXiv Detail & Related papers (2021-09-08T23:44:57Z) - Deep reinforcement learning with a particle dynamics environment applied to emergency evacuation of a room with obstacles [3.031582944011582]
We develop a deep reinforcement learning algorithm in association with the social force model to train agents to find the fastest evacuation path.
We first show that, in the case of a room without obstacles, the resulting self-driven force points directly towards the exit, as in the social force model (whose standard driving-force term is sketched after this entry).
We also show that our method produces results similar to those of the social force model when the obstacle is convex.
arXiv Detail & Related papers (2020-11-30T19:34:57Z)
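For readers unfamiliar with the social force model referenced in the last entry, its self-driven (desired) force term, given here in the standard Helbing-Molnár formulation rather than necessarily that paper's exact variant, steers each pedestrian toward a chosen exit:

```latex
% Standard social-force driving term (Helbing & Molnar): pedestrian i relaxes
% from current velocity v_i toward desired speed v_i^0 along the desired
% direction e_i^0 (pointing toward the chosen exit) within relaxation time tau_i.
F_i^{\mathrm{drive}} = m_i \, \frac{v_i^{0}\, \mathbf{e}_i^{0} - \mathbf{v}_i}{\tau_i}
```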