A Framework for Operations Research Model Use in Resilience to
Fundamental Surprise Events: Observations from University Operations during
COVID-19
- URL: http://arxiv.org/abs/2210.08963v1
- Date: Tue, 20 Sep 2022 10:46:47 GMT
- Title: A Framework for Operations Research Model Use in Resilience to
Fundamental Surprise Events: Observations from University Operations during
COVID-19
- Authors: Thomas C. Sharkey, Steven Foster, Sudeep Hegde, Mary E. Kurz, and
Emily L. Tucker
- Abstract summary: Operations research (OR) approaches have been increasingly applied to model the resilience of a system to surprise events.
We provide a framework for how OR models were applied by a university in response to the pandemic.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Operations research (OR) approaches have been increasingly applied to model
the resilience of a system to surprise events. In order to model a surprise
event, one must have an understanding of its characteristics, which then become
parameters, decisions, and/or constraints in the resulting model. This means
that these models cannot (directly) handle fundamental surprise events, which
are events that could not be defined before they happen. However, OR models may
be adapted, improvised, or created during a fundamental surprise event, such as
the COVID-19 pandemic, to help respond to it. We provide a framework for how OR
models were applied by a university in response to the pandemic, thus helping
to understand the role of OR models during fundamental surprise events. Our
framework includes the following adaptations: adapting data, adding
constraints, model switching, pulling from the modeling toolkit, and creating a
new model. Each of these adaptations is formally presented, with supporting
evidence gathered through interviews with modelers and users involved in the
university response to the pandemic. We discuss the implications of this
framework for both OR and resilience.
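The "adding constraints" adaptation named in the abstract can be illustrated with a toy model. The sketch below is a generic seat-assignment example with invented section sizes, capacities, and a brute-force solver; it is not taken from the paper. It shows the pattern the framework describes: the model structure survives the surprise event, and only a constraint is tightened before re-solving.

```python
# Toy illustration of the "adding constraints" adaptation: re-solve a small
# seat-assignment model after a distancing-style capacity cut.
# All numbers are invented for illustration.
from itertools import combinations

def best_assignment(enrollments, capacity):
    """Pick the subset of class sections (by enrollment) that seats the most
    students without exceeding room capacity. Brute force is fine at toy scale."""
    best_total = 0
    for r in range(len(enrollments) + 1):
        for combo in combinations(enrollments, r):
            total = sum(combo)
            if total <= capacity:
                best_total = max(best_total, total)
    return best_total

sections = [30, 45, 25, 60]
pre_pandemic = best_assignment(sections, capacity=120)   # 115 seats used
post_pandemic = best_assignment(sections, capacity=60)   # 60 seats used
```

Only the `capacity` parameter changes between the two solves; the objective and decision structure are reused, which is what distinguishes adapting an existing OR model from building a new one.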
Related papers
- Revisiting SMoE Language Models by Evaluating Inefficiencies with Task Specific Expert Pruning [78.72226641279863]
Sparse Mixture of Experts (SMoE) models have emerged as a scalable alternative to dense models in language modeling.
Our research explores task-specific model pruning to inform decisions about designing SMoE architectures.
We introduce an adaptive task-aware pruning technique UNCURL to reduce the number of experts per MoE layer in an offline manner post-training.
arXiv Detail & Related papers (2024-09-02T22:35:03Z)
- Learning-based Models for Vulnerability Detection: An Extensive Study [3.1317409221921144]
We extensively and comprehensively investigate two types of state-of-the-art learning-based approaches.
We experimentally demonstrate the superiority of sequence-based models and the limited capabilities of graph-based models.
arXiv Detail & Related papers (2024-08-14T13:01:30Z)
- Limitations of Agents Simulated by Predictive Models [1.6649383443094403]
We outline two structural reasons for why predictive models can fail when turned into agents.
We show that both of those failures are fixed by including a feedback loop from the environment.
Our treatment provides a unifying view of those failure modes, and informs the question of why fine-tuning offline learned policies with online learning makes them more effective.
arXiv Detail & Related papers (2024-02-08T17:08:08Z)
- Minimal Value-Equivalent Partial Models for Scalable and Robust Planning in Lifelong Reinforcement Learning [56.50123642237106]
Common practice in model-based reinforcement learning is to learn models that model every aspect of the agent's environment.
We argue that such models are not particularly well-suited for performing scalable and robust planning in lifelong reinforcement learning scenarios.
We propose new kinds of models that only model the relevant aspects of the environment, which we call "minimal value-equivalent partial models".
arXiv Detail & Related papers (2023-01-24T16:40:01Z)
- Large Language Models with Controllable Working Memory [64.71038763708161]
Large language models (LLMs) have led to a series of breakthroughs in natural language processing (NLP)
What further sets these models apart is the massive amounts of world knowledge they internalize during pretraining.
How the model's world knowledge interacts with the factual information presented in the context remains underexplored.
arXiv Detail & Related papers (2022-11-09T18:58:29Z)
- On the Generalization and Adaption Performance of Causal Models [99.64022680811281]
Differentiable causal discovery proposes to factorize the data-generating process into a set of modules.
We study the generalization and adaption performance of such modular neural causal models.
Our analysis shows that the modular neural causal models outperform other models on both zero and few-shot adaptation in low data regimes.
arXiv Detail & Related papers (2022-06-09T17:12:32Z)
- Unifying Epidemic Models with Mixtures [28.771032745045428]
The COVID-19 pandemic has emphasized the need for a robust understanding of epidemic models.
Here, we introduce a simple mixture-based model which bridges the two approaches.
Although the model is non-mechanistic, we show that it arises as the natural outcome of a process based on a networked SIR framework.
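As background for the networked SIR framework mentioned in that summary, a minimal discrete-time SIR iteration looks like the sketch below. This is the generic textbook dynamics with illustrative parameters, not the mixture model from the cited paper.

```python
# Minimal discrete-time SIR epidemic sketch (illustrative parameters only).

def simulate_sir(s0, i0, r0, beta, gamma, steps):
    """Iterate the classic SIR update: S -> I at rate beta*S*I/N,
    I -> R at rate gamma*I. Returns the (S, I, R) trajectory."""
    n = s0 + i0 + r0
    s, i, r = float(s0), float(i0), float(r0)
    history = [(s, i, r)]
    for _ in range(steps):
        new_infections = beta * s * i / n
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

traj = simulate_sir(s0=990, i0=10, r0=0, beta=0.3, gamma=0.1, steps=100)
peak_infected = max(i for _, i, _ in traj)
```

With these parameters the basic reproduction number is beta/gamma = 3, so infections rise to a peak and then decay as the susceptible pool is depleted.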
arXiv Detail & Related papers (2022-01-07T19:42:05Z)
- Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations [97.91630330328815]
We conduct a crowdsourcing study, where participants interact with deception detection models that have been trained to distinguish between genuine and fake hotel reviews.
We observe that for a linear bag-of-words model, participants with access to the feature coefficients during training are able to cause a larger reduction in model confidence in the testing phase when compared to the no-explanation control.
arXiv Detail & Related papers (2021-12-17T18:29:56Z)
- Thief, Beware of What Get You There: Towards Understanding Model Extraction Attack [13.28881502612207]
In some scenarios, AI models are trained proprietarily, where neither pre-trained models nor sufficient in-distribution data is publicly available.
We find the effectiveness of existing techniques significantly affected by the absence of pre-trained models.
We formulate model extraction attacks into an adaptive framework that captures these factors with deep reinforcement learning.
arXiv Detail & Related papers (2021-04-13T03:46:59Z)
- An Optimal Control Approach to Learning in SIDARTHE Epidemic model [67.22168759751541]
We propose a general approach for learning time-variant parameters of dynamic compartmental models from epidemic data.
We forecast the epidemic evolution in Italy and France.
arXiv Detail & Related papers (2020-10-28T10:58:59Z)
- Learning Opinion Dynamics From Social Traces [25.161493874783584]
We propose an inference mechanism for fitting a generative, agent-like model of opinion dynamics to real-world social traces.
We showcase our proposal by translating a classical agent-based model of opinion dynamics into its generative counterpart.
We apply our model to real-world data from Reddit to explore the long-standing question of the impact of the backfire effect.
arXiv Detail & Related papers (2020-06-02T14:48:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.