Formal specification terminology for demographic agent-based models of
fixed-step single-clocked simulations
- URL: http://arxiv.org/abs/2308.13081v3
- Date: Fri, 20 Oct 2023 14:05:12 GMT
- Title: Formal specification terminology for demographic agent-based models of
fixed-step single-clocked simulations
- Authors: Atiyah Elsheikh
- Abstract summary: This document presents adequate formal terminology for the mathematical specification of a subset of Agent Based Models (ABMs).
The proposed terminology further improves the model understanding and can act as a stand-alone protocol for the specification.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This document presents adequate formal terminology for the mathematical
specification of a subset of Agent Based Models (ABMs) in the field of
Demography. The simulation of the targeted ABMs follows a fixed-step
single-clocked pattern. The proposed terminology further improves model
understanding and can act as a stand-alone protocol for the specification and,
optionally, the documentation of a significant set of (demographic) ABMs.
Nevertheless, it is conceivable that this terminology can serve as an inspiring
basis for further improvement of the largely informal, widely used ODD model
documentation and communication protocol [Grimm et al., 2020; Amouroux et al.,
2010], reducing many sources of ambiguity that hinder model replication by
other modelers. A published demographic model documentation, a largely
simplified version of the Lone Parent Model [Gostoli and Silverman, 2020], is
separately published in [Elsheikh, 2023c] as an illustration of the formal
terminology presented here. The model was implemented in the Julia
language [Elsheikh, 2023b] based on the Agents.jl Julia package [Datseris et
al., 2022].
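The fixed-step single-clocked simulation pattern named in the abstract can be sketched in a few lines: the whole population shares one clock, and every tick applies the demographic events (ageing, deaths, births) in a fixed order. The following Python sketch is illustrative only, not the author's Julia/Agents.jl implementation; the agent attributes, the toy mortality hazard, and the fertility rate are all hypothetical placeholders.

```python
import random

class Person:
    """Minimal agent: only an age and an alive flag (hypothetical attributes)."""
    def __init__(self, age):
        self.age = age
        self.alive = True

def mortality(age):
    # Toy hazard rising with age; purely illustrative, not calibrated.
    return 0.001 * 1.1 ** (age / 10)

def step(population, dt, rng):
    """One tick of the single shared clock: ageing, deaths, then births."""
    for p in population:
        if p.alive:
            p.age += dt
            if rng.random() < mortality(p.age) * dt:
                p.alive = False
    # Births: a flat per-capita rate for agents of childbearing age (assumed).
    births = sum(1 for p in population
                 if p.alive and 20 <= p.age < 40 and rng.random() < 0.05 * dt)
    population.extend(Person(0.0) for _ in range(births))

def simulate(n0=100, years=10, dt=1.0, seed=0):
    rng = random.Random(seed)
    population = [Person(rng.uniform(0, 80)) for _ in range(n0)]
    # Fixed-step loop: every agent is updated once per tick of length dt.
    for _ in range(int(years / dt)):
        step(population, dt, rng)
    return population
```

The point of the pattern is that all events are driven by the same discrete clock with a fixed step `dt`, rather than by per-agent event queues; this is the structure the paper's terminology formalizes.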
Related papers
- LimiX: Unleashing Structured-Data Modeling Capability for Generalist Intelligence [61.46575527504109]
LimiX-16M and LimiX-2M treat structured data as a joint distribution over variables and missingness. We evaluate LimiX models across 11 large structured-data benchmarks with broad regimes of sample size, feature dimensionality, class number, categorical-to-numerical feature ratio, missingness, and sample-to-feature ratios.
arXiv Detail & Related papers (2025-09-03T17:39:08Z) - Adapting Definition Modeling for New Languages: A Case Study on Belarusian [2.2120851074630177]
We propose a novel dataset of 43,150 definitions in Belarusian. Our experiments demonstrate that adapting definition modeling systems requires minimal amounts of data, but that there are currently gaps in what automatic metrics capture.
arXiv Detail & Related papers (2025-07-13T08:35:23Z) - Learning Diffusion Models with Flexible Representation Guidance [49.26046407886349]
We present a systematic framework for incorporating representation guidance into diffusion models. We introduce two new strategies for enhancing representation alignment in diffusion models. Experiments across image, protein sequence, and molecule generation tasks demonstrate superior performance as well as accelerated training.
arXiv Detail & Related papers (2025-07-11T19:29:02Z) - Large Language Bayes [22.372504018202154]
This paper presents a method that takes an informal problem description as input and combines a large language model with a probabilistic programming language.
A posterior over latent variables follows by conditioning on observed data and integrating over formal models.
We show that this produces sensible predictions without the need to specify a formal model.
arXiv Detail & Related papers (2025-04-18T18:30:29Z) - Scaling Diffusion Language Models via Adaptation from Autoregressive Models [105.70889434492143]
Diffusion Language Models (DLMs) have emerged as a promising new paradigm for text generative modeling.
We show that we can convert AR models ranging from 127M to 7B parameters into diffusion models DiffuGPT and DiffuLLaMA, using less than 200B tokens for training.
Our experimental results reveal that these models outperform earlier DLMs and are competitive with their AR counterparts.
arXiv Detail & Related papers (2024-10-23T14:04:22Z) - Less is More: Making Smaller Language Models Competent Subgraph Retrievers for Multi-hop KGQA [51.3033125256716]
We model the subgraph retrieval task as a conditional generation task handled by small language models.
Our base generative subgraph retrieval model, consisting of only 220M parameters, achieves competitive retrieval performance compared to state-of-the-art models.
Our largest 3B model, when plugged with an LLM reader, sets new SOTA end-to-end performance on both the WebQSP and CWQ benchmarks.
arXiv Detail & Related papers (2024-10-08T15:22:36Z) - Meaning Representations from Trajectories in Autoregressive Models [106.63181745054571]
We propose to extract meaning representations from autoregressive language models by considering the distribution of all possible trajectories extending an input text.
This strategy is prompt-free, does not require fine-tuning, and is applicable to any pre-trained autoregressive model.
We empirically show that the representations obtained from large models align well with human annotations, outperform other zero-shot and prompt-free methods on semantic similarity tasks, and can be used to solve more complex entailment and containment tasks that standard embeddings cannot handle.
arXiv Detail & Related papers (2023-10-23T04:35:58Z) - Decoding the Alphabet Soup of Degrees in the United States Postsecondary
Education System Through Hybrid Method: Database and Text Mining [0.0]
This paper proposes a model to predict the levels (e.g., Bachelor, Master, etc.) of postsecondary degree awards that have been ambiguously expressed in the student tracking reports of the National Student Clearinghouse (NSC).
The model was trained with four multi-label datasets of different grades of resolution and returned 97.83% accuracy with the most sophisticated dataset.
arXiv Detail & Related papers (2023-09-06T16:03:14Z) - Specification of MiniDemographicABM.jl: A simplified agent-based
demographic model of the UK [0.0]
This documentation specifies a non-calibrated demographic agent-based model of the UK.
In the presented model, individuals of an initial population are subject to ageing, deaths, births, divorces and marriages.
The model serves as a base implementation to be adjusted to realistic large-scale socio-economics, pandemics or immigration studies.
arXiv Detail & Related papers (2023-07-31T10:28:23Z) - Improving Aspect-Based Sentiment with End-to-End Semantic Role Labeling
Model [6.85316573653194]
This paper presents a series of approaches aimed at enhancing the performance of Aspect-Based Sentiment Analysis (ABSA).
We propose a novel end-to-end Semantic Role Labeling model that effectively captures most of the structured semantic information within the Transformer hidden state.
We evaluate the proposed models in two languages, English and Czech, employing ELECTRA-small models.
arXiv Detail & Related papers (2023-07-27T11:28:16Z) - Did the Models Understand Documents? Benchmarking Models for Language
Understanding in Document-Level Relation Extraction [2.4665182280122577]
Document-level relation extraction (DocRE) has attracted growing research interest recently.
While models achieve consistent performance gains in DocRE, their underlying decision rules are still understudied.
In this paper, we take the first step toward answering this question and then introduce a new perspective on comprehensively evaluating a model.
arXiv Detail & Related papers (2023-06-20T08:52:05Z) - Structured Thoughts Automaton: First Formalized Execution Model for
Auto-Regressive Language Models [0.0]
We introduce a new algorithm for sampling the predictions of LMs, which we use to build a reliable and inspectable execution model.
We introduce a low-level language to write "cognitive program" for this execution model.
arXiv Detail & Related papers (2023-06-16T22:04:50Z) - Progressive Tree-Structured Prototype Network for End-to-End Image
Captioning [74.8547752611337]
We propose a novel Progressive Tree-Structured prototype Network (dubbed PTSN)
PTSN is the first attempt to narrow down the scope of prediction words with appropriate semantics by modeling the hierarchical textual semantics.
Our method achieves a new state-of-the-art performance with 144.2% (single model) and 146.5% (ensemble of 4 models) CIDEr scores on the Karpathy split and 141.4% (c5) and 143.9% (c40) CIDEr scores on the official online test server.
arXiv Detail & Related papers (2022-11-17T11:04:00Z) - To what extent do human explanations of model behavior align with actual
model behavior? [91.67905128825402]
We investigated the extent to which human-generated explanations of models' inference decisions align with how models actually make these decisions.
We defined two alignment metrics that quantify how well natural language human explanations align with model sensitivity to input words.
We find that a model's alignment with human explanations is not predicted by the model's accuracy on NLI.
arXiv Detail & Related papers (2020-12-24T17:40:06Z) - Unsupervised Paraphrasing with Pretrained Language Models [85.03373221588707]
We propose a training pipeline that enables pre-trained language models to generate high-quality paraphrases in an unsupervised setting.
Our recipe consists of task-adaptation, self-supervision, and a novel decoding algorithm named Dynamic Blocking.
We show with automatic and human evaluations that our approach achieves state-of-the-art performance on both the Quora Question Pair and the ParaNMT datasets.
arXiv Detail & Related papers (2020-10-24T11:55:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all information) and is not responsible for any consequences.