Likelihood-Based Methods Improve Parameter Estimation in Opinion
Dynamics Models
- URL: http://arxiv.org/abs/2310.02766v2
- Date: Thu, 5 Oct 2023 09:22:03 GMT
- Title: Likelihood-Based Methods Improve Parameter Estimation in Opinion
Dynamics Models
- Authors: Jacopo Lenti, Corrado Monti, Gianmarco De Francisci Morales
- Abstract summary: We show that a maximum likelihood approach for parameter estimation in agent-based models (ABMs) of opinion dynamics outperforms the typical simulation-based approach.
In contrast, likelihood-based approaches derive a likelihood function that connects the unknown parameters to the observed data in a statistically principled way.
Our experimental results show that the maximum likelihood estimates are up to 4x more accurate and require up to 200x less computational time.
- Score: 6.138671548064356
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We show that a maximum likelihood approach for parameter estimation in
agent-based models (ABMs) of opinion dynamics outperforms the typical
simulation-based approach. Simulation-based approaches simulate the model
repeatedly in search of a set of parameters that generates data similar enough
to the observed one. In contrast, likelihood-based approaches derive a
likelihood function that connects the unknown parameters to the observed data
in a statistically principled way. We compare these two approaches on the
well-known bounded-confidence model of opinion dynamics. We do so on three
realistic scenarios of increasing complexity depending on data availability:
(i) fully observed opinions and interactions, (ii) partially observed
interactions, (iii) observed interactions with noisy proxies of the opinions.
We highlight how identifying observed and latent variables is fundamental for
connecting the model to the data. To realize the likelihood-based approach, we
first cast the model into a probabilistic generative guise that supports a
proper data likelihood. Then, we describe the three scenarios via probabilistic
graphical models and show the nuances that go into translating the model.
Finally, we implement the resulting probabilistic models in an automatic
differentiation framework (PyTorch). This step enables easy and efficient
maximum likelihood estimation via gradient descent. Our experimental results
show that the maximum likelihood estimates are up to 4x more accurate and
require up to 200x less computational time.
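As a rough illustration of the gradient-based maximum likelihood idea in the fully observed scenario (a minimal sketch, not the paper's implementation: it assumes a hypothetical smoothed interaction probability sigmoid((eps - |x_i - x_j|)/tau) in place of the hard bounded-confidence threshold, and uses hand-coded gradient ascent in plain Python rather than PyTorch autograd):

```python
import math
import random

# Sketch: likelihood-based estimation of the confidence bound eps in a
# bounded-confidence-style model. Assumption (not from the paper): each pair
# of agents interacts with probability sigmoid((eps - |x_i - x_j|) / TAU),
# a smoothed version of the hard threshold |x_i - x_j| < eps, which makes
# the Bernoulli log-likelihood differentiable in eps.

random.seed(0)
TAU = 0.05        # smoothing temperature (assumed)
EPS_TRUE = 0.3    # ground-truth confidence bound used to generate data

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic "fully observed" data: opinions plus pairwise interaction outcomes.
opinions = [random.random() for _ in range(60)]
dists, outcomes = [], []
for i in range(len(opinions)):
    for j in range(i + 1, len(opinions)):
        d = abs(opinions[i] - opinions[j])
        dists.append(d)
        outcomes.append(1 if random.random() < sigmoid((EPS_TRUE - d) / TAU) else 0)

def avg_grad(eps):
    # Derivative of the average Bernoulli log-likelihood w.r.t. eps:
    # each pair contributes (y - p) / TAU, with p = sigmoid((eps - d) / TAU).
    g = sum((y - sigmoid((eps - d) / TAU)) / TAU for d, y in zip(dists, outcomes))
    return g / len(dists)

# Plain gradient ascent on the log-likelihood (the paper instead relies on
# automatic differentiation in PyTorch for this step).
eps_hat = 0.5
for _ in range(300):
    eps_hat += 0.01 * avg_grad(eps_hat)

print(round(eps_hat, 3))  # recovers a value close to EPS_TRUE
```

The key point this toy example shares with the paper is the recipe: cast the model as a probabilistic generative process, write down the resulting data likelihood, and let gradient-based optimization find the parameters, instead of repeatedly simulating the model and comparing summary statistics.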
Related papers
- Estimating Causal Effects from Learned Causal Networks [56.14597641617531]
We propose an alternative paradigm for answering causal-effect queries over discrete observable variables.
We learn the causal Bayesian network and its confounding latent variables directly from the observational data.
We show that this "model completion" learning approach can be more effective than estimand-based approaches.
arXiv Detail & Related papers (2024-08-26T08:39:09Z)
- Diffusion posterior sampling for simulation-based inference in tall data settings [53.17563688225137]
Simulation-based inference (SBI) is capable of approximating the posterior distribution that relates input parameters to a given observation.
In this work, we consider a tall data extension in which multiple observations are available to better infer the parameters of the model.
We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
arXiv Detail & Related papers (2024-04-11T09:23:36Z)
- Nonparametric likelihood-free inference with Jensen-Shannon divergence for simulator-based models with categorical output [1.4298334143083322]
Likelihood-free inference for simulator-based statistical models has attracted a surge of interest, both in the machine learning and statistics communities.
Here we derive a set of theoretical results to enable estimation, hypothesis testing, and construction of confidence intervals for model parameters using computational properties of the Jensen-Shannon divergence.
This approximation offers a rapid alternative to more computation-intensive approaches and can be attractive for diverse applications of simulator-based models.
arXiv Detail & Related papers (2022-05-22T18:00:13Z)
- Functional Mixtures-of-Experts [0.24578723416255746]
We consider the statistical analysis of heterogeneous data for prediction in situations where the observations include functions.
We first present a new family of ME models, named functional ME (FME), in which the predictors are potentially noisy observations.
We develop dedicated expectation--maximization algorithms for Lasso-like (EM-Lasso) regularized maximum-likelihood parameter estimation strategies to fit the models.
arXiv Detail & Related papers (2022-02-04T17:32:28Z)
- Time varying regression with hidden linear dynamics [74.9914602730208]
We revisit a model for time-varying linear regression that assumes the unknown parameters evolve according to a linear dynamical system.
Counterintuitively, we show that when the underlying dynamics are stable the parameters of this model can be estimated from data by combining just two ordinary least squares estimates.
arXiv Detail & Related papers (2021-12-29T23:37:06Z)
- PSD Representations for Effective Probability Models [117.35298398434628]
We show that a recently proposed class of positive semi-definite (PSD) models for non-negative functions is particularly suited to this end.
We characterize both approximation and generalization capabilities of PSD models, showing that they enjoy strong theoretical guarantees.
Our results open the way to applications of PSD models to density estimation, decision theory and inference.
arXiv Detail & Related papers (2021-06-30T15:13:39Z)
- MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data.
arXiv Detail & Related papers (2021-06-03T12:59:16Z)
- Score Matched Conditional Exponential Families for Likelihood-Free Inference [0.0]
Likelihood-Free Inference (LFI) relies on simulations from the model.
We generate parameter-simulation pairs from the model independently on the observation.
We use Neural Networks whose weights are tuned with Score Matching to learn a conditional exponential family likelihood approximation.
arXiv Detail & Related papers (2020-12-20T11:57:30Z)
- Amortized Bayesian model comparison with evidential deep learning [0.12314765641075436]
We propose a novel method for performing Bayesian model comparison using specialized deep learning architectures.
Our method is purely simulation-based and circumvents the step of explicitly fitting all alternative models under consideration to each observed dataset.
We show that our method achieves excellent results in terms of accuracy, calibration, and efficiency across the examples considered in this work.
arXiv Detail & Related papers (2020-04-22T15:15:46Z)
- Machine learning for causal inference: on the use of cross-fit estimators [77.34726150561087]
Doubly-robust cross-fit estimators have been proposed to yield better statistical properties.
We conducted a simulation study to assess the performance of several estimators for the average causal effect (ACE).
When used with machine learning, the doubly-robust cross-fit estimators substantially outperformed all of the other estimators in terms of bias, variance, and confidence interval coverage.
arXiv Detail & Related papers (2020-04-21T23:09:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.