Multipopulation mortality modelling and forecasting: The multivariate
functional principal component with time weightings approaches
- URL: http://arxiv.org/abs/2102.09612v1
- Date: Thu, 18 Feb 2021 21:01:58 GMT
- Title: Multipopulation mortality modelling and forecasting: The multivariate
functional principal component with time weightings approaches
- Authors: Ka Kin Lam, Bo Wang
- Abstract summary: We introduce two new models for jointly modelling and forecasting the mortality of multiple subpopulations.
The first proposed model extends the independent functional data model to a multi-population modelling setting.
The second proposed model outperforms the first model as well as the current models in terms of forecast accuracy.
- Score: 3.450774887322348
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Human mortality patterns and trajectories in closely related populations are
likely linked together and share similarities. It is always desirable to model
them simultaneously while taking their heterogeneity into account. This paper
introduces two new models for jointly modelling and forecasting the mortality of
multiple subpopulations by adapting multivariate functional principal
component analysis techniques. The first model extends the independent
functional data model to a multi-population modelling setting. In the second
one, we propose a novel multivariate functional principal component method for
coherent modelling. Its design embodies the idea that when several
subpopulation groups share similar socio-economic conditions or common
biological characteristics, their mortality trajectories are expected to evolve
in a non-diverging fashion. We demonstrate the proposed methods using
sex-specific mortality data. Their forecast performance is further compared
with that of several existing models, including the independent functional data
model and the Product-Ratio model, on mortality data from ten developed
countries. Our experimental results show that the first proposed model achieves
forecast accuracy comparable to the existing methods, whereas the second
proposed model outperforms both the first model and the existing models in
forecast accuracy, while also possessing several desirable properties.
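To make the modelling idea concrete, below is a minimal sketch, not the authors' implementation, of joint mortality forecasting via multivariate functional principal component analysis: log-mortality curves of two subpopulations are concatenated along the age dimension, principal components are extracted from the combined curves, and the component scores are forecast as time series. The synthetic data, the number of retained components, and the random-walk-with-drift score forecasts are illustrative assumptions; the paper's time-weighting schemes and coherence constraints are not reproduced here.

```python
# Minimal sketch (not the authors' method) of multivariate functional PCA
# forecasting for two related subpopulations, e.g. female and male mortality.
import numpy as np

rng = np.random.default_rng(0)
n_years, n_ages = 50, 101                      # calendar years, ages 0..100
ages = np.arange(n_ages)

# Synthetic log-mortality surfaces: a shared downward trend plus a
# population-specific level shift and noise (purely illustrative data).
trend = -0.02 * np.arange(n_years)[:, None]
base = -8.0 + 0.07 * ages                      # crude log-linear age profile
log_m_f = base + trend + rng.normal(0, 0.05, (n_years, n_ages))
log_m_m = base + 0.3 + trend + rng.normal(0, 0.05, (n_years, n_ages))

# Multivariate functional observation: concatenate both curves for each year.
X = np.hstack([log_m_f, log_m_m])              # shape (n_years, 2 * n_ages)
mu = X.mean(axis=0)
Xc = X - mu

# Functional PCA via SVD of the centred joint curves.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
K = 3                                          # retained components (assumed)
scores = U[:, :K] * s[:K]                      # score time series, one per component
phis = Vt[:K]                                  # joint principal component functions

# Forecast each score series h steps ahead with a random walk with drift,
# a simple stand-in for the ARIMA-type score forecasts used in practice.
h = 20
drift = (scores[-1] - scores[0]) / (n_years - 1)
future_scores = scores[-1] + np.outer(np.arange(1, h + 1), drift)

# Reconstruct joint forecast curves and split back into the two populations.
forecast = mu + future_scores @ phis           # shape (h, 2 * n_ages)
forecast_f, forecast_m = forecast[:, :n_ages], forecast[:, n_ages:]
print("female log-mortality forecast at age 65, year +20:", forecast_f[-1, 65])
```

Because the two populations share the same principal component scores in this sketch, their forecast curves move together, which loosely mirrors the non-diverging behaviour the coherent model is designed to enforce.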
Related papers
- Exploring Model Kinship for Merging Large Language Models [52.01652098827454]
We introduce model kinship, the degree of similarity or relatedness between Large Language Models.
We find that there is a certain relationship between model kinship and the performance gains after model merging.
We propose a new model merging strategy: Top-k Greedy Merging with Model Kinship, which can yield better performance on benchmark datasets.
arXiv Detail & Related papers (2024-10-16T14:29:29Z) - MITA: Bridging the Gap between Model and Data for Test-time Adaptation [68.62509948690698]
Test-Time Adaptation (TTA) has emerged as a promising paradigm for enhancing the generalizability of models.
We propose Meet-In-The-Middle based MITA, which introduces energy-based optimization to encourage mutual adaptation of the model and data from opposing directions.
arXiv Detail & Related papers (2024-10-12T07:02:33Z) - Embedding-based statistical inference on generative models [10.948308354932639]
We extend results related to embedding-based representations of generative models to classical statistical inference settings.
We demonstrate that using the perspective space as the basis of a notion of "similar" is effective for multiple model-level inference tasks.
arXiv Detail & Related papers (2024-10-01T22:28:39Z) - Dataless Knowledge Fusion by Merging Weights of Language Models [51.8162883997512]
Fine-tuning pre-trained language models has become the prevalent paradigm for building downstream NLP models.
This creates a barrier to fusing knowledge across individual models to yield a better single model.
We propose a dataless knowledge fusion method that merges models in their parameter space.
arXiv Detail & Related papers (2022-12-19T20:46:43Z) - On the Generalization and Adaption Performance of Causal Models [99.64022680811281]
Differentiable causal discovery proposes to factorize the data generating process into a set of modules.
We study the generalization and adaption performance of such modular neural causal models.
Our analysis shows that the modular neural causal models outperform other models on both zero and few-shot adaptation in low data regimes.
arXiv Detail & Related papers (2022-06-09T17:12:32Z) - Diversity vs. Recognizability: Human-like generalization in one-shot
generative models [5.964436882344729]
We propose a new framework to evaluate one-shot generative models along two axes: sample recognizability vs. diversity.
We first show that GAN-like and VAE-like models fall on opposite ends of the diversity-recognizability space.
In contrast, disentanglement transports the model along a parabolic curve that could be used to maximize recognizability.
arXiv Detail & Related papers (2022-05-20T13:17:08Z) - Model Compression for Domain Adaptation through Causal Effect Estimation [20.842938440720303]
ATE-guided Model Compression scheme (AMoC) generates many model candidates, differing by the model components that were removed.
Then, we select the best candidate through a stepwise regression model that utilizes the ATE to predict the expected performance on the target domain.
AMoC outperforms strong baselines on 46 of 60 domain pairs across two text classification tasks, with an average improvement of more than 3% in F1 above the strongest baseline.
arXiv Detail & Related papers (2021-01-18T14:18:02Z) - Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the ( aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z) - Probability Link Models with Symmetric Information Divergence [1.5749416770494706]
Two general classes of link models are proposed.
The first model links two survival functions and is applicable to models such as the proportional odds and change point.
The second model links two cumulative probability distribution functions.
arXiv Detail & Related papers (2020-08-10T19:49:51Z) - Semi-nonparametric Latent Class Choice Model with a Flexible Class
Membership Component: A Mixture Model Approach [6.509758931804479]
The proposed model formulates the latent classes using mixture models as an alternative approach to the traditional random utility specification.
Results show that mixture models improve the overall performance of latent class choice models.
arXiv Detail & Related papers (2020-07-06T13:19:26Z) - A General Framework for Survival Analysis and Multi-State Modelling [70.31153478610229]
We use neural ordinary differential equations as a flexible and general method for estimating multi-state survival models.
We show that our model exhibits state-of-the-art performance on popular survival data sets and demonstrate its efficacy in a multi-state setting.
arXiv Detail & Related papers (2020-06-08T19:24:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.