Using an interpretable Machine Learning approach to study the drivers of
International Migration
- URL: http://arxiv.org/abs/2006.03560v1
- Date: Fri, 5 Jun 2020 17:13:13 GMT
- Title: Using an interpretable Machine Learning approach to study the drivers of
International Migration
- Authors: Harold Silvère Kiossou, Yannik Schenk, Frédéric Docquier,
Vinasetan Ratheil Houndji, Siegfried Nijssen, Pierre Schaus
- Abstract summary: We propose an artificial neural network (ANN) to model international migration.
We use a technique for interpreting machine learning models, Partial Dependence Plots (PDP), to study the effects of the drivers behind international migration.
- Score: 11.318307426609936
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Globally increasing migration pressures call for new modelling
approaches in order to design effective policies. It is important not only to
have efficient models for predicting migration flows, but also to understand
how specific parameters influence these flows. In this paper, we propose an
artificial neural network (ANN) to model international migration. Moreover, we
use a technique for interpreting machine learning models, namely Partial
Dependence Plots (PDP), to show that the effects of the drivers behind
international migration can be studied in detail. We train and evaluate the
model on a dataset containing annual international bilateral migration from
1960 to 2010, from 175 origin countries to 33 mainly OECD destinations, along
with the main determinants identified in the migration literature. The
experiments carried out confirm that: 1) the ANN model performs better than a
traditional model, and 2) using PDP we are able to gain additional insights
into the specific effects of the migration drivers. This approach provides much
more information than the feature-importance measures used in previous works.
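To make the workflow described in the abstract concrete, here is a minimal sketch (not the authors' code) of fitting a small feed-forward network to a tabular dataset of bilateral flows and then tracing driver effects with scikit-learn's partial dependence utilities. The file name and driver columns (`gdp_ratio`, `distance`, `network`) are hypothetical placeholders, not the variables used in the paper.

```python
# Minimal ANN + Partial Dependence Plot sketch (illustration only, not the paper's code).
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.inspection import PartialDependenceDisplay

# Hypothetical table: one row per origin-destination-year with candidate drivers and the observed flow.
df = pd.read_csv("bilateral_migration.csv")
drivers = ["gdp_ratio", "distance", "network"]   # placeholder driver names
X, y = df[drivers], df["flow"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Small feed-forward network; inputs are standardised before training.
ann = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
)
ann.fit(X_train, y_train)
print("Held-out R^2:", ann.score(X_test, y_test))

# Partial dependence: average predicted flow as each driver is varied in isolation.
PartialDependenceDisplay.from_estimator(ann, X_train, features=drivers)
plt.show()
```

The PDP curves are what distinguish this from a plain feature-importance ranking: they show not just which drivers matter, but the direction and shape of each driver's estimated effect.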
Related papers
- Towards detailed and interpretable hybrid modeling of continental-scale bird migration [9.887133861477231]
We build on a recently developed hybrid model of continental-scale bird migration, which combines a movement model inspired by fluid dynamics with recurrent neural networks.
FluxRGNN has been shown to successfully predict key migration patterns, but its spatial resolution is constrained by the typically sparse observations obtained from weather radars.
We propose two major modifications that allow for more detailed predictions on any desired tessellation while providing control over the interpretability of model components.
arXiv Detail & Related papers (2024-07-14T15:52:19Z)
- Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters [65.15700861265432]
We present a parameter-efficient continual learning framework to alleviate long-term forgetting in incremental learning with vision-language models.
Our approach involves the dynamic expansion of a pre-trained CLIP model, through the integration of Mixture-of-Experts (MoE) adapters.
To preserve the zero-shot recognition capability of vision-language models, we introduce a Distribution Discriminative Auto-Selector.
arXiv Detail & Related papers (2024-03-18T08:00:23Z)
- Rethinking Human-like Translation Strategy: Integrating Drift-Diffusion Model with Large Language Models for Machine Translation [15.333148705267012]
We propose Thinker with the Drift-Diffusion Model to emulate human translators' dynamic decision-making under constrained resources.
We conduct experiments under the high-resource, low-resource, and commonsense translation settings using the WMT22 and CommonMT datasets.
We also perform additional analysis and evaluation on commonsense translation to illustrate the high effectiveness and efficacy of the proposed method.
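For readers unfamiliar with the drift-diffusion component named in this entry, the following is a generic two-boundary drift-diffusion simulation (an illustration of the classical model, not the proposed Thinker method): noisy evidence accumulates with a constant drift until it crosses one of two decision boundaries, yielding a choice and a decision time.

```python
# Generic two-boundary drift-diffusion simulation (classical DDM, not the paper's Thinker model).
import numpy as np

def simulate_ddm(drift=0.3, noise=1.0, boundary=1.0, dt=0.001, max_t=5.0, rng=None):
    """Simulate one decision; returns (choice, decision_time)."""
    rng = rng or np.random.default_rng()
    evidence, t = 0.0, 0.0
    while abs(evidence) < boundary and t < max_t:
        # Euler step: deterministic drift plus Gaussian diffusion noise.
        evidence += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    choice = 1 if evidence >= boundary else 0   # upper boundary -> option 1; lower boundary or timeout -> option 0
    return choice, t

rng = np.random.default_rng(0)
choices, times = zip(*(simulate_ddm(rng=rng) for _ in range(1000)))
print("P(option 1) =", np.mean(choices), "| mean decision time =", round(np.mean(times), 3), "s")
```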
arXiv Detail & Related papers (2024-02-16T14:00:56Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective [106.92016199403042]
We empirically investigate knowledge transfer from larger to smaller models through a parametric perspective.
We employ sensitivity-based techniques to extract and align knowledge-specific parameters between different large language models.
Our findings highlight the critical factors contributing to the process of parametric knowledge transfer.
arXiv Detail & Related papers (2023-10-17T17:58:34Z)
- Masked Path Modeling for Vision-and-Language Navigation [41.7517631477082]
Vision-and-language navigation (VLN) agents are trained to navigate in real-world environments by following natural language instructions.
Previous approaches have attempted to address this issue by introducing additional supervision during training.
We introduce a masked path modeling (MPM) objective, which pretrains an agent using self-collected data for downstream navigation tasks.
arXiv Detail & Related papers (2023-05-23T17:20:20Z)
- Evaluating natural language processing models with generalization metrics that do not need access to any training or testing data [66.11139091362078]
We provide the first model selection results on large pretrained Transformers from Huggingface using generalization metrics.
Despite their niche status, we find that metrics derived from the heavy-tail (HT) perspective are particularly useful in NLP tasks.
arXiv Detail & Related papers (2022-02-06T20:07:35Z)
- Mixed-Lingual Pre-training for Cross-lingual Summarization [54.4823498438831]
Cross-lingual Summarization aims at producing a summary in the target language for an article in the source language.
We propose a solution based on mixed-lingual pre-training that leverages both cross-lingual tasks like translation and monolingual tasks like masked language models.
Our model achieves an improvement of 2.82 (English to Chinese) and 1.15 (Chinese to English) ROUGE-1 scores over state-of-the-art results.
arXiv Detail & Related papers (2020-10-18T00:21:53Z)
- VAE-LIME: Deep Generative Model Based Approach for Local Data-Driven Model Interpretability Applied to the Ironmaking Industry [70.10343492784465]
It is necessary to expose to the process engineer not only the model predictions but also their interpretability.
Model-agnostic local interpretability solutions based on LIME have recently emerged to improve the original method.
We present in this paper a novel approach, VAE-LIME, for local interpretability of data-driven models forecasting the temperature of the hot metal produced by a blast furnace.
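As a rough illustration of the kind of local, model-agnostic explanation this entry builds on, here is a plain LIME sketch for a tabular regressor (standard LIME, not the authors' VAE-LIME variant); the process variables and the synthetic hot-metal temperature target are made up for the example.

```python
# Plain LIME sketch for a tabular regressor (not the VAE-LIME method itself).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["blast_volume", "ore_feed", "coke_rate"]   # hypothetical process variables
X = rng.normal(size=(500, 3))
y = 1500 + 20 * X[:, 0] - 15 * X[:, 2] + rng.normal(scale=5, size=500)  # synthetic temperature target

model = RandomForestRegressor(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(X, feature_names=feature_names, mode="regression")
# Explain one forecast: which variables pushed this particular prediction up or down?
explanation = explainer.explain_instance(X[0], model.predict, num_features=3)
print(explanation.as_list())
```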
arXiv Detail & Related papers (2020-07-15T07:07:07Z)
- An LSTM approach to Forecast Migration using Google Trends [7.621862131380908]
We replace the linear gravity model with a long short-term memory (LSTM) approach and compare it with two existing approaches.
Our LSTM approach combined with Google Trends data outperforms both these models on various metrics.
arXiv Detail & Related papers (2020-05-20T08:07:42Z)
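The LSTM approach in the last entry can be sketched generically (this is not the authors' implementation): windows of past flows together with a Google-Trends-style covariate feed a small LSTM that predicts the next-period flow. The shapes and synthetic data below are placeholders.

```python
# Generic LSTM forecasting sketch (illustration only, not the paper's model).
import torch
import torch.nn as nn

class FlowLSTM(nn.Module):
    def __init__(self, n_features=2, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):               # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])    # predict next-period flow from the last hidden state

# Synthetic stand-in data: 256 windows of 12 time steps, 2 features (past flow + search intensity).
torch.manual_seed(0)
x = torch.randn(256, 12, 2)
y = 0.8 * x[:, -1, :1] + 0.1 * torch.randn(256, 1)   # toy target correlated with the last step

model = FlowLSTM()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(200):
    optimiser.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimiser.step()
print("final training MSE:", float(loss))
```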
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.