Learning to Learn the Macroscopic Fundamental Diagram using Physics-Informed and meta Machine Learning techniques
- URL: http://arxiv.org/abs/2508.14137v1
- Date: Tue, 19 Aug 2025 12:59:58 GMT
- Title: Learning to Learn the Macroscopic Fundamental Diagram using Physics-Informed and meta Machine Learning techniques
- Authors: Amalie Roark, Serio Agriesti, Francisco Camara Pereira, Guido Cantelmo
- Abstract summary: This article proposes a framework that trains models to understand and adapt to new tasks on their own, to alleviate the data scarcity challenge. The developed model is trained and tested by leveraging data from multiple cities and exploiting it to model the MFD of other cities with different shares of detectors and topological structures. Results show an average MSE improvement in flow prediction ranging between 17500 and 36000, depending on the subset of loop detectors tested.
- Score: 0.09999629695552192
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Macroscopic Fundamental Diagram (MFD) is a popular tool used to describe traffic dynamics in an aggregated way, with applications ranging from traffic control to incident analysis. However, estimating the MFD for a given network requires large numbers of loop detectors, which are not always available in practice. This article proposes a framework harnessing meta-learning, a subcategory of machine learning that trains models to understand and adapt to new tasks on their own, to alleviate the data scarcity challenge. The developed model is trained and tested by leveraging data from multiple cities and exploiting it to model the MFD of other cities with different shares of detectors and topological structures. The proposed meta-learning framework is applied to an ad-hoc Multi-Task Physics-Informed Neural Network, specifically designed to estimate the MFD. Results show an average MSE improvement in flow prediction ranging between ~17500 and ~36000 (depending on the subset of loop detectors tested). The meta-learning framework thus successfully generalizes across diverse urban settings and improves performance on cities with limited data, demonstrating the potential of meta-learning when only a limited number of detectors is available. Finally, the proposed framework is validated against traditional transfer learning approaches and tested with FitFun, a non-parametric model from the literature, to prove its transferability.
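The abstract does not spell out the model internals, so the following is only a rough, hypothetical sketch of the general idea: a small physics-informed MFD regressor (flow as a function of density, with a soft q(0) = 0 penalty standing in for the physics term) wrapped in a Reptile-style meta-learning outer loop over per-city tasks. The architecture, the physics penalty, the Reptile update, and all hyperparameters are illustrative assumptions, not the paper's actual Multi-Task Physics-Informed Neural Network.

```python
# Hypothetical sketch: Reptile-style meta-learning of a physics-informed MFD model.
# Functional form, physics penalty, and hyperparameters are illustrative assumptions.
import copy
import torch
import torch.nn as nn

class MFDNet(nn.Module):
    """Maps network density k to network flow q."""
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, k):
        return self.net(k)

def task_loss(model, k, q):
    """Data loss plus a simple physics-informed penalty (zero flow at zero density)."""
    mse = nn.functional.mse_loss(model(k), q)
    physics = model(torch.zeros(1, 1)).pow(2).mean()  # encourage q(k=0) ~ 0
    return mse + 0.1 * physics

def reptile(meta_model, tasks, meta_steps=200, inner_steps=5, inner_lr=1e-2, meta_lr=0.1):
    """Each task is (k, q) loop-detector data from one city."""
    for _ in range(meta_steps):
        k, q = tasks[torch.randint(len(tasks), (1,)).item()]
        fast = copy.deepcopy(meta_model)               # city-specific adaptation copy
        opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            opt.zero_grad()
            task_loss(fast, k, q).backward()
            opt.step()
        with torch.no_grad():                          # Reptile meta-update
            for p_meta, p_fast in zip(meta_model.parameters(), fast.parameters()):
                p_meta += meta_lr * (p_fast - p_meta)
    return meta_model

if __name__ == "__main__":
    def make_city(k_crit):                             # toy synthetic "city" data
        k = torch.rand(200, 1) * 100
        q = torch.clamp(k, max=k_crit) * (1 - k / 120) + 0.5 * torch.randn_like(k)
        return k, q
    meta_model = reptile(MFDNet(), [make_city(40.0), make_city(60.0)])
```

Under this sketch, a new city with few detectors would only need a handful of inner-loop steps on its own (k, q) samples starting from the meta-learned weights, which is the data-scarcity scenario the abstract targets.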
Related papers
- Co-Training Vision Language Models for Remote Sensing Multi-task Learning [68.15604397741753]
Vision language models (VLMs) have achieved promising results in RS image understanding, grounding, and ultra-high-resolution (UHR) image reasoning.
We present RSCoVLM, a simple yet flexible VLM baseline for RS MTL.
We propose a unified dynamic-resolution strategy to address the diverse image scales inherent in RS imagery.
arXiv Detail & Related papers (2025-11-26T10:55:07Z)
- Meta-UAD: A Meta-Learning Scheme for User-level Network Traffic Anomaly Detection [15.038762892493219]
We propose Meta-UAD, a Meta-learning scheme for User-level network traffic Anomaly Detection.
We use the CICFlowMeter to extract 81 flow-level statistical features and remove some invalid ones.
Compared with existing models, the results further demonstrate the superiority of Meta-UAD with 15% - 43% gains in F1-score.
arXiv Detail & Related papers (2024-08-30T06:05:15Z)
- A Practitioner's Guide to Continual Multimodal Pretraining [83.63894495064855]
Multimodal foundation models serve numerous applications at the intersection of vision and language.
To keep models updated, research into continual pretraining mainly explores scenarios with either infrequent, indiscriminate updates on large-scale new data, or frequent, sample-level updates.
We introduce FoMo-in-Flux, a continual multimodal pretraining benchmark with realistic compute constraints and practical deployment requirements.
arXiv Detail & Related papers (2024-08-26T17:59:01Z)
- MTP: Advancing Remote Sensing Foundation Model via Multi-Task Pretraining [73.81862342673894]
Foundation models have reshaped the landscape of Remote Sensing (RS) by enhancing various image interpretation tasks.
However, transferring the pretrained models to downstream tasks may encounter task discrepancy due to the formulation of pretraining as image classification or object discrimination tasks.
We conduct multi-task supervised pretraining on the SAMRS dataset, encompassing semantic segmentation, instance segmentation, and rotated object detection.
Our models are finetuned on various RS downstream tasks, such as scene classification, horizontal and rotated object detection, semantic segmentation, and change detection.
arXiv Detail & Related papers (2024-03-20T09:17:22Z)
- Latent Task-Specific Graph Network Simulators [16.881339139068018]
Graph Network Simulators (GNSs) pose an efficient alternative to traditional physics-based simulators.
We frame mesh-based simulation as a meta-learning problem and use a recent Bayesian meta-learning method to improve the adaptability of GNSs to new scenarios.
We validate the effectiveness of our approach through various experiments, performing on par with or better than established baseline methods.
arXiv Detail & Related papers (2023-11-09T10:30:51Z)
- Unsupervised Representation Learning to Aid Semi-Supervised Meta Learning [16.534014215010757]
We propose a one-shot unsupervised meta-learning approach to learn latent representations of training samples.
A temperature-scaled cross-entropy loss is used in the inner loop of meta-learning to prevent overfitting (see the sketch after this entry).
The proposed method is model agnostic and can aid any meta-learning model to improve accuracy.
arXiv Detail & Related papers (2023-10-19T18:25:22Z)
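The entry above mentions a temperature-scaled cross-entropy loss in the meta-learning inner loop; the snippet below is a minimal sketch of such a loss, not the authors' implementation. The function name, the temperature value, and the commented usage pattern are assumptions for illustration.

```python
# Hypothetical sketch: temperature-scaled cross-entropy for a meta-learning inner loop.
import torch
import torch.nn.functional as F

def temperature_scaled_ce(logits: torch.Tensor, targets: torch.Tensor,
                          temperature: float = 4.0) -> torch.Tensor:
    """Dividing logits by T > 1 softens the softmax, damping overconfident
    inner-loop updates on a small support set and thus reducing overfitting."""
    return F.cross_entropy(logits / temperature, targets)

# Assumed inner-loop usage on a support set (names are placeholders):
#   loss = temperature_scaled_ce(model(support_x), support_y)
#   loss.backward(); inner_optimizer.step()
```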
- Convolutional Monge Mapping Normalization for learning on sleep data [63.22081662149488]
We propose a new method called Convolutional Monge Mapping Normalization (CMMN).
CMMN consists of filtering the signals to adapt their power spectral density (PSD) to a Wasserstein barycenter estimated on training data (see the sketch after this entry).
Numerical experiments on sleep EEG data show that CMMN leads to significant and consistent performance gains independent from the neural network architecture.
arXiv Detail & Related papers (2023-05-30T08:24:01Z)
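As a rough sketch of the spectral-normalization idea summarized in the CMMN entry above: estimate a barycenter PSD from training signals, then filter a new signal so that its PSD moves toward the barycenter. The barycenter form used here (the square of the averaged square-root PSDs, assuming stationary Gaussian signals) and the FIR filter design are assumptions made for illustration, not the CMMN reference implementation.

```python
# Hypothetical sketch of CMMN-style PSD normalization (assumptions, see note above).
import numpy as np
from scipy.signal import welch, firwin2, lfilter

def barycenter_psd(signals, fs, nperseg=256):
    """Square of the mean of sqrt-PSDs (assumed barycenter under a Gaussian model)."""
    sqrt_psds = []
    for x in signals:
        f, p = welch(x, fs=fs, nperseg=nperseg)
        sqrt_psds.append(np.sqrt(p))
    return f, np.mean(sqrt_psds, axis=0) ** 2

def cmmn_filter(x, f, bary_psd, fs, numtaps=129, nperseg=256):
    """FIR-filter x so its PSD is pushed toward the barycenter PSD."""
    _, p = welch(x, fs=fs, nperseg=nperseg)
    gain = np.sqrt(bary_psd / np.maximum(p, 1e-12))   # desired magnitude response
    freqs = f / (fs / 2)                              # normalize to [0, 1] for firwin2
    freqs[0], freqs[-1] = 0.0, 1.0
    h = firwin2(numtaps, freqs, gain)
    return lfilter(h, [1.0], x)

# Toy usage on synthetic signals:
fs = 100.0
train = [np.random.randn(10 * int(fs)) for _ in range(5)]
f, bary = barycenter_psd(train, fs)
x_adapted = cmmn_filter(np.random.randn(10 * int(fs)), f, bary, fs)
```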
- A transfer learning enhanced the physics-informed neural network model for vortex-induced vibration [0.0]
This paper proposes a transfer-learning-enhanced physics-informed neural network (PINN) model to study two-dimensional vortex-induced vibration (VIV).
Used in conjunction with transfer learning, the PINN improves learning efficiency and retains predictability on the target task by reusing common-characteristics knowledge from the source model, without requiring a large quantity of data.
arXiv Detail & Related papers (2021-12-29T08:20:23Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Meta-learning framework with applications to zero-shot time-series forecasting [82.61728230984099]
This work provides positive evidence for zero-shot time-series forecasting using a broad meta-learning framework.
Residual connections act as a meta-learning adaptation mechanism.
We show that it is viable to train a neural network on a source TS dataset and deploy it on a different target TS dataset without retraining.
arXiv Detail & Related papers (2020-02-07T16:39:43Z)