Dynamic Mixed Membership Stochastic Block Model for Weighted Labeled Networks
- URL: http://arxiv.org/abs/2304.05894v1
- Date: Wed, 12 Apr 2023 15:01:03 GMT
- Title: Dynamic Mixed Membership Stochastic Block Model for Weighted Labeled Networks
- Authors: Gaël Poux-Médard, Julien Velcin, Sabine Loudcher
- Abstract summary: A new family of Mixed Membership Stochastic Block Models (MMSBM) makes it possible to model static labeled networks under the assumption of mixed-membership clustering.
We show that our method significantly differs from existing approaches and can model more complex systems: dynamic labeled networks.
- Score: 3.5450828190071655
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Most real-world networks evolve over time. The existing literature proposes
models for dynamic networks that are either unlabeled or assumed to have a
single membership structure. On the other hand, a new family of Mixed
Membership Stochastic Block Models (MMSBM) makes it possible to model static
labeled networks under the assumption of mixed-membership clustering. In this
work, we propose to extend this latter class of models to infer dynamic labeled
networks under a mixed-membership assumption. Our approach takes the form of a
temporal prior on the model's parameters. It relies on the single assumption
that dynamics are not abrupt. We show that our method significantly differs
from existing approaches and can model more complex systems: dynamic labeled
networks. We demonstrate the robustness of our method with several experiments
on both synthetic and real-world datasets. A key advantage of our approach is
that it needs very little training data to yield good results. The performance
gain under such challenging conditions broadens the variety of possible
applications of automated learning tools, notably in the social sciences, which
comprise many fields where small datasets are a major obstacle to the adoption
of machine learning methods.
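The temporal prior admits a compact illustration. The sketch below encodes the "dynamics are not abrupt" assumption as a Gaussian random-walk penalty tying consecutive snapshots of the mixed-membership parameters together; the array shapes, function names, and the exact Gaussian form are our illustrative assumptions, not the paper's specification.

```python
import numpy as np

def smoothness_log_prior(theta, strength=10.0):
    """Gaussian random-walk prior over time-sliced parameters.

    theta: array of shape (T, N, K), the mixed-membership vector of each
    of N nodes over K groups at each of T time steps. The only assumption
    encoded is that memberships do not change abruptly between steps.
    """
    diffs = theta[1:] - theta[:-1]               # successive differences, (T-1, N, K)
    return -0.5 * strength * np.sum(diffs ** 2)  # log of a zero-mean Gaussian on the diffs

def map_objective(static_log_likelihood, theta, strength=10.0):
    # MAP-style objective: static MMSBM likelihood plus the temporal prior.
    return static_log_likelihood + smoothness_log_prior(theta, strength)
```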
Related papers
- Transferable Post-training via Inverse Value Learning [83.75002867411263]
We propose modeling changes at the logits level during post-training using a separate neural network (i.e., the value network).
After training this network on a small base model using demonstrations, this network can be seamlessly integrated with other pre-trained models during inference.
We demonstrate that the resulting value network has broad transferability across pre-trained models of different parameter sizes.
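A minimal sketch of how such a logit-level value network might be combined with a frozen pre-trained model at inference time; the callables and shapes are hypothetical, not the paper's API.

```python
import torch

@torch.no_grad()
def combined_logits(base_model, value_network, input_ids):
    # Both callables are assumed to map token ids to (batch, seq, vocab) logits.
    base = base_model(input_ids)       # frozen pre-trained model
    delta = value_network(input_ids)   # learned post-training shift at the logits level
    return base + delta                # decode from the corrected logits as usual
```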
arXiv Detail & Related papers (2024-10-28T13:48:43Z)
- Learning to Walk from Three Minutes of Real-World Data with Semi-structured Dynamics Models [9.318262213262866]
We introduce a novel framework for learning semi-structured dynamics models for contact-rich systems.
We make accurate long-horizon predictions with substantially less data than prior methods.
We validate our approach on a real-world Unitree Go1 quadruped robot.
arXiv Detail & Related papers (2024-10-11T18:11:21Z)
- Learning the mechanisms of network growth [42.1340910148224]
We propose a novel model-selection method for dynamic networks.
Data is generated by simulating nine state-of-the-art random graph models.
Proposed features are easy to compute, analytically tractable, and interpretable.
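The recipe this summary describes (simulate candidate growth mechanisms, compute cheap interpretable features, select a model with a classifier) can be sketched as follows; the two generators and four features below are illustrative stand-ins for the paper's nine models and its actual feature set.

```python
import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def graph_features(G):
    # Cheap, interpretable summaries of a graph (illustrative choices).
    degrees = np.array([d for _, d in G.degree()])
    return [degrees.mean(), degrees.std(), nx.transitivity(G), nx.density(G)]

# Simulate labeled graphs from candidate growth mechanisms...
generators = {"preferential": lambda: nx.barabasi_albert_graph(200, 2),
              "uniform": lambda: nx.erdos_renyi_graph(200, 0.02)}
X, y = [], []
for name, gen in generators.items():
    for _ in range(50):
        X.append(graph_features(gen()))
        y.append(name)

# ...then model selection reduces to classifying an observed graph's features.
clf = RandomForestClassifier(random_state=0).fit(X, y)
observed = nx.barabasi_albert_graph(200, 2)
print(clf.predict([graph_features(observed)]))
```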
arXiv Detail & Related papers (2024-03-31T20:38:59Z)
- Learning Latent Dynamics via Invariant Decomposition and (Spatio-)Temporal Transformers [0.6767885381740952]
We propose a method for learning dynamical systems from high-dimensional empirical data.
We focus on the setting in which data are available from multiple different instances of a system.
We study behaviour through simple theoretical analyses and extensive experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2023-06-21T07:52:07Z)
- Dynamic Latent Separation for Deep Learning [67.62190501599176]
A core problem in machine learning is to learn expressive latent variables for model prediction on complex data.
Here, we develop an approach that improves expressiveness, provides partial interpretation, and is not restricted to specific applications.
arXiv Detail & Related papers (2022-10-07T17:56:53Z)
- FiLM-Ensemble: Probabilistic Deep Learning via Feature-wise Linear Modulation [69.34011200590817]
We introduce FiLM-Ensemble, a deep, implicit ensemble method based on the concept of Feature-wise Linear Modulation.
By modulating the network activations of a single deep network with FiLM, one obtains a model ensemble with high diversity.
We show that FiLM-Ensemble outperforms other implicit ensemble methods, and it comes very close to the upper bound of an explicit ensemble of networks.
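A minimal sketch of the idea: one shared backbone whose hidden activations are modulated by M per-member FiLM parameters (gamma, beta), so M diverse predictions come almost for free. The layer sizes, depth, and prediction averaging are our simplifications, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FiLMEnsemble(nn.Module):
    """One shared backbone, M sets of FiLM parameters: member m rescales
    hidden activations as gamma_m * h + beta_m."""
    def __init__(self, dim_in, dim_hidden, dim_out, members=4):
        super().__init__()
        self.fc1 = nn.Linear(dim_in, dim_hidden)
        self.fc2 = nn.Linear(dim_hidden, dim_out)
        self.gamma = nn.Parameter(torch.ones(members, dim_hidden))
        self.beta = nn.Parameter(torch.zeros(members, dim_hidden))

    def forward(self, x):                                        # x: (batch, dim_in)
        h = torch.relu(self.fc1(x))                              # (batch, hidden)
        h = self.gamma[:, None, :] * h + self.beta[:, None, :]   # (members, batch, hidden)
        return self.fc2(h).mean(dim=0)                           # average the member predictions
```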
arXiv Detail & Related papers (2022-05-31T18:33:15Z)
- Symplectic Momentum Neural Networks -- Using Discrete Variational Mechanics as a prior in Deep Learning [7.090165638014331]
This paper introduces Symplectic Momentum Neural Networks (SyMo) as models derived from a discrete formulation of mechanics for non-separable mechanical systems.
We show that such a combination not only allows these models to learn from limited data but also provides them with the capability of preserving the symplectic form, leading to better long-term behaviour.
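For reference, the discrete variational mechanics such models build on is standard (the discrete Euler-Lagrange equations and discrete Legendre transforms, as in Marsden & West's treatment); the summary below uses textbook notation and is not taken from the paper itself.

```latex
% A discrete Lagrangian L_d approximates the action over one time step h:
%   L_d(q_k, q_{k+1}) \approx \int_{t_k}^{t_k + h} L(q, \dot q)\,\mathrm{d}t.
\begin{align}
  D_2 L_d(q_{k-1}, q_k) + D_1 L_d(q_k, q_{k+1}) &= 0
    && \text{(discrete Euler--Lagrange equations)} \\
  p_k = -D_1 L_d(q_k, q_{k+1}), \qquad p_{k+1} &= D_2 L_d(q_k, q_{k+1})
    && \text{(discrete Legendre transforms)}
\end{align}
```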
arXiv Detail & Related papers (2022-01-20T16:33:19Z)
- Constructing Neural Network-Based Models for Simulating Dynamical Systems [59.0861954179401]
Data-driven modeling is an alternative paradigm that seeks to learn an approximation of the dynamics of a system using observations of the true system.
This paper provides a survey of the different ways to construct models of dynamical systems using neural networks.
In addition to the basic overview, we review the related literature and outline the most significant challenges from numerical simulations that this modeling paradigm must overcome.
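The basic recipe the survey covers can be stated in a few lines: fit a network f so that observed transitions satisfy x_{t+1} ≈ x_t + h·f(x_t). The sketch below is a generic residual one-step model of our own choosing, not any specific architecture from the survey.

```python
import torch
import torch.nn as nn

# Generic residual one-step model: fit f so that x_{t+1} ~ x_t + h * f(x_t).
f = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))
opt = torch.optim.Adam(f.parameters(), lr=1e-3)

def train_step(x_t, x_next, h=0.01):
    pred = x_t + h * f(x_t)               # forward-Euler-style prediction
    loss = ((pred - x_next) ** 2).mean()  # one-step prediction error
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```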
arXiv Detail & Related papers (2021-11-02T10:51:42Z)
- Using Data Assimilation to Train a Hybrid Forecast System that Combines Machine-Learning and Knowledge-Based Components [52.77024349608834]
We consider the problem of data-assisted forecasting of chaotic dynamical systems when the available data consists of noisy partial measurements.
We show that by using partial measurements of the state of the dynamical system, we can train a machine learning model to improve predictions made by an imperfect knowledge-based model.
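The hybrid pattern itself is simple to sketch: an imperfect knowledge-based model advances the state and a trained ML component corrects its error. Everything below is a generic illustration; the paper's specific contribution (using data assimilation on noisy partial measurements to build training targets) is not reproduced here.

```python
def hybrid_step(x, physics_step, ml_correction):
    """One step of a hybrid forecaster.

    physics_step: the imperfect knowledge-based model (a callable).
    ml_correction: a trained model of the physics model's residual error.
    """
    x_phys = physics_step(x)               # mechanistic (imperfect) prediction
    return x_phys + ml_correction(x_phys)  # learned residual correction

# Iterating hybrid_step from an initial state yields a long-horizon forecast.
```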
arXiv Detail & Related papers (2021-02-15T19:56:48Z)
- The Role of Isomorphism Classes in Multi-Relational Datasets [6.419762264544509]
We show that isomorphism leakage overestimates performance in multi-relational inference.
We propose isomorphism-aware synthetic benchmarks for model evaluation.
We also demonstrate that isomorphism classes can be utilised through a simple prioritisation scheme.
arXiv Detail & Related papers (2020-09-30T12:15:24Z)
- Automated and Formal Synthesis of Neural Barrier Certificates for Dynamical Models [70.70479436076238]
We introduce an automated, formal, counterexample-based approach to synthesise Barrier Certificates (BC).
The approach is underpinned by an inductive framework, which manipulates a candidate BC structured as a neural network, and a sound verifier, which either certifies the candidate's validity or generates counter-examples.
The outcomes show that we can synthesise sound BCs up to two orders of magnitude faster, with a particularly stark speedup in the verification engine.
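The inductive framework described above has the familiar counterexample-guided (CEGIS) shape, sketched below; `learner` and `verifier` are assumed interfaces, not the paper's actual API.

```python
def synthesise_barrier(learner, verifier, max_rounds=100):
    """Counterexample-guided synthesis loop.

    learner.propose(samples) -> candidate neural barrier certificate
    verifier.check(candidate) -> (True, None) if the barrier conditions
        hold soundly, else (False, counterexample_state)
    """
    samples = []
    for _ in range(max_rounds):
        candidate = learner.propose(samples)
        ok, counterexample = verifier.check(candidate)
        if ok:
            return candidate              # formally certified BC
        samples.append(counterexample)    # grow the training set and retry
    return None                           # no certificate found within budget
```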
arXiv Detail & Related papers (2020-07-07T07:39:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.