Influence Estimation and Maximization via Neural Mean-Field Dynamics
- URL: http://arxiv.org/abs/2106.02608v1
- Date: Thu, 3 Jun 2021 00:02:05 GMT
- Title: Influence Estimation and Maximization via Neural Mean-Field Dynamics
- Authors: Shushan He, Hongyuan Zha and Xiaojing Ye
- Abstract summary: We propose a novel learning framework using neural mean-field (NMF) dynamics for inference and estimation problems.
Our framework can simultaneously learn the structure of the diffusion network and the evolution of node infection probabilities.
- Score: 60.91291234832546
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel learning framework using neural mean-field (NMF) dynamics
for inference and estimation problems on heterogeneous diffusion networks. Our
new framework leverages the Mori-Zwanzig formalism to obtain an exact evolution
equation of the individual node infection probabilities, which renders a delay
differential equation with memory integral approximated by learnable time
convolution operators. Directly using information diffusion cascade data, our
framework can simultaneously learn the structure of the diffusion network and
the evolution of node infection probabilities. Connections between parameter
learning and optimal control are also established, leading to a rigorous and
implementable algorithm for training NMF. Moreover, we show that the projected
gradient descent method can be employed to solve the challenging influence
maximization problem, where the gradient is computed extremely fast by
integrating NMF forward in time just once in each iteration. Extensive
empirical studies show that our approach is versatile and robust to variations
of the underlying diffusion network models, and significantly outperforms
existing approaches in accuracy and efficiency on both synthetic and real-world
data.
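As a rough illustration of the projected-gradient idea described above, the sketch below runs projected gradient ascent on a toy differentiable influence surrogate. This is a hedged illustration only: the paper obtains the gradient from a single forward integration of the learned NMF dynamics per iteration, whereas here a simple one-step spread proxy and finite differences stand in for it, and all names (`project_capped_simplex`, `influence_surrogate`) are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (NOT the paper's method): projected gradient ascent
# for influence maximization with a relaxed seed-selection vector x in
# {x : 0 <= x_i <= 1, sum(x) <= budget}.

def project_capped_simplex(x, budget):
    """Euclidean projection onto {x : 0 <= x_i <= 1, sum(x) <= budget}."""
    x = np.clip(x, 0.0, 1.0)
    if x.sum() <= budget:
        return x
    # Bisect on a shift tau so that sum(clip(x - tau, 0, 1)) == budget.
    lo, hi = 0.0, x.max()
    for _ in range(50):
        tau = 0.5 * (lo + hi)
        if np.clip(x - tau, 0.0, 1.0).sum() > budget:
            lo = tau
        else:
            hi = tau
    return np.clip(x - 0.5 * (lo + hi), 0.0, 1.0)

def influence_surrogate(x, A):
    """Toy proxy for influence: expected one-step spread 1 - prod_j(1 - A_ij x_j)."""
    p = 1.0 - np.prod(1.0 - A * x[None, :], axis=1)
    return p.sum()

def grad_surrogate(x, A, eps=1e-6):
    # Central finite differences keep the sketch short; the paper computes
    # the gradient from one forward pass of the NMF dynamics instead.
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (influence_surrogate(x + e, A) - influence_surrogate(x - e, A)) / (2 * eps)
    return g

rng = np.random.default_rng(0)
A = rng.uniform(0.0, 0.3, size=(8, 8))   # toy influence-weight matrix
x = np.full(8, 0.5)                      # relaxed seed-selection vector
for _ in range(100):
    x = project_capped_simplex(x + 0.5 * grad_surrogate(x, A), budget=2.0)
```

After the loop, `x` is a feasible relaxed seed vector (entries in [0, 1], total mass within the budget) that can be rounded to a discrete seed set.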
Related papers
- Back to Bayesics: Uncovering Human Mobility Distributions and Anomalies with an Integrated Statistical and Neural Framework [14.899157568336731]
DeepBayesic is a novel framework that integrates Bayesian principles with deep neural networks to model the underlying distributions.
We evaluate our approach on several mobility datasets, demonstrating significant improvements over state-of-the-art anomaly detection methods.
arXiv Detail & Related papers (2024-10-01T19:02:06Z) - Neural Flow Diffusion Models: Learnable Forward Process for Improved Diffusion Modelling [2.1779479916071067]
We introduce a novel framework that enhances diffusion models by supporting a broader range of forward processes.
We also propose a novel parameterization technique for learning the forward process.
Results underscore NFDM's versatility and its potential for a wide range of applications.
arXiv Detail & Related papers (2024-04-19T15:10:54Z) - Neural Network with Local Converging Input (NNLCI) for Supersonic Flow Problems with Unstructured Grids [0.9152133607343995]
We develop a neural network with local converging input (NNLCI) for high-fidelity prediction using unstructured data.
As a validation case, the NNLCI method is applied to study inviscid supersonic flows in channels with bumps.
arXiv Detail & Related papers (2023-10-23T19:03:37Z) - Accelerating Scalable Graph Neural Network Inference with Node-Adaptive Propagation [80.227864832092]
Graph neural networks (GNNs) have exhibited exceptional efficacy in a diverse array of applications.
The sheer size of large-scale graphs presents a significant challenge to real-time inference with GNNs.
We propose an online propagation framework and two novel node-adaptive propagation methods.
arXiv Detail & Related papers (2023-10-17T05:03:00Z) - Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have effectively been demonstrated in solving forward and inverse differential equation problems.
However, PINNs are prone to training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
arXiv Detail & Related papers (2023-03-03T08:17:47Z) - Robust Learning via Ensemble Density Propagation in Deep Neural Networks [6.0122901245834015]
We formulate the problem of density propagation through layers of a deep neural network (DNN) and solve it using an Ensemble Density propagation scheme.
Experiments using MNIST and CIFAR-10 datasets show a significant improvement in the robustness of the trained models to random noise and adversarial attacks.
arXiv Detail & Related papers (2021-11-10T21:26:08Z) - An Ode to an ODE [78.97367880223254]
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where the time-dependent parameters of the main flow evolve according to a matrix flow on the orthogonal group O(d).
This nested system of two flows improves the stability and effectiveness of training and provably solves the gradient vanishing/explosion problem.
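The matrix flow on O(d) mentioned above can be sketched with a simple orthogonality-preserving integrator. This is a hedged illustration, not ODEtoODE's actual parameterization: the Cayley-map step and the random generator below are our own illustrative choices, chosen only to show how a flow can stay on the orthogonal group.

```python
import numpy as np

# Illustrative sketch: integrate a matrix flow that stays on O(d) by
# stepping with the Cayley transform of a skew-symmetric generator.
# (The Cayley transform of a skew-symmetric matrix is exactly orthogonal.)

def cayley_step(W, G, h=0.1):
    """One orthogonality-preserving step driven by the skew part of G."""
    d = W.shape[0]
    A = 0.5 * (G - G.T)                   # skew-symmetric generator
    I = np.eye(d)
    # Cayley transform Q = (I + h/2 A)^{-1} (I - h/2 A) is orthogonal.
    Q = np.linalg.solve(I + 0.5 * h * A, I - 0.5 * h * A)
    return Q @ W                          # product of orthogonal matrices

rng = np.random.default_rng(1)
W = np.eye(4)                             # start on the group
for _ in range(20):
    W = cayley_step(W, rng.normal(size=(4, 4)))
# W.T @ W stays equal to the identity up to floating-point error.
```

In ODEtoODE the generator itself would be produced by a second (learned) flow; here a random matrix merely stands in for it.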
arXiv Detail & Related papers (2020-06-19T22:05:19Z) - Network Diffusions via Neural Mean-Field Dynamics [52.091487866968286]
We propose a novel learning framework for inference and estimation problems of diffusion on networks.
Our framework is derived from the Mori-Zwanzig formalism to obtain an exact evolution of the node infection probabilities.
Our approach is versatile and robust to variations of the underlying diffusion network models.
arXiv Detail & Related papers (2020-06-16T18:45:20Z) - Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization at large scale, where the predictive model is a deep neural network.
Our method requires far fewer communication rounds while retaining theoretical guarantees on convergence.
Experiments on several datasets demonstrate the method's effectiveness and corroborate the theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.