PrEF: Percolation-based Evolutionary Framework for the
diffusion-source-localization problem in large networks
- URL: http://arxiv.org/abs/2205.07422v2
- Date: Thu, 19 May 2022 03:33:51 GMT
- Authors: Yang Liu, Xiaoqi Wang, Xi Wang, Zhen Wang, Jürgen Kurths
- Abstract summary: We formulate a candidate set that is guaranteed to contain the diffusion source, and propose the Percolation-based Evolutionary Framework (PrEF) to minimize that set.
PrEF is built on network percolation and an evolutionary algorithm.
Our results show that the approach achieves a much smaller candidate set than the state of the art in almost all cases.
- Score: 11.998014357342333
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We assume that the state of a number of nodes in a network can be
investigated if necessary, and study which configuration of those nodes
facilitates a better solution to the diffusion-source-localization (DSL)
problem. In particular, we formulate a candidate set that is guaranteed to
contain the diffusion source, and propose a method, the Percolation-based
Evolutionary Framework (PrEF), to minimize that set. One can then conduct a
more intensive investigation on only a few nodes to pinpoint the source. To
achieve this, we first demonstrate that there are similarities between the
DSL problem and the network immunization problem: minimizing the candidate
set is equivalent to minimizing the order parameter if the observer set is
viewed as the removal node set. PrEF is therefore built on network
percolation and an evolutionary algorithm. The effectiveness of the proposed
method is validated on both model and empirical networks under varied
circumstances. Our results show that the approach achieves a much smaller
candidate set than the state of the art in almost all cases. It is also more
stable, i.e., its performance is similar across varied infection
probabilities, diffusion models, and outbreak ranges. More importantly, it
may provide a new framework for tackling the DSL problem in extremely large
networks.
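The key reduction in the abstract, viewing the observer set as a removal set and minimizing the percolation order parameter (the size of the largest connected component of the residual network), can be sketched as follows. This is not the authors' implementation: the toy graph, the brute-force search standing in for the evolutionary algorithm, and all function names are illustrative assumptions.

```python
# Minimal sketch of the observer-set/removal-set correspondence: the best
# observer set of size k is the one whose removal leaves the smallest
# largest connected component (the percolation order parameter).
from itertools import combinations

def largest_component(adj, removed):
    """Size of the largest connected component after removing `removed`."""
    removed = set(removed)
    seen = set()
    best = 0
    for start in adj:
        if start in removed or start in seen:
            continue
        # Depth-first traversal of one component of the residual graph.
        stack, comp = [start], 0
        seen.add(start)
        while stack:
            u = stack.pop()
            comp += 1
            for v in adj[u]:
                if v not in removed and v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, comp)
    return best

def best_observer_set(adj, k):
    """Brute-force stand-in for the evolutionary search: the k-node
    observer set minimizing the residual largest component."""
    return min(combinations(adj, k), key=lambda s: largest_component(adj, s))

# Toy network: two triangles bridged by node 4, a natural observer choice.
adj = {
    0: [1, 2], 1: [0, 2], 2: [0, 1, 4],
    4: [2, 5], 5: [4, 6, 7], 6: [5, 7], 7: [5, 6],
}
obs = best_observer_set(adj, 1)
print(obs, largest_component(adj, obs))  # → (4,) 3
```

Removing the bridge node 4 splits the toy network into two components of size 3, so any diffusion source is confined to a small candidate set on one side of the observer; PrEF replaces the brute-force enumeration with an evolutionary search so the idea scales to large networks.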
Related papers
- DiffSG: A Generative Solver for Network Optimization with Diffusion Model [75.27274046562806]
Diffusion generative models can consider a broader range of solutions and exhibit stronger generalization by learning parameters.
We propose a new framework, which leverages intrinsic distribution learning of diffusion generative models to learn high-quality solutions.
arXiv Detail & Related papers (2024-08-13T07:56:21Z) - Phasic Content Fusing Diffusion Model with Directional Distribution
Consistency for Few-Shot Model Adaption [73.98706049140098]
We propose a novel phasic content fusing few-shot diffusion model with directional distribution consistency loss.
Specifically, we design a phasic training strategy with phasic content fusion to help our model learn content and style information when t is large.
Finally, we propose a cross-domain structure guidance strategy that enhances structure consistency during domain adaptation.
arXiv Detail & Related papers (2023-09-07T14:14:11Z) - PDE+: Enhancing Generalization via PDE with Adaptive Distributional
Diffusion [66.95761172711073]
The generalization of neural networks is a central challenge in machine learning.
We propose to enhance it directly through the underlying function of neural networks, rather than focusing on adjusting input data.
We put this theoretical framework into practice as $\textbf{PDE}+$ ($\textbf{PDE}$ with $\textbf{A}$daptive $\textbf{D}$istributional $\textbf{D}$iffusion).
arXiv Detail & Related papers (2023-05-25T08:23:26Z) - Finding Nontrivial Minimum Fixed Points in Discrete Dynamical Systems [29.7237944669855]
We formulate a novel optimization problem of finding a nontrivial fixed point of the system with the minimum number of affected nodes.
To cope with this computational intractability, we identify several special cases for which the problem can be solved efficiently.
For solving the problem on larger networks, we propose a general framework along with greedy selection methods.
arXiv Detail & Related papers (2023-01-06T14:46:01Z) - Provably Efficient Reinforcement Learning for Online Adaptive Influence
Maximization [53.11458949694947]
We consider an adaptive version of the content-dependent online influence maximization problem, where seed nodes are sequentially activated based on real-time feedback.
Our algorithm maintains a network model estimate and selects seeds adaptively, exploring the social network while optimistically improving the optimal policy.
arXiv Detail & Related papers (2022-06-29T18:17:28Z) - Network Inference and Influence Maximization from Samples [20.916163957596577]
We study the task of selecting a small number of seed nodes in a social network to maximize the spread of the influence from these seeds.
We provide a novel solution to the network inference problem, that is, learning diffusion parameters and the network structure from cascade data.
Our IMS algorithms enhance the learning-and-then-optimization approach by allowing a constant approximation ratio even when the diffusion parameters are hard to learn.
arXiv Detail & Related papers (2021-06-07T08:06:36Z) - Effective Version Space Reduction for Convolutional Neural Networks [61.84773892603885]
In active learning, sampling bias could pose a serious inconsistency problem and hinder the algorithm from finding the optimal hypothesis.
We examine active learning with convolutional neural networks through the principled lens of version space reduction.
arXiv Detail & Related papers (2020-06-22T17:40:03Z) - Large Scale Many-Objective Optimization Driven by Distributional
Adversarial Networks [1.2461503242570644]
We propose a novel algorithm based on the RVEA framework that uses Distributional Adversarial Networks (DAN) to generate new offspring.
The proposed algorithm is tested on 9 benchmark problems from the large-scale many-objective problem (LSMOP) suite.
arXiv Detail & Related papers (2020-03-16T04:14:15Z) - Joint Wasserstein Distribution Matching [89.86721884036021]
The joint distribution matching (JDM) problem, which aims to learn bidirectional mappings matching the joint distributions of two domains, arises in many machine learning and computer vision applications.
We propose to address JDM problem by minimizing the Wasserstein distance of the joint distributions in two domains.
We then propose an important theorem to reduce the intractable problem into a simple optimization problem, and develop a novel method to solve it.
arXiv Detail & Related papers (2020-03-01T03:39:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.