Learning hidden cascades via classification
- URL: http://arxiv.org/abs/2505.11228v2
- Date: Thu, 12 Jun 2025 13:37:57 GMT
- Title: Learning hidden cascades via classification
- Authors: Derrick Gilchrist Edward Manoharan, Anubha Goel, Alexandros Iosifidis, Henri Hansen, Juho Kanniainen
- Abstract summary: We propose a partial observability-aware Machine Learning framework to learn the characteristics of the spreading model. We evaluate our method on two types of synthetic networks and extend the study to a real-world insider trading network.
- Score: 64.51931908932421
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The spreading dynamics in social networks are often studied under the assumption that individuals' statuses, whether informed or infected, are fully observable. However, in many real-world situations such statuses remain unobservable, even though they are crucial for determining an individual's potential to spread the infection further. While this final status is hidden, intermediate indicators such as symptoms of infection are observable and provide important insights into the spread process. We propose a partial observability-aware Machine Learning framework to learn the characteristics of the spreading model. We term the method Distribution Classification, which utilizes the power of classifiers to infer the underlying transmission dynamics. We evaluate our method on two types of synthetic networks and extend the study to a real-world insider trading network. Results show that the method performs well, especially on complex networks with high cyclic connectivity, supporting its utility in analyzing real-world spreading phenomena where direct observation of individual statuses is not possible.
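To make the idea concrete, the sketch below shows one minimal way a "distribution classification" loop could look for a hidden cascade: cascades are simulated under a few candidate transmission probabilities, only noisy symptom indicators are retained as observables, and an off-the-shelf classifier is trained to tell the candidate regimes apart from summary features of those observables. Everything here is a hedged illustration under assumed choices (the `simulate_hidden_cascade` helper, the `symptom_rate` parameter, the Barabási-Albert network, and the random-forest classifier are all hypothetical), not the authors' actual implementation.

```python
# Illustrative sketch only: distribution classification for a hidden cascade.
# All names and parameter choices are hypothetical assumptions.
import numpy as np
import networkx as nx
from sklearn.ensemble import RandomForestClassifier

def simulate_hidden_cascade(graph, beta, symptom_rate, rng, steps=20):
    """Independent-cascade-style spread where infection status is hidden;
    only a noisy per-node 'symptom' indicator is observable."""
    infected = {int(rng.choice(list(graph.nodes)))}
    for _ in range(steps):
        new = set()
        for u in infected:
            for v in graph.neighbors(u):
                if v not in infected and rng.random() < beta:
                    new.add(v)
        infected |= new
    # Observable layer: each infected node shows symptoms with some probability.
    return np.array([1 if (n in infected and rng.random() < symptom_rate) else 0
                     for n in graph.nodes])

def observable_features(symptoms, graph):
    """Summary statistics of the observable indicator distribution."""
    degs = np.array([graph.degree(n) for n in graph.nodes])
    return np.array([symptoms.mean(),
                     symptoms.std(),
                     (symptoms * degs).sum() / max(degs.sum(), 1)])

rng = np.random.default_rng(0)
G = nx.barabasi_albert_graph(200, 3, seed=0)

# Candidate transmission probabilities we want to distinguish (the "classes").
candidate_betas = [0.05, 0.15, 0.30]
X, y = [], []
for label, beta in enumerate(candidate_betas):
    for _ in range(300):
        s = simulate_hidden_cascade(G, beta, symptom_rate=0.6, rng=rng)
        X.append(observable_features(s, G))
        y.append(label)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# For a new (or held-out) observation, the predicted class points to the
# transmission regime whose simulated indicator distribution matches best.
obs = observable_features(simulate_hidden_cascade(G, 0.15, 0.6, rng), G)
print("inferred beta:", candidate_betas[int(clf.predict([obs])[0])])
```

In this toy setup, the classifier never sees the hidden infection statuses themselves; it discriminates between candidate spreading regimes purely from the distribution of observable indicators, which is the general spirit of the approach described in the abstract.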
Related papers
- Distinguishing mechanisms of social contagion from local network view [0.02499907423888048]
Multiple adoption rules may coexist even within the same social contagion process.
Our goal is to understand whether coexisting adoption mechanisms can be distinguished from a microscopic view.
This study offers a novel perspective on observing propagation processes at the egocentric level.
arXiv Detail & Related papers (2024-06-26T17:49:24Z) - Understanding Activation Patterns in Artificial Neural Networks by Exploring Stochastic Processes [0.0]
We propose utilizing the framework of stochastic processes, which has been underutilized thus far.
We focus solely on activation frequency, leveraging neuroscience techniques used for real neuron spike trains.
We derive parameters describing activation patterns in each network, revealing consistent differences across architectures and training sets.
arXiv Detail & Related papers (2023-08-01T22:12:30Z) - Universal Scaling Laws of Absorbing Phase Transitions in Artificial Deep Neural Networks [0.8932296777085644]
Conventional artificial deep neural networks operating near the phase boundary of the signal propagation dynamics, also known as the edge of chaos, exhibit universal scaling laws of absorbing phase transitions.
We exploit the fully deterministic nature of the propagation dynamics to elucidate an analogy between a signal collapse in the neural networks and an absorbing state.
arXiv Detail & Related papers (2023-07-05T13:39:02Z) - ShapeWordNet: An Interpretable Shapelet Neural Network for Physiological Signal Classification [16.82411861562806]
We propose a more effective and interpretable scheme tailored for the physiological signal classification task.
We exploit the time series shapelets to extract prominent local patterns and perform interpretable sequence discretization.
We name our method ShapeWordNet and conduct extensive experiments on three real-world datasets to investigate its effectiveness.
arXiv Detail & Related papers (2023-02-10T02:30:31Z) - Interventional Causal Representation Learning [75.18055152115586]
Causal representation learning seeks to extract high-level latent factors from low-level sensory data.
Can interventional data facilitate causal representation learning?
We show that interventional data often carries geometric signatures of the latent factors' support.
arXiv Detail & Related papers (2022-09-24T04:59:03Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Health Status Prediction with Local-Global Heterogeneous Behavior Graph [69.99431339130105]
Estimation of health status can be achieved with various kinds of data streams continuously collected from wearable sensors.
We propose to model the behavior-related multi-source data streams with a local-global graph.
We conduct experiments on the StudentLife dataset, and extensive results demonstrate the effectiveness of our proposed model.
arXiv Detail & Related papers (2021-03-23T11:10:04Z) - Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z) - The Hidden Uncertainty in a Neural Network's Activations [105.4223982696279]
The distribution of a neural network's latent representations has been successfully used to detect out-of-distribution (OOD) data.
This work investigates whether this distribution correlates with a model's epistemic uncertainty, thus indicating its ability to generalise to novel inputs.
arXiv Detail & Related papers (2020-12-05T17:30:35Z) - Joint Inference of Diffusion and Structure in Partially Observed Social Networks Using Coupled Matrix Factorization [3.399624105745357]
In this paper, a model is learned from partially observed data to infer unobserved diffusion and structure networks.
The proposed method exploits the interrelations among node links and cascade processes by learning low-dimensional latent factors.
Experiments on these synthetic and real-world datasets show that the proposed method successfully detects invisible social behaviors, predicts links, and identifies latent features.
arXiv Detail & Related papers (2020-10-03T17:48:57Z) - Vulnerability Under Adversarial Machine Learning: Bias or Variance? [77.30759061082085]
We investigate the effect of adversarial machine learning on the bias and variance of a trained deep neural network.
Our analysis sheds light on why the deep neural networks have poor performance under adversarial perturbation.
We introduce a new adversarial machine learning algorithm with lower computational complexity than well-known adversarial machine learning strategies.
arXiv Detail & Related papers (2020-08-01T00:58:54Z) - Relationship between manifold smoothness and adversarial vulnerability in deep learning with local errors [2.7834038784275403]
We study the origin of the adversarial vulnerability in artificial neural networks.
Our study reveals that a high generalization accuracy requires a relatively fast power-law decay of the eigen-spectrum of hidden representations.
arXiv Detail & Related papers (2020-07-04T08:47:51Z)