Empirically Classifying Network Mechanisms
- URL: http://arxiv.org/abs/2012.15863v2
- Date: Mon, 4 Jan 2021 16:57:25 GMT
- Title: Empirically Classifying Network Mechanisms
- Authors: Ryan E. Langendorf and Matthew G. Burgess
- Abstract summary: Network models are used to study interconnected systems across many physical, biological, and social disciplines.
We introduce a simple empirical approach which can mechanistically classify arbitrary network data.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Network models are used to study interconnected systems across many physical,
biological, and social disciplines. Such models often assume a particular
network-generating mechanism, which when fit to data produces estimates of
mechanism-specific parameters that describe how systems function. For instance,
a social network model might assume new individuals connect to others with
probability proportional to their number of pre-existing connections
('preferential attachment'), and then estimate the disparity in interactions
between famous and obscure individuals with similar qualifications. However,
without a means of testing the relevance of the assumed mechanism, conclusions
from such models could be misleading. Here we introduce a simple empirical
approach which can mechanistically classify arbitrary network data. Our
approach compares empirical networks to model networks from a user-provided
candidate set of mechanisms, and classifies each network--with high
accuracy--as originating from either one of the mechanisms or none of them. We
tested 373 empirical networks against five of the most widely studied network
mechanisms and found that most (228) were unlike any of these mechanisms. This
raises the possibility that some empirical networks arise from mixtures of
mechanisms. We show that mixtures are often unidentifiable because different
mixtures can produce functionally equivalent networks. In such systems, which
are governed by multiple mechanisms, our approach can still accurately predict
out-of-sample functional properties.
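The classification idea in the abstract can be sketched in code: simulate networks from each candidate mechanism, summarize them with features, and assign an empirical network to the nearest mechanism, or to "none" if it is far from all of them. This is an illustrative toy, not the authors' implementation; the two generators, the degree-based feature set, and the distance threshold below are all assumptions chosen for brevity.

```python
import random
from statistics import mean, pstdev

def erdos_renyi(n, p, rng):
    """Erdős-Rényi G(n, p): include each possible edge independently with prob p."""
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                edges.add((i, j))
    return n, edges

def preferential_attachment(n, m, rng):
    """Growth model: each new node links to m existing nodes chosen with
    probability proportional to their current degree (Barabási-Albert style)."""
    edges = set()
    targets = list(range(m))      # seed nodes for the first arrival
    weighted = []                 # node list repeated in proportion to degree
    for new in range(m, n):
        for t in targets:
            edges.add((min(new, t), max(new, t)))
        weighted.extend(targets)
        weighted.extend([new] * m)
        targets = []
        while len(targets) < m:   # sample m distinct degree-weighted targets
            cand = rng.choice(weighted)
            if cand not in targets:
                targets.append(cand)
    return n, edges

def features(graph):
    """Degree-based summary statistics (an assumed, simplified feature set)."""
    n, edges = graph
    deg = [0] * n
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    return (mean(deg), pstdev(deg), max(deg))

def classify(empirical, mechanisms, rng, reps=30, threshold=3.0):
    """Compare an empirical network's features against feature clouds simulated
    from each candidate mechanism; return the nearest mechanism, or 'none' when
    the network is far from every candidate (distance in pooled-spread units)."""
    emp = features(empirical)
    best_name, best_dist = "none", float("inf")
    for name, gen in mechanisms.items():
        cloud = [features(gen(rng)) for _ in range(reps)]
        dist = 0.0
        for k in range(len(emp)):
            vals = [c[k] for c in cloud]
            spread = pstdev(vals) or 1.0
            dist += abs(emp[k] - mean(vals)) / spread
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold * len(emp) else "none"
```

For example, `classify(erdos_renyi(100, 0.05, rng), {"er": lambda r: erdos_renyi(100, 0.05, r), "pa": lambda r: preferential_attachment(100, 3, r)}, rng)` compares one random graph against both candidate mechanisms. The "none" option is the key design point from the paper: a network unlike every candidate is flagged rather than forced into the closest class.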
Related papers
- Network Causal Effect Estimation In Graphical Models Of Contagion And Latent Confounding [2.654975444537834]
A key question in many network studies is whether the observed correlations between units are primarily due to contagion or latent confounding.
We propose network causal effect estimation strategies that provide unbiased and consistent estimates.
We evaluate the effectiveness of our methods with synthetic data and the validity of our assumptions using real-world networks.
arXiv Detail & Related papers (2024-11-02T22:12:44Z) - Leveraging advances in machine learning for the robust classification and interpretation of networks [0.0]
Simulation approaches involve selecting a suitable network generative model such as Erdős-Rényi or small-world.
We utilize advances in interpretable machine learning to classify simulated networks by our generative models based on various network attributes.
arXiv Detail & Related papers (2024-03-20T00:24:23Z) - Competition of Mechanisms: Tracing How Language Models Handle Facts and Counterfactuals [82.68757839524677]
Interpretability research aims to bridge the gap between empirical success and our scientific understanding of large language models (LLMs).
We propose a formulation of competition of mechanisms, which focuses on the interplay of multiple mechanisms instead of individual mechanisms.
Our findings show traces of the mechanisms and their competition across various model components and reveal attention positions that effectively control the strength of certain mechanisms.
arXiv Detail & Related papers (2024-02-18T17:26:51Z) - Going Beyond Neural Network Feature Similarity: The Network Feature Complexity and Its Interpretation Using Category Theory [64.06519549649495]
We provide the definition of what we call functionally equivalent features.
These features produce equivalent output under certain transformations.
We propose an efficient algorithm named Iterative Feature Merging.
arXiv Detail & Related papers (2023-10-10T16:27:12Z) - Fitting Low-rank Models on Egocentrically Sampled Partial Networks [4.111899441919165]
We propose an approach to fit general low-rank models for egocentrically sampled networks.
This method offers the first theoretical guarantee for egocentric partial network estimation.
We evaluate the technique on several synthetic and real-world networks and show that it delivers competitive performance in link prediction tasks.
arXiv Detail & Related papers (2023-03-09T03:20:44Z) - Mechanism of feature learning in deep fully connected networks and kernel machines that recursively learn features [15.29093374895364]
We identify and characterize the mechanism through which deep fully connected neural networks learn gradient features.
Our ansatz sheds light on various deep learning phenomena including emergence of spurious features and simplicity biases.
To demonstrate the effectiveness of this feature learning mechanism, we use it to enable feature learning in classical, non-feature learning models.
arXiv Detail & Related papers (2022-12-28T15:50:58Z) - Quasi-orthogonality and intrinsic dimensions as measures of learning and generalisation [55.80128181112308]
We show that dimensionality and quasi-orthogonality of neural networks' feature space may jointly serve as network's performance discriminants.
Our findings suggest important relationships between the networks' final performance and properties of their randomly initialised feature spaces.
arXiv Detail & Related papers (2022-03-30T21:47:32Z) - Interpretable part-whole hierarchies and conceptual-semantic relationships in neural networks [4.153804257347222]
We present Agglomerator, a framework capable of providing a representation of part-whole hierarchies from visual cues.
We evaluate our method on common datasets, such as SmallNORB, MNIST, FashionMNIST, CIFAR-10, and CIFAR-100.
arXiv Detail & Related papers (2022-03-07T10:56:13Z) - Properties from Mechanisms: An Equivariance Perspective on Identifiable Representation Learning [79.4957965474334]
A key goal of unsupervised representation learning is "inverting" a data generating process to recover its latent properties.
This paper asks, "Can we instead identify latent properties by leveraging knowledge of the mechanisms that govern their evolution?"
We provide a complete characterization of the sources of non-identifiability as we vary knowledge about a set of possible mechanisms.
arXiv Detail & Related papers (2021-10-29T14:04:08Z) - Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z) - Vulnerability Under Adversarial Machine Learning: Bias or Variance? [77.30759061082085]
We investigate the effect of adversarial machine learning on the bias and variance of a trained deep neural network.
Our analysis sheds light on why the deep neural networks have poor performance under adversarial perturbation.
We introduce a new adversarial machine learning algorithm with lower computational complexity than well-known adversarial machine learning strategies.
arXiv Detail & Related papers (2020-08-01T00:58:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.