Understanding the Limitations of Network Online Learning
- URL: http://arxiv.org/abs/2001.07607v1
- Date: Thu, 9 Jan 2020 13:59:20 GMT
- Title: Understanding the Limitations of Network Online Learning
- Authors: Timothy LaRock, Timothy Sakharov, Sahely Bhadra, Tina Eliassi-Rad
- Abstract summary: We investigate limitations of learning to complete partially observed networks via node querying.
We call this querying process Network Online Learning and present a family of algorithms called NOL*.
- Score: 5.925292989496618
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Studies of networked phenomena, such as interactions in online social media,
often rely on incomplete data, either because these phenomena are partially
observed, or because the data is too large or expensive to acquire all at once.
Analysis of incomplete data leads to skewed or misleading results. In this
paper, we investigate limitations of learning to complete partially observed
networks via node querying. Concretely, we study the following problem: given
(i) a partially observed network, (ii) the ability to query nodes for their
connections (e.g., by accessing an API), and (iii) a budget on the number of
such queries, sequentially learn which nodes to query in order to maximally
increase observability. We call this querying process Network Online Learning
and present a family of algorithms called NOL*. These algorithms learn to
choose which partially observed node to query next based on a parameterized
model that is trained online through a process of exploration and exploitation.
Extensive experiments on both synthetic and real world networks show that (i)
it is possible to sequentially learn to choose which nodes are best to query in
a network and (ii) some macroscopic properties of networks, such as the degree
distribution and modular structure, impact the potential for learning and the
optimal amount of random exploration.
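The querying process the abstract describes can be sketched concretely. The following is an illustrative epsilon-greedy loop, not the authors' NOL* implementation: the single degree feature, the learning rate, and the reward definition (newly revealed edge endpoints) are all assumptions made for the example.

```python
import random

def query_loop(observed, query_api, budget, epsilon=0.2, lr=0.01):
    """Illustrative sketch of online learning to query a partially
    observed network: pick a node (explore with probability epsilon,
    otherwise exploit a linear value model), query it for its
    connections, and update the model toward the observed gain."""
    weights = {"deg": 0.0}  # one hypothetical feature: observed degree

    def features(node):
        return {"deg": float(len(observed[node]))}

    def value(feats):
        return sum(weights[k] * v for k, v in feats.items())

    queried = set()
    for _ in range(budget):
        candidates = [n for n in observed if n not in queried]
        if not candidates:
            break
        if random.random() < epsilon:                    # explore
            node = random.choice(candidates)
        else:                                            # exploit
            node = max(candidates, key=lambda n: value(features(n)))
        feats = features(node)
        size_before = sum(len(nbrs) for nbrs in observed.values())
        for nbr in query_api(node):                      # reveal edges
            observed.setdefault(nbr, set()).add(node)
            observed[node].add(nbr)
        queried.add(node)
        reward = sum(len(nbrs) for nbrs in observed.values()) - size_before
        err = reward - value(feats)                      # online update
        for k, v in feats.items():
            weights[k] += lr * err * v
    return observed
```

With a large enough budget the loop eventually queries every observed node; the interesting regime studied in the paper is a budget far smaller than the network.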
Related papers
- Unsupervised Learning via Network-Aware Embeddings [0.0]
We show how to create network-aware embeddings by estimating the network distance between numeric node attributes.
Our method is fully open source and data and code are available to reproduce all results in the paper.
arXiv Detail & Related papers (2023-09-19T08:17:48Z)
- Gossiped and Quantized Online Multi-Kernel Learning [39.057968279167966]
We show that distributed and online multi-kernel learning provides sub-linear regret as long as every pair of nodes in the network can communicate.
This letter expands on these results to non-fully connected graphs, which is often the case in wireless sensor networks.
We propose a gossip algorithm and provide a proof that it achieves sub-linear regret.
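As a hedged illustration of the consensus primitive such algorithms build on (not the letter's multi-kernel method itself), one synchronous gossip-averaging round over an arbitrary graph looks like:

```python
def gossip_round(values, neighbors, step=0.5):
    """One synchronous gossip round: every node moves its local
    estimate toward the average of its neighbours' estimates.
    Repeated rounds drive a connected graph toward consensus,
    which is what lets learning proceed without full connectivity."""
    new_values = {}
    for node, x in values.items():
        nbrs = neighbors[node]
        if nbrs:
            nbr_avg = sum(values[v] for v in nbrs) / len(nbrs)
            new_values[node] = x + step * (nbr_avg - x)
        else:
            new_values[node] = x  # isolated node keeps its estimate
    return new_values
```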
arXiv Detail & Related papers (2023-01-24T07:12:40Z)
- Dive into Layers: Neural Network Capacity Bounding using Algebraic Geometry [55.57953219617467]
We show that the learnability of a neural network is directly related to its size.
We use Betti numbers to measure the topological geometric complexity of input data and the neural network.
We perform the experiments on a real-world dataset MNIST and the results verify our analysis and conclusion.
arXiv Detail & Related papers (2021-09-03T11:45:51Z)
- A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to address the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- Network Support for High-performance Distributed Machine Learning [17.919773898228716]
We propose a system model that captures both learning nodes (that perform computations) and information nodes (that provide data).
We then formulate the problem of selecting (i) which learning and information nodes should cooperate to complete the learning task, and (ii) the number of iterations to perform.
We devise an algorithm, named DoubleClimb, that can find a (1+1/|I|)-competitive solution with cubic worst-case complexity.
arXiv Detail & Related papers (2021-02-05T19:38:57Z)
- Node2Seq: Towards Trainable Convolutions in Graph Neural Networks [59.378148590027735]
We propose a graph network layer, known as Node2Seq, to learn node embeddings with explicitly trainable weights for different neighboring nodes.
For a target node, our method sorts its neighboring nodes via attention mechanism and then employs 1D convolutional neural networks (CNNs) to enable explicit weights for information aggregation.
In addition, we propose to incorporate non-local information for feature learning in an adaptive manner based on the attention scores.
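A minimal sketch of the sort-then-convolve idea follows; the dot-product scoring, the shared kernel, and the window summation are simplifying assumptions for illustration, not the Node2Seq architecture itself:

```python
def node2seq_like_aggregate(neighbor_feats, attn_vec, kernel):
    """Score each neighbour with a dot-product attention vector,
    sort the neighbours by score, run a 1-D convolution over the
    ordered sequence, and sum the windows into a single vector."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    # order neighbours from highest to lowest attention score
    ordered = sorted(neighbor_feats, key=lambda f: -dot(f, attn_vec))
    k, d = len(kernel), len(attn_vec)
    out = [0.0] * d
    for i in range(len(ordered) - k + 1):   # valid convolution windows
        for j in range(k):                  # explicit per-position weight
            for c in range(d):
                out[c] += kernel[j] * ordered[i + j][c]
    return out
```

Sorting first is what gives each kernel position an explicit, trainable weight for a specific rank of neighbour.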
arXiv Detail & Related papers (2021-01-06T03:05:37Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and owns adaptability to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- Graph Prototypical Networks for Few-shot Learning on Attributed Networks [72.31180045017835]
We propose a graph meta-learning framework called Graph Prototypical Networks (GPN).
GPN is able to perform meta-learning on an attributed network and derive a highly generalizable model for handling the target classification task.
arXiv Detail & Related papers (2020-06-23T04:13:23Z)
- Temporal Network Representation Learning via Historical Neighborhoods Aggregation [28.397309507168128]
We propose the Embedding via Historical Neighborhoods Aggregation (EHNA) algorithm.
We first propose a temporal random walk that can identify relevant nodes in historical neighborhoods.
Then we apply a deep learning model which uses a custom attention mechanism to induce node embeddings.
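A temporal random walk of the kind the summary mentions can be sketched as follows; the backward-in-time rule and uniform choice among past interactions are assumptions for illustration, not EHNA's exact walk definition:

```python
import random

def temporal_random_walk(target, edges, length, start_time):
    """Walk backward through time from a target node: at each step,
    move to a neighbour whose interaction timestamp precedes the
    current time, so the walk stays inside historical neighbourhoods.
    edges: dict node -> list of (neighbour, timestamp) pairs."""
    walk, node, t = [target], target, start_time
    for _ in range(length):
        past = [(v, ts) for v, ts in edges.get(node, []) if ts < t]
        if not past:
            break  # no earlier interactions to follow
        node, t = random.choice(past)
        walk.append(node)
    return walk
```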
arXiv Detail & Related papers (2020-03-30T04:18:48Z)
- Inference for Network Structure and Dynamics from Time Series Data via Graph Neural Network [21.047133113979083]
We propose a novel data-driven deep learning model called Gumbel Graph Network (GGN) to solve the two kinds of network inference problems: Network Reconstruction and Network Completion.
Our method can reconstruct up to 100% of the network structure on the network reconstruction task, and it can also infer the unknown parts of the structure with up to 90% accuracy when some nodes are missing.
arXiv Detail & Related papers (2020-01-18T02:05:54Z)
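Differentiable learning of discrete structure, as in the Gumbel Graph Network above, typically rests on a Gumbel-softmax relaxation. The sketch below shows that relaxation for a single edge indicator; the two-logit setup and the temperature value are assumptions for illustration, not GGN's exact formulation.

```python
import math
import random

def gumbel_softmax_edge(logit_on, logit_off, tau=1.0):
    """Soft edge indicator via the Gumbel-softmax relaxation:
    perturb each logit with Gumbel(0, 1) noise, then take a
    temperature-scaled two-way softmax over on/off."""
    def gumbel_noise():
        u = random.random()
        while u == 0.0:          # guard the log against u == 0
            u = random.random()
        return -math.log(-math.log(u))

    a = (logit_on + gumbel_noise()) / tau
    b = (logit_off + gumbel_noise()) / tau
    m = max(a, b)                # stabilise the softmax
    exp_a, exp_b = math.exp(a - m), math.exp(b - m)
    return exp_a / (exp_a + exp_b)  # soft probability the edge is "on"
```

Lowering the temperature tau pushes samples toward hard 0/1 decisions while keeping the expression differentiable in the logits.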
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.