Higher-order accurate two-sample network inference and network hashing
- URL: http://arxiv.org/abs/2208.07573v3
- Date: Fri, 2 Feb 2024 15:04:36 GMT
- Title: Higher-order accurate two-sample network inference and network hashing
- Authors: Meijia Shao, Dong Xia, Yuan Zhang, Qiong Wu and Shuo Chen
- Abstract summary: Two-sample hypothesis testing for network comparison presents many significant challenges.
We develop a comprehensive toolbox featuring a novel main method and its variants.
Our method outperforms existing tools in speed and accuracy, and it is provably power-optimal.
- Score: 13.984114642035692
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Two-sample hypothesis testing for network comparison presents many
significant challenges, including: leveraging repeated network observations and
known node registration when they are available, without requiring either;
relaxing strong structural assumptions; achieving finite-sample higher-order
accuracy; handling different network sizes and sparsity levels; fast
computation and memory parsimony; controlling the false discovery rate (FDR)
in multiple testing; and theoretical understanding, particularly regarding
finite-sample accuracy and minimax optimality. In this paper, we develop a
comprehensive toolbox, featuring a novel main method and its variants, all
accompanied by strong theoretical guarantees, to address these challenges. Our
method outperforms existing tools in speed and accuracy, and it is provably
power-optimal. Our algorithms are user-friendly and versatile in handling
various data structures (single or repeated network observations; known or
unknown node registration). We also develop an innovative framework for
offline hashing and fast querying, a practical tool for large network
databases. We showcase the effectiveness of our method through comprehensive
simulations and applications to two real-world datasets, revealing intriguing
new structures.
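The offline-hashing idea in the abstract (compress each stored network to a short signature so that queries avoid touching raw adjacency matrices) can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the paper's actual statistic is not an eigenvalue signature, and `network_signature`, `query`, the Erdos-Renyi sampler, and the tolerance are all hypothetical.

```python
import numpy as np

def network_signature(adj, k=8):
    """Hypothetical network hash: the k largest-magnitude eigenvalues of a
    symmetric adjacency matrix, normalised by network size. A stand-in for
    the paper's hashing statistic, not the method itself."""
    eigvals = np.linalg.eigvalsh(adj)            # eigenvalues, ascending
    top = np.sort(np.abs(eigvals))[::-1][:k]     # k largest magnitudes
    return top / adj.shape[0]                    # size-normalised

def query(database, sig, tol=0.05):
    """Return names of stored networks whose signature lies within
    Euclidean distance `tol` of the query signature."""
    return [name for name, s in database.items()
            if np.linalg.norm(s - sig) < tol]

rng = np.random.default_rng(0)

def er_graph(n, p):
    """Sample a symmetric Erdos-Renyi adjacency matrix (no self-loops)."""
    a = (rng.random((n, n)) < p).astype(float)
    a = np.triu(a, 1)
    return a + a.T

# Offline phase: hash each stored network once.
db = {"dense": network_signature(er_graph(200, 0.3)),
      "sparse": network_signature(er_graph(200, 0.05))}

# Query phase: a fresh draw from the dense model matches "dense" only.
hits = query(db, network_signature(er_graph(200, 0.3)))
```

The point of the design is that signatures are computed once offline, so each query costs one eigendecomposition plus cheap distance comparisons, rather than a full two-network comparison per database entry.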
Related papers
- Sifting out communities in large sparse networks [2.666294200266662]
We introduce an intuitive objective function for quantifying the quality of clustering results in large sparse networks.
We utilize a two-step method for identifying communities which is especially well-suited for this domain.
We identify complex genetic interactions in large-scale networks comprised of tens of thousands of nodes.
arXiv Detail & Related papers (2024-05-01T18:57:41Z) - Network Alignment with Transferable Graph Autoencoders [79.89704126746204]
We propose a novel graph autoencoder architecture designed to extract powerful and robust node embeddings.
We prove that the generated embeddings are associated with the eigenvalues and eigenvectors of the graphs.
Our proposed framework also leverages transfer learning and data augmentation to achieve efficient network alignment at a very large scale without retraining.
arXiv Detail & Related papers (2023-10-05T02:58:29Z) - IQNAS: Interpretable Integer Quadratic Programming Neural Architecture
Search [40.77061519007659]
A popular approach to find fitting networks is through constrained Neural Architecture Search (NAS)
Previous methods use complicated predictors for the accuracy of the network.
We introduce Interpretable Quadratic programming Neural Architecture Search (IQNAS)
arXiv Detail & Related papers (2021-10-24T09:45:00Z) - Unsupervised Domain-adaptive Hash for Networks [81.49184987430333]
Domain-adaptive hash learning has enjoyed considerable success in the computer vision community.
We develop an unsupervised domain-adaptive hash learning method for networks, dubbed UDAH.
arXiv Detail & Related papers (2021-08-20T12:09:38Z) - Semi-supervised Network Embedding with Differentiable Deep Quantisation [81.49184987430333]
We develop d-SNEQ, a differentiable quantisation method for network embedding.
d-SNEQ incorporates a rank loss to equip the learned quantisation codes with rich high-order information.
It is able to substantially compress the size of trained embeddings, thus reducing storage footprint and accelerating retrieval speed.
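To make the compression-for-retrieval idea concrete, here is a minimal sign-quantisation sketch. This is not d-SNEQ (which learns its codes with a rank loss); `binarise` and the Hamming-distance lookup are generic stand-ins, shown only to illustrate how quantised codes shrink storage and speed up retrieval.

```python
import numpy as np

def binarise(emb):
    """One bit per dimension: 64 floats (512 bytes) become 64 bits.
    A crude stand-in for learned quantisation codes."""
    return (emb > 0).astype(np.uint8)

def hamming(a, b):
    """Number of differing bits between two binary codes."""
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(1)
emb = rng.standard_normal((5, 64))   # 5 nodes, 64-d float embeddings
codes = binarise(emb)                # compressed codebook

# A slightly perturbed copy of node 2's embedding should still
# retrieve node 2 as its nearest code.
query_code = binarise(emb[2] + 0.01 * rng.standard_normal(64))
nearest = min(range(5), key=lambda i: hamming(codes[i], query_code))
```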
arXiv Detail & Related papers (2021-08-20T11:53:05Z) - Enabling certification of verification-agnostic networks via
memory-efficient semidefinite programming [97.40955121478716]
We propose a first-order dual SDP algorithm that requires memory only linear in the total number of network activations.
We significantly improve L-inf verified robust accuracy from 1% to 88% and from 6% to 40%, respectively.
We also demonstrate tight verification of a quadratic stability specification for the decoder of a variational autoencoder.
arXiv Detail & Related papers (2020-10-22T12:32:29Z) - Characterization and Identification of Cloudified Mobile Network
Performance Bottlenecks [0.0]
This study is a first attempt to experimentally explore the range of performance bottlenecks that 5G mobile networks can experience.
In particular, we find that distributed analytics performs reasonably well both in terms of bottleneck identification accuracy and incurred computational and communication overhead.
arXiv Detail & Related papers (2020-07-22T14:46:51Z) - ESPN: Extremely Sparse Pruned Networks [50.436905934791035]
We show that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks.
Our algorithm represents a hybrid approach between single shot network pruning methods and Lottery-Ticket type approaches.
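As a rough illustration of the pruning family these methods belong to, the sketch below implements plain iterative magnitude pruning. It is not ESPN's mask-discovery algorithm; the linear sparsity schedule and the `iterative_magnitude_prune` helper are assumptions for illustration, and real pipelines retrain the surviving weights between steps.

```python
import numpy as np

def iterative_magnitude_prune(weights, target_sparsity=0.9, steps=3):
    """Repeatedly zero the smallest-magnitude surviving weights,
    ramping sparsity linearly until the target fraction is removed."""
    w = weights.copy()
    mask = np.ones_like(w, dtype=bool)
    for step in range(1, steps + 1):
        sparsity = target_sparsity * step / steps     # ramp up gradually
        keep = int(round(w.size * (1.0 - sparsity)))  # weights to survive
        alive = np.abs(np.where(mask, w, 0.0)).ravel()
        threshold = np.sort(alive)[::-1][keep - 1] if keep > 0 else np.inf
        mask = np.abs(w) >= threshold
        # (in a real pipeline, the surviving weights are retrained here)
    return np.where(mask, w, 0.0), mask

rng = np.random.default_rng(2)
w = rng.standard_normal((10, 10))
pruned, mask = iterative_magnitude_prune(w, target_sparsity=0.9)
```

With `target_sparsity=0.9`, exactly 10 of the 100 weights survive; the gradual schedule (rather than one-shot thresholding) is what typically lets deep networks tolerate extreme sparsity.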
arXiv Detail & Related papers (2020-06-28T23:09:27Z) - Fitting the Search Space of Weight-sharing NAS with Graph Convolutional
Networks [100.14670789581811]
We train a graph convolutional network to fit the performance of sampled sub-networks.
With this strategy, we achieve a higher rank correlation coefficient in the selected set of candidates.
arXiv Detail & Related papers (2020-04-17T19:12:39Z) - ReluDiff: Differential Verification of Deep Neural Networks [8.601847909798165]
We develop a new method for differential verification of two closely related networks.
We exploit structural and behavioral similarities of the two networks to more accurately bound the difference between the output neurons of the two networks.
Our experiments show that, compared to state-of-the-art verification tools, our method can achieve orders-of-magnitude speedup.
arXiv Detail & Related papers (2020-01-10T20:47:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.