Semi-supervised Network Embedding with Differentiable Deep Quantisation
- URL: http://arxiv.org/abs/2108.09128v1
- Date: Fri, 20 Aug 2021 11:53:05 GMT
- Title: Semi-supervised Network Embedding with Differentiable Deep Quantisation
- Authors: Tao He, Lianli Gao, Jingkuan Song, Yuan-Fang Li
- Abstract summary: We develop d-SNEQ, a differentiable quantisation method for network embedding.
d-SNEQ incorporates a rank loss to equip the learned quantisation codes with rich high-order information.
It is able to substantially compress the size of trained embeddings, thus reducing storage footprint and accelerating retrieval speed.
- Score: 81.49184987430333
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning accurate low-dimensional embeddings for a network is a crucial task
as it facilitates many downstream network analytics tasks. For large networks,
the trained embeddings often require a significant amount of space to store,
making storage and processing a challenge. Building on our previous work on
semi-supervised network embedding, we develop d-SNEQ, a differentiable
DNN-based quantisation method for network embedding. d-SNEQ incorporates a rank
loss to equip the learned quantisation codes with rich high-order information
and is able to substantially compress the size of trained embeddings, thus
reducing storage footprint and accelerating retrieval speed. We also propose a
new evaluation metric, path prediction, to fairly and more directly evaluate
model performance on the preservation of high-order information. Our evaluation
on four real-world networks of diverse characteristics shows that d-SNEQ
outperforms a number of state-of-the-art embedding methods in link prediction,
path prediction, node classification, and node recommendation while being far
more space- and time-efficient.
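The abstract does not give implementation details, but the core idea of differentiable quantisation is commonly realised with a codebook and a straight-through estimator. The following is a minimal illustrative sketch under that assumption; the function names, shapes, and the nearest-codeword scheme are hypothetical and not taken from the d-SNEQ paper (in particular, the rank loss is omitted).

```python
import numpy as np

# Hypothetical sketch: hard codebook assignment made trainable via a
# straight-through estimator. Names and shapes are illustrative only.

def quantise(embeddings, codebook):
    """Map each embedding to its nearest codeword (hard assignment)."""
    # embeddings: (n, d), codebook: (k, d) -> pairwise squared distances (n, k)
    dists = ((embeddings[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    codes = dists.argmin(axis=1)       # (n,) integer codes to store
    return codes, codebook[codes]      # compact ids + reconstructed vectors

def straight_through(embeddings, quantised):
    """Forward pass outputs the quantised values; in an autodiff framework
    the backward pass would copy gradients to the continuous embeddings:
    embeddings + stop_gradient(quantised - embeddings)."""
    return embeddings + (quantised - embeddings)  # numerically == quantised

rng = np.random.default_rng(0)
emb = rng.normal(size=(6, 4))    # 6 node embeddings of dimension 4
cb = rng.normal(size=(3, 4))     # 3 codewords
codes, q = quantise(emb, cb)
out = straight_through(emb, q)
```

The storage saving comes from keeping only the integer codes: a k-way codebook needs ceil(log2 k) bits per node instead of d floating-point values.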
Related papers
- Towards Scalable and Versatile Weight Space Learning [51.78426981947659]
This paper introduces the SANE approach to weight-space learning.
Our method extends the idea of hyper-representations towards sequential processing of subsets of neural network weights.
arXiv Detail & Related papers (2024-06-14T13:12:07Z)
- Opening the Black Box: predicting the trainability of deep neural networks with reconstruction entropy [0.0]
We present a method for predicting the trainable regime in parameter space for deep feedforward neural networks.
For both the MNIST and CIFAR10 datasets, we show that a single epoch of training is sufficient to predict the trainability of the deep feedforward network.
arXiv Detail & Related papers (2024-06-13T18:00:05Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
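The abstract names interval bound propagation but does not spell it out. Below is a minimal sketch of the standard IBP step through one linear layer with quantized weights; the uniform-rounding quantiser and all names are illustrative assumptions, not QA-IBP's actual scheme.

```python
import numpy as np

# Illustrative sketch of interval bound propagation (IBP) through a
# quantized linear layer. The quantiser here (uniform rounding to a
# fixed grid) is an assumption for demonstration purposes.

def quantize_weights(w, scale=0.25):
    """Uniform quantisation: snap weights to multiples of `scale`."""
    return np.round(w / scale) * scale

def ibp_linear(lo, hi, w, b):
    """Propagate an input box [lo, hi] through y = x @ w + b."""
    mid, rad = (hi + lo) / 2, (hi - lo) / 2
    mid_out = mid @ w + b
    rad_out = rad @ np.abs(w)  # output radius grows with |w|
    return mid_out - rad_out, mid_out + rad_out

w = quantize_weights(np.array([[1.1, -0.6], [0.4, 0.9]]))
b = np.zeros(2)
lo, hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])
out_lo, out_hi = ibp_linear(lo, hi, w, b)
# Every output reachable from an input in the box provably lies in
# [out_lo, out_hi]; certification checks such bounds layer by layer.
```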
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- Network Embedding via Deep Prediction Model [25.727377978617465]
This paper proposes a network embedding framework to capture the transfer behaviors on structured networks via deep prediction models.
A network structure embedding layer is added into conventional deep prediction models, including Long Short-Term Memory Network and Recurrent Neural Network.
Experimental studies are conducted on various datasets, including social networks, citation networks, a biomedical network, a collaboration network, and a language network.
arXiv Detail & Related papers (2021-04-27T16:56:00Z)
- Towards Accurate Quantization and Pruning via Data-free Knowledge Transfer [61.85316480370141]
We study data-free quantization and pruning by transferring knowledge from trained large networks to compact networks.
Our data-free compact networks achieve competitive accuracy to networks trained and fine-tuned with training data.
arXiv Detail & Related papers (2020-10-14T18:02:55Z)
- MS-RANAS: Multi-Scale Resource-Aware Neural Architecture Search [94.80212602202518]
We propose Multi-Scale Resource-Aware Neural Architecture Search (MS-RANAS).
We employ a one-shot architecture search approach in order to obtain a reduced search cost.
We achieve state-of-the-art results in terms of accuracy-speed trade-off.
arXiv Detail & Related papers (2020-09-29T11:56:01Z)
- Compact Neural Representation Using Attentive Network Pruning [1.0152838128195465]
We describe a Top-Down attention mechanism that is added to a Bottom-Up feedforward network to select important connections and subsequently prune redundant ones at all parametric layers.
Our method not only introduces a novel hierarchical selection mechanism as the basis of pruning but also remains competitive with previous baseline methods in the experimental evaluation.
arXiv Detail & Related papers (2020-05-10T03:20:01Z)
- Depth Enables Long-Term Memory for Recurrent Neural Networks [0.0]
We introduce a measure of the network's ability to support information flow across time, referred to as the Start-End separation rank.
We prove that deep recurrent networks support Start-End separation ranks which are higher than those supported by their shallow counterparts.
arXiv Detail & Related papers (2020-03-23T10:29:14Z)
- EPINE: Enhanced Proximity Information Network Embedding [2.257737378757467]
In this work, we focus on mining valuable information in adjacency matrices at a deeper level.
Under the same objective, many NE methods calculate high-order proximity by the powers of adjacency matrices.
We propose to redefine high-order proximity in a more intuitive manner.
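The adjacency-power formulation the abstract refers to can be sketched concretely: entry (i, j) of the k-th power of the adjacency matrix counts walks of length k between nodes i and j. The example below is a generic illustration of that common definition, not EPINE's redefined proximity; the decaying weights are an assumption.

```python
import numpy as np

# Illustrative sketch of high-order proximity via adjacency-matrix
# powers: (A^k)[i, j] counts walks of length k from node i to node j.
# The per-order decay weights below are an arbitrary choice.

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])  # a simple path graph 0-1-2-3

def high_order_proximity(A, k, weights=None):
    """Weighted sum of walk counts up to length k."""
    weights = weights or [1.0 / (i + 1) for i in range(k)]  # decay with order
    S, Ak = np.zeros_like(A, dtype=float), np.eye(len(A))
    for w in weights:
        Ak = Ak @ A          # next power: walks one step longer
        S += w * Ak
    return S

S = high_order_proximity(A, k=3)
# Nodes 0 and 3 have zero 1st- and 2nd-order proximity but nonzero
# 3rd-order proximity: the shortest walk between them has length 3.
```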
arXiv Detail & Related papers (2020-03-04T15:57:17Z)
- Large-Scale Gradient-Free Deep Learning with Recursive Local Representation Alignment [84.57874289554839]
Training deep neural networks on large-scale datasets requires significant hardware resources.
Backpropagation, the workhorse for training these networks, is an inherently sequential process that is difficult to parallelize.
We propose a neuro-biologically-plausible alternative to backprop that can be used to train deep networks.
arXiv Detail & Related papers (2020-02-10T16:20:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.