Auto-Encoding Twin-Bottleneck Hashing
- URL: http://arxiv.org/abs/2002.11930v2
- Date: Mon, 16 Mar 2020 09:14:58 GMT
- Title: Auto-Encoding Twin-Bottleneck Hashing
- Authors: Yuming Shen, Jie Qin, Jiaxin Chen, Mengyang Yu, Li Liu, Fan Zhu, Fumin Shen, Ling Shao
- Abstract summary: This paper proposes an efficient and adaptive code-driven graph.
It is updated by decoding in the context of an auto-encoder.
Experiments on benchmarked datasets clearly show the superiority of our framework over the state-of-the-art hashing methods.
- Score: 141.5378966676885
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conventional unsupervised hashing methods usually take advantage of
similarity graphs, which are either pre-computed in the high-dimensional space
or obtained from random anchor points. On the one hand, existing methods
uncouple the procedures of hash function learning and graph construction. On
the other hand, graphs empirically built upon original data could introduce
biased prior knowledge of data relevance, leading to sub-optimal retrieval
performance. In this paper, we tackle the above problems by proposing an
efficient and adaptive code-driven graph, which is updated by decoding in the
context of an auto-encoder. Specifically, we introduce into our framework twin
bottlenecks (i.e., latent variables) that exchange crucial information
collaboratively. One bottleneck (i.e., binary codes) conveys the high-level
intrinsic data structure captured by the code-driven graph to the other (i.e.,
continuous variables for low-level detail information), which in turn
propagates the updated network feedback for the encoder to learn more
discriminative binary codes. The auto-encoding learning objective literally
rewards the code-driven graph to learn an optimal encoder. Moreover, the
proposed model can be simply optimized by gradient descent without violating
the binary constraints. Experiments on benchmarked datasets clearly show the
superiority of our framework over the state-of-the-art hashing methods. Our
source code can be found at https://github.com/ymcidence/TBH.
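Below is a minimal sketch of the twin-bottleneck idea in PyTorch. It is a hypothetical re-implementation for illustration only (the official code at https://github.com/ymcidence/TBH is in TensorFlow and includes components omitted here): sign binarization with a straight-through gradient keeps the optimization plain gradient descent without violating the binary constraints, an in-batch code-driven graph is built from Hamming affinities, and the continuous bottleneck is propagated over that graph before decoding.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinarySTE(torch.autograd.Function):
    """Binarize in the forward pass; pass gradients straight through,
    so gradient descent never violates the binary constraint."""
    @staticmethod
    def forward(ctx, logits):
        return (logits > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output

class TwinBottleneckAE(nn.Module):
    def __init__(self, feat_dim=2048, code_len=32, cont_dim=512):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, 1024), nn.ReLU())
        self.to_binary = nn.Linear(1024, code_len)  # binary bottleneck
        self.to_cont = nn.Linear(1024, cont_dim)    # continuous bottleneck
        self.decoder = nn.Sequential(
            nn.Linear(cont_dim, 1024), nn.ReLU(), nn.Linear(1024, feat_dim))

    def forward(self, x):
        h = self.encoder(x)
        b = BinarySTE.apply(self.to_binary(h))      # codes in {0, 1}
        z = self.to_cont(h)
        # Code-driven graph: in-batch adjacency from Hamming affinity.
        hamming = (b.unsqueeze(1) - b.unsqueeze(0)).abs().sum(-1)
        adj = 1.0 - hamming / b.size(1)             # affinity in [0, 1]
        deg = adj.sum(1, keepdim=True).clamp(min=1e-6)
        z_prop = (adj @ z) / deg                    # GCN-style propagation
        return self.decoder(z_prop), b

model = TwinBottleneckAE()
x = torch.randn(8, 2048)                 # a batch of image features
x_hat, codes = model(x)
loss = F.mse_loss(x_hat, x)              # decoding feedback
loss.backward()                          # updates reach both bottlenecks
```

Because reconstruction must pass through the graph that the binary codes define, lowering the loss pushes the encoder toward codes whose Hamming neighborhoods reflect true data relevance, which is the sense in which decoding "rewards" the code-driven graph.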
Related papers
- Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding [51.75091298017941]
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
arXiv Detail & Related papers (2024-01-12T17:57:07Z)
- Supervised Auto-Encoding Twin-Bottleneck Hashing [5.653113092257149]
Auto-Encoding Twin-Bottleneck Hashing is a graph-based method that builds the similarity graph dynamically.
In this work, we generalize the original model into a supervised deep hashing network by incorporating the label information.
arXiv Detail & Related papers (2023-06-19T18:50:02Z)
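As a hedged illustration of how label information could enter the twin-bottleneck model (hypothetical; the paper's exact loss is not reproduced), ground-truth pairwise label agreement can supervise the code-driven affinity:

```python
import torch
import torch.nn.functional as F

def supervised_graph_loss(codes, labels):
    """Pull the Hamming affinity of code pairs toward label agreement."""
    hamming = (codes.unsqueeze(1) - codes.unsqueeze(0)).abs().sum(-1)
    affinity = 1.0 - hamming / codes.size(1)   # predicted pairwise similarity
    target = (labels.unsqueeze(1) == labels.unsqueeze(0)).float()
    return F.mse_loss(affinity, target)

loss = supervised_graph_loss(torch.rand(8, 32).round(), torch.randint(0, 5, (8,)))
# Assumed usage with the earlier sketch:
#   x_hat, codes = model(x)
#   loss = F.mse_loss(x_hat, x) + supervised_graph_loss(codes, y)
```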
- Graph-Collaborated Auto-Encoder Hashing for Multi-view Binary Clustering [11.082316688429641]
We propose a hashing algorithm based on auto-encoders for multi-view binary clustering.
Specifically, we propose a multi-view affinity graphs learning model with low-rank constraint, which can mine the underlying geometric information from multi-view data.
We also design an encoder-decoder paradigm that fuses the multiple affinity graphs collaboratively, which can learn a unified binary code effectively.
arXiv Detail & Related papers (2023-01-06T12:43:13Z)
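A rough sketch of the two ingredients named above, under stated assumptions (all names illustrative; the paper's exact objective is not reproduced): per-view affinity matrices are treated as learnable, the nuclear norm stands in as the usual convex surrogate for the low-rank constraint, and the encoder-decoder fusion is approximated here by simple averaging.

```python
import torch

n, n_views = 64, 3
# One learnable affinity matrix per view (illustrative initialization).
affinities = [torch.randn(n, n, requires_grad=True) for _ in range(n_views)]

def low_rank_penalty(S):
    # Nuclear norm: convex surrogate for a low-rank constraint on S.
    return torch.linalg.matrix_norm(S, ord='nuc')

def fused_affinity(graphs):
    # Stand-in for the encoder-decoder collaboration: row-normalize each
    # view's graph and average them into one consensus affinity.
    return torch.stack([S.softmax(dim=1) for S in graphs]).mean(0)

penalty = sum(low_rank_penalty(S) for S in affinities)
S_fused = fused_affinity(affinities)  # would drive a unified binary code
```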
- GraphLearner: Graph Node Clustering with Fully Learnable Augmentation [76.63963385662426]
Contrastive deep graph clustering (CDGC) leverages the power of contrastive learning to group nodes into different clusters.
We propose a graph node clustering method with fully learnable augmentation, termed GraphLearner.
It introduces learnable augmentors to generate high-quality and task-specific augmented samples for CDGC.
arXiv Detail & Related papers (2022-12-07T10:19:39Z)
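One way such a learnable augmentor could look, as a hedged sketch (the actual GraphLearner architecture is not reproduced; names are illustrative): a learned soft feature mask replaces a hand-crafted random perturbation, so the augmentation itself receives gradients from the contrastive objective.

```python
import torch
import torch.nn as nn

class LearnableAugmentor(nn.Module):
    """Learned soft mask over node features; trained end to end."""
    def __init__(self, feat_dim):
        super().__init__()
        self.mask_logits = nn.Parameter(torch.zeros(feat_dim))

    def forward(self, x):
        return x * torch.sigmoid(self.mask_logits)  # per-feature soft mask

aug = LearnableAugmentor(64)
view = aug(torch.randn(32, 64))  # task-specific augmented view for contrast
```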
- Enhancing Semantic Code Search with Multimodal Contrastive Learning and Soft Data Augmentation [50.14232079160476]
We propose a new approach with multimodal contrastive learning and soft data augmentation for code search.
We conduct extensive experiments to evaluate the effectiveness of our approach on a large-scale dataset with six programming languages.
arXiv Detail & Related papers (2022-04-07T08:49:27Z)
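The contrastive part of such an approach typically reduces to an in-batch InfoNCE objective between code and query embeddings; the sketch below is illustrative (encoders and the soft data augmentation are omitted, and all names are assumptions).

```python
import torch
import torch.nn.functional as F

def info_nce(code_emb, query_emb, temperature=0.07):
    """In-batch contrastive loss: matched (code, query) pairs attract,
    all other in-batch pairs act as negatives."""
    code_emb = F.normalize(code_emb, dim=1)
    query_emb = F.normalize(query_emb, dim=1)
    logits = query_emb @ code_emb.t() / temperature  # (batch, batch)
    targets = torch.arange(logits.size(0))           # diagonal = positives
    return F.cross_entropy(logits, targets)

loss = info_nce(torch.randn(16, 256), torch.randn(16, 256))
```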
- Rate Coding or Direct Coding: Which One is Better for Accurate, Robust, and Energy-efficient Spiking Neural Networks? [4.872468969809081]
Most work on Spiking Neural Networks (SNNs) focuses on image classification, so various coding techniques have been proposed to convert an image into temporal binary spikes.
Among them, rate coding and direct coding are regarded as prospective candidates for building a practical SNN system.
We conduct a comprehensive analysis of the two codings from three perspectives: accuracy, adversarial robustness, and energy-efficiency.
arXiv Detail & Related papers (2022-01-31T16:18:07Z)
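The two codings compared above can be sketched in a few lines (illustrative only; real SNN input pipelines differ in detail): rate coding draws Bernoulli spikes with probability given by pixel intensity, while direct coding feeds the analog intensities to the first layer at every timestep and lets spikes arise inside the network.

```python
import torch

def rate_coding(image, timesteps=10):
    # image in [0, 1]; one {0, 1} spike tensor per timestep
    return torch.stack([torch.bernoulli(image) for _ in range(timesteps)])

def direct_coding(image, timesteps=10):
    # repeat the raw intensities; no stochastic spike conversion
    return image.unsqueeze(0).repeat(timesteps, *([1] * image.dim()))

img = torch.rand(1, 28, 28)
spikes = rate_coding(img)    # (10, 1, 28, 28), binary, stochastic
analog = direct_coding(img)  # (10, 1, 28, 28), real-valued, deterministic
```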
- Shuffle and Learn: Minimizing Mutual Information for Unsupervised Hashing [4.518427368603235]
Unsupervised binary representation allows fast data retrieval without any annotations.
Conflicts in binary space are one of the major barriers to high-performance unsupervised hashing.
A new relaxation method, called Shuffle and Learn, is proposed to tackle code conflicts in unsupervised hashing.
arXiv Detail & Related papers (2020-11-20T07:14:55Z)
- Self-Supervised Bernoulli Autoencoders for Semi-Supervised Hashing [1.8899300124593648]
This paper investigates the robustness of hashing methods based on variational autoencoders to the lack of supervision.
We propose a novel supervision method in which the model uses its label distribution predictions to implement the pairwise objective.
Our experiments show that both methods can significantly increase the hash codes' quality.
arXiv Detail & Related papers (2020-07-17T07:47:10Z)
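A hedged sketch of that supervision idea (illustrative names; the paper's exact objective may differ): the model's own predicted label distributions define a soft pairwise similarity target for relaxed codes.

```python
import torch
import torch.nn.functional as F

def pairwise_from_predictions(code_logits, class_logits):
    """Use predicted label distributions as a soft pairwise target."""
    p = class_logits.softmax(dim=1)
    sim_target = p @ p.t()                        # predicted pair agreement
    codes = code_logits.tanh()                    # relaxed codes in (-1, 1)
    sim_pred = (codes @ codes.t()) / codes.size(1)
    return F.mse_loss((sim_pred + 1) / 2, sim_target)

loss = pairwise_from_predictions(torch.randn(8, 16), torch.randn(8, 10))
```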
- Heuristic Semi-Supervised Learning for Graph Generation Inspired by Electoral College [80.67842220664231]
We propose a novel pre-processing technique, namely ELectoral COllege (ELCO), which automatically expands new nodes and edges to refine the label similarity within a dense subgraph.
In all setups tested, our method boosts the average score of base models by a large margin of 4.7 points, as well as consistently outperforms the state-of-the-art.
arXiv Detail & Related papers (2020-06-10T14:48:48Z)
- Learning to Hash with Graph Neural Networks for Recommender Systems [103.82479899868191]
Graph representation learning has attracted much attention in supporting high quality candidate search at scale.
Despite its effectiveness in learning embedding vectors for objects in the user-item interaction network, the computational costs to infer users' preferences in continuous embedding space are tremendous.
We propose a simple yet effective discrete representation learning framework to jointly learn continuous and discrete codes.
arXiv Detail & Related papers (2020-03-04T06:59:56Z)
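A minimal sketch of jointly learning continuous and discrete codes (illustrative; the paper's GNN encoder over the user-item graph is omitted), reusing the straight-through trick from the first sketch.

```python
import torch
import torch.nn as nn

class JointCodeHead(nn.Module):
    """One projection yields a continuous code for fine ranking and a
    straight-through-binarized code for fast candidate search."""
    def __init__(self, in_dim=128, code_len=32):
        super().__init__()
        self.proj = nn.Linear(in_dim, code_len)

    def forward(self, h):
        cont = self.proj(h)                               # continuous code
        disc = (cont > 0).float() + cont - cont.detach()  # STE binarization
        return cont, disc

cont, disc = JointCodeHead()(torch.randn(4, 128))
```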