Can LSH (Locality-Sensitive Hashing) Be Replaced by Neural Network?
- URL: http://arxiv.org/abs/2310.09806v1
- Date: Sun, 15 Oct 2023 11:41:54 GMT
- Title: Can LSH (Locality-Sensitive Hashing) Be Replaced by Neural Network?
- Authors: Renyang Liu, Jun Zhao, Xing Chu, Yu Liang, Wei Zhou, Jing He
- Abstract summary: Recent progress shows that neural networks can partly replace traditional data structures.
We propose a novel learned locality-sensitive hashing scheme, called LLSH, to map high-dimensional data to a low-dimensional space.
The proposed LLSH demonstrates the feasibility of replacing the hash index with learned neural networks.
- Score: 9.940726521176499
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the rapid development of GPU (Graphics Processing Unit) technologies and
neural networks, we can explore more appropriate data structures and
algorithms. Recent progress shows that neural networks can partly replace
traditional data structures. In this paper, we propose a novel DNN (Deep
Neural Network)-based learned locality-sensitive hashing, called LLSH, to
efficiently and flexibly map high-dimensional data to a low-dimensional space.
LLSH replaces traditional LSH (Locality-Sensitive Hashing) function families
with parallel multi-layer neural networks, reducing time and memory
consumption while simultaneously guaranteeing query accuracy. The proposed
LLSH demonstrates the feasibility of replacing the hash index with
learned neural networks and opens a new door for developers to design and
configure data organization more precisely, improving information-searching
performance. Extensive experiments on different types of datasets show the
superiority of the proposed method in query accuracy, time consumption, and
memory usage.
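To make the idea concrete, here is a minimal, hypothetical sketch of a learned hash in the spirit of LLSH: a small MLP (standing in for one member of the parallel network family) maps high-dimensional vectors to short binary codes used as bucket keys. The layer sizes, the sign-based binarization, and the omission of a training loop are all illustrative assumptions, not the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(0)

class LearnedHash:
    """Tiny MLP mapping d-dim vectors to k-bit codes (illustrative stand-in
    for one of LLSH's parallel multi-layer networks; sizes are assumptions,
    and the training step that makes it a *learned* hash is omitted)."""

    def __init__(self, d, hidden, k):
        self.W1 = rng.normal(0, 1 / np.sqrt(d), (d, hidden))
        self.W2 = rng.normal(0, 1 / np.sqrt(hidden), (hidden, k))

    def codes(self, X):
        h = np.maximum(X @ self.W1, 0.0)           # ReLU hidden layer
        return (h @ self.W2 > 0).astype(np.uint8)  # sign -> k-bit code

def build_index(X, hasher):
    """Group point ids by hash code (the low-dimensional bucket key)."""
    index = {}
    for i, code in enumerate(hasher.codes(X)):
        index.setdefault(code.tobytes(), []).append(i)
    return index

def query(q, X, hasher, index):
    """Candidate search: scan only the bucket the query hashes into."""
    cand = index.get(hasher.codes(q[None, :])[0].tobytes(), [])
    if not cand:
        return None
    d = np.linalg.norm(X[cand] - q, axis=1)
    return cand[int(np.argmin(d))]

X = rng.normal(size=(10_000, 128))      # high-dimensional data
hasher = LearnedHash(d=128, hidden=64, k=16)
index = build_index(X, hasher)
print(query(X[42] + 0.01 * rng.normal(size=128), X, hasher, index))
```

With untrained random weights this degenerates to classical random-projection LSH; LLSH's point is to train the networks so that genuinely near neighbors collide more reliably than random hyperplanes allow.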
Related papers
- NIDS Neural Networks Using Sliding Time Window Data Processing with Trainable Activations and its Generalization Capability [0.0]
This paper presents neural networks for network intrusion detection systems (NIDS) that operate on flow data preprocessed with a time window.
It requires only eleven features, which do not rely on deep packet inspection, can be found in most NIDS datasets, and are easily obtained from conventional flow collectors.
The reported training accuracy for the proposed method exceeds 99% with as few as twenty neural-network input features (see the sketch after this entry).
arXiv Detail & Related papers (2024-10-24T11:36:19Z)
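As a concrete flavour of the sliding-time-window preprocessing described above, here is a minimal sketch that aggregates flow records into fixed-size window features. The window length, stride, and mean/max aggregation are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def sliding_window_features(flows, t_window=10.0, t_stride=1.0):
    """Aggregate per-flow records into fixed-size windows for an NIDS model.

    `flows` is an (n, 1 + d) array: a timestamp column followed by d flow
    features (e.g. packet/byte counts, durations). Window length, stride,
    and the mean+max aggregation are assumptions for illustration.
    """
    t, X = flows[:, 0], flows[:, 1:]
    out = []
    start, t_end = t.min(), t.max()
    while start <= t_end:
        m = (t >= start) & (t < start + t_window)
        if m.any():
            out.append(np.concatenate([X[m].mean(0), X[m].max(0)]))
        start += t_stride
    return np.array(out)  # one row per window -> input to the classifier

rng = np.random.default_rng(1)
flows = np.column_stack([np.sort(rng.uniform(0, 60, 500)),
                         rng.normal(size=(500, 11))])  # 11 flow features
print(sliding_window_features(flows).shape)
```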
- Unveiling the Power of Sparse Neural Networks for Feature Selection [60.50319755984697]
Sparse Neural Networks (SNNs) have emerged as powerful tools for efficient feature selection.
We show that feature selection with SNNs trained with dynamic sparse training (DST) algorithms can achieve, on average, more than 50% memory and 55% FLOPs reduction (see the sketch after this entry).
arXiv Detail & Related papers (2024-08-08T16:48:33Z)
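A toy illustration of the idea above: train a network whose input layer is kept sparse by SET-style prune-and-regrow (a simple representative of dynamic sparse training), then rank input features by their surviving weight mass. The task, schedule, and hyperparameters are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: only the first 5 of 100 input features carry signal.
n, d, h = 2000, 100, 32
X = rng.normal(size=(n, d))
y = (X[:, :5].sum(1) > 0).astype(float)

# Sparse input layer with a dynamic mask (SET-style prune/regrow,
# a simple stand-in for the DST algorithms discussed in the paper).
density = 0.1
W = rng.normal(0, 0.1, (d, h))
mask = rng.random((d, h)) < density
v = rng.normal(0, 0.1, h)

for step in range(2000):
    Wm = W * mask
    z = np.maximum(X @ Wm, 0)                 # ReLU hidden layer
    p = 1 / (1 + np.exp(-(z @ v)))            # sigmoid output
    g = (p - y) / n                           # dCE/dlogit, averaged
    gz = np.outer(g, v) * (z > 0)             # backprop to hidden layer
    W -= 0.5 * (X.T @ gz) * mask              # masked SGD step
    v -= 0.5 * z.T @ g
    if step % 200 == 199:                     # periodic prune/regrow phase
        k = int(0.2 * mask.sum())             # drop 20% smallest live weights
        live = np.argwhere(mask)
        order = np.argsort(np.abs(W[mask]))   # both traverse in C order
        for i, j in live[order[:k]]:
            mask[i, j] = False
        dead = np.argwhere(~mask)
        for i, j in dead[rng.choice(len(dead), k, replace=False)]:
            mask[i, j] = True
            W[i, j] = 0.0                     # SET regrows connections at zero

# Feature importance = weight mass surviving in the sparse input layer.
score = np.abs(W * mask).sum(1)
print(np.argsort(score)[-5:])  # ideally recovers features 0..4
```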
- YFlows: Systematic Dataflow Exploration and Code Generation for Efficient Neural Network Inference using SIMD Architectures on CPUs [3.1445034800095413]
We address the challenges associated with deploying neural networks on CPUs.
Our novel approach is to use the dataflow of a neural network to explore data reuse opportunities.
Our results show that the dataflow that keeps outputs in SIMD registers consistently yields the best performance, as illustrated after this entry.
arXiv Detail & Related papers (2023-10-01T05:11:54Z)
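To illustrate the output-stationary dataflow highlighted above, here is a scalar Python model of the loop order: each output's partial sum lives in a single local accumulator until it is complete, the analogue of keeping outputs resident in a SIMD register. This only models the loop structure; the paper's gains come from generated, vectorised SIMD code on CPUs.

```python
import numpy as np

def conv1d_output_stationary(x, w):
    """1-D cross-correlation with an output-stationary loop order.

    The partial sum for each output element stays in one local accumulator
    (`acc`) until finished: inputs are reused across the inner loop, and
    each output is stored exactly once.
    """
    n, k = len(x), len(w)
    y = np.empty(n - k + 1)
    for i in range(n - k + 1):      # one output at a time
        acc = 0.0                   # "register-resident" partial sum
        for j in range(k):          # reuse inputs, never re-load y[i]
            acc += x[i + j] * w[j]
        y[i] = acc                  # single store per output
    return y

x, w = np.arange(8.0), np.array([1.0, -1.0, 2.0])
print(conv1d_output_stationary(x, w))
print(np.convolve(x, w[::-1], mode="valid"))  # reference check
```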
- Ghost-dil-NetVLAD: A Lightweight Neural Network for Visual Place Recognition [3.6249801498927923]
We propose a lightweight weakly supervised end-to-end neural network consisting of a front-end perception model called GhostCNN and a learnable VLAD layer as the back-end.
To further enhance the proposed lightweight model, we add dilated convolutions to the Ghost module to capture features with richer spatial semantic information, improving accuracy (a sketch of the block appears after this entry).
arXiv Detail & Related papers (2021-12-22T06:05:02Z)
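A minimal PyTorch sketch of the dilated-Ghost idea above: a cheap depthwise branch with dilation enlarges the receptive field of the ghost feature maps. Kernel sizes, the ghost ratio, and the dilation rate are assumptions; the paper's exact block may differ.

```python
import torch
import torch.nn as nn

class DilatedGhostModule(nn.Module):
    """Ghost-style block with a dilated cheap branch: an ordinary pointwise
    conv produces "intrinsic" maps, and a dilated depthwise conv generates
    the remaining "ghost" maps with more spatial context."""

    def __init__(self, c_in, c_out, ratio=2, dilation=2):
        super().__init__()
        c_primary = c_out // ratio
        c_cheap = c_out - c_primary
        self.primary = nn.Sequential(
            nn.Conv2d(c_in, c_primary, 1, bias=False),
            nn.BatchNorm2d(c_primary), nn.ReLU(inplace=True))
        # Cheap depthwise conv, dilated; padding keeps spatial size.
        self.cheap = nn.Sequential(
            nn.Conv2d(c_primary, c_cheap, 3, padding=dilation,
                      dilation=dilation, groups=c_primary, bias=False),
            nn.BatchNorm2d(c_cheap), nn.ReLU(inplace=True))

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

block = DilatedGhostModule(16, 32)
print(block(torch.randn(1, 16, 56, 56)).shape)  # -> (1, 32, 56, 56)
```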
- CondenseNeXt: An Ultra-Efficient Deep Neural Network for Embedded Systems [0.0]
A Convolutional Neural Network (CNN) is a class of Deep Neural Network (DNN) widely used in the analysis of visual images captured by an image sensor.
In this paper, we propose a new variant of the deep convolutional neural network architecture to improve on the performance of existing CNN architectures for real-time inference on embedded systems.
arXiv Detail & Related papers (2021-12-01T18:20:52Z)
- Quantized Neural Networks via {-1, +1} Encoding Decomposition and Acceleration [83.84684675841167]
We propose a novel encoding scheme using {-1, +1} to decompose quantized neural networks (QNNs) into multi-branch binary networks.
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks; the decomposition identity is sketched after this entry.
arXiv Detail & Related papers (2021-06-18T03:11:15Z)
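The {-1, +1} decomposition above rests on a simple identity: an unsigned M-bit integer q with bits b_i in {0, 1} satisfies q = sum_i 2^(i-1) s_i + (2^M - 1)/2 once each bit is re-encoded as s_i = 2 b_i - 1 in {-1, +1}. The sketch below verifies that a quantized matrix-vector product then splits into M binary branches plus a constant offset; the paper's full scheme (scaling, training, acceleration) is more involved.

```python
import numpy as np

rng = np.random.default_rng(3)

def decompose_pm1(q, bits):
    """Decompose an unsigned `bits`-bit integer tensor into {-1,+1} planes.

    q = sum_i 2^(i-1) * s_i + (2^bits - 1)/2 with s_i in {-1,+1}, so a
    quantized layer becomes `bits` parallel binary branches plus a constant.
    """
    planes = [(((q >> i) & 1) * 2 - 1).astype(np.int64) for i in range(bits)]
    offset = (2 ** bits - 1) / 2
    return planes, offset

bits = 4
W = rng.integers(0, 2 ** bits, size=(3, 5))        # quantized weights
x = rng.normal(size=5)

planes, offset = decompose_pm1(W, bits)
# Each branch is a {-1,+1} matmul (cheap XNOR/popcount in hardware);
# the full quantized product is their scaled sum plus the offset term.
y_branches = sum(2.0 ** (i - 1) * (p @ x) for i, p in enumerate(planes))
y = y_branches + offset * x.sum()
print(np.allclose(y, W @ x))                       # -> True
```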
- Random Features for the Neural Tangent Kernel [57.132634274795066]
We propose an efficient feature map construction of the Neural Tangent Kernel (NTK) of a fully-connected ReLU network.
We show that the dimension of the resulting features is much smaller than that of other baseline feature-map constructions while achieving comparable error bounds, in both theory and practice; a sketch of the underlying object appears after this entry.
arXiv Detail & Related papers (2021-04-03T09:08:12Z)
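For orientation, the object being approximated above is the empirical NTK, which by definition is an inner product of parameter-gradient features. The sketch below computes that (large) exact feature map for a one-hidden-layer ReLU network; the paper's contribution is constructing much lower-dimensional random features with comparable guarantees, which this sketch does not attempt.

```python
import numpy as np

rng = np.random.default_rng(4)

def ntk_features(X, W, v):
    """Explicit feature map whose inner products give the empirical NTK.

    For f(x) = v . relu(Wx) / sqrt(m), NTK(x, x') = <df/dtheta(x), df/dtheta(x')>,
    so the flattened parameter gradient is itself a feature map.
    """
    m = W.shape[0]
    H = np.maximum(X @ W.T, 0)                      # relu(Wx), shape (n, m)
    dv = H / np.sqrt(m)                             # gradient wrt v
    A = (X @ W.T > 0) * v / np.sqrt(m)              # activation pattern * v
    dW = (A[:, :, None] * X[:, None, :]).reshape(len(X), -1)  # grad wrt W
    return np.hstack([dv, dW])

n, d, m = 8, 5, 64
X = rng.normal(size=(n, d))
W = rng.normal(size=(m, d))
v = rng.normal(size=m)
Phi = ntk_features(X, W, v)
K = Phi @ Phi.T                                     # empirical NTK Gram matrix
print(K.shape, np.allclose(K, K.T))
```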
- Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that, through careful design of the models and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks (one binarization strategy is sketched after this entry).
arXiv Detail & Related papers (2020-12-31T18:48:58Z)
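One common binarization strategy, sketched below in the GNN setting: XNOR-Net-style sign weights with a per-layer scale inside a GCN propagation step. This is an illustrative stand-in; the paper evaluates several strategies and training controls.

```python
import numpy as np

rng = np.random.default_rng(5)

def binarize(W):
    """XNOR-Net-style binarization: sign of the weights times a single
    per-layer scale (the mean absolute weight)."""
    alpha = np.abs(W).mean()
    return np.where(W >= 0, 1.0, -1.0) * alpha

def binary_gcn_layer(A, H, W):
    """One GCN-style propagation step with binarized weights:
    H' = relu(A_hat @ H @ binarize(W)), A_hat = symmetrically
    normalized adjacency (self-loops assumed already present)."""
    deg = A.sum(1)
    A_hat = A / np.sqrt(np.outer(deg, deg))
    return np.maximum(A_hat @ H @ binarize(W), 0)

n, d_in, d_out = 6, 8, 4
A = np.eye(n) + (rng.random((n, n)) < 0.3)          # adjacency + self-loops
A = ((A + A.T) > 0).astype(float)                   # make symmetric
H = rng.normal(size=(n, d_in))                      # node features
W = rng.normal(size=(d_in, d_out))
print(binary_gcn_layer(A, H, W).shape)              # -> (6, 4)
```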
- Wireless Localisation in WiFi using Novel Deep Architectures [4.541069830146568]
This paper studies the indoor localisation of WiFi devices based on a commodity chipset and standard channel sounding.
We present a novel shallow neural network (SNN) in which features are extracted from the channel state information corresponding to WiFi subcarriers received on different antennas (see the sketch after this entry).
arXiv Detail & Related papers (2020-10-16T22:48:29Z)
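A minimal sketch of the feature-extraction idea above: per-subcarrier amplitudes plus inter-antenna phase differences (absolute CSI phase is unstable across measurements, phase differences are not) fed to a shallow network. The specific features, layer sizes, and the omitted training against surveyed positions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def csi_features(csi):
    """Turn complex CSI (antennas x subcarriers) into a real feature vector:
    per-subcarrier amplitudes plus phase differences between adjacent
    antenna pairs. An illustrative feature set, not the paper's exact one."""
    amp = np.abs(csi).ravel()
    phase_diff = np.angle(csi[1:] * np.conj(csi[:-1])).ravel()
    return np.concatenate([amp, phase_diff])

# A shallow network: one hidden layer regressing 2-D position (untrained).
n_ant, n_sub = 4, 56
csi = rng.normal(size=(n_ant, n_sub)) + 1j * rng.normal(size=(n_ant, n_sub))
x = csi_features(csi)
W1 = rng.normal(0, 0.1, (x.size, 64))
W2 = rng.normal(0, 0.1, (64, 2))
xy = np.maximum(x @ W1, 0) @ W2   # training against surveyed positions omitted
print(x.shape, xy)
```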
- Optimizing Memory Placement using Evolutionary Graph Reinforcement Learning [56.83172249278467]
We introduce Evolutionary Graph Reinforcement Learning (EGRL), a method designed for large search spaces.
We train and validate our approach directly on the Intel NNP-I chip for inference.
We additionally achieve 28-78% speed-up compared to the native NNP-I compiler on all three workloads.
arXiv Detail & Related papers (2020-07-14T18:50:12Z)
- Large-Scale Gradient-Free Deep Learning with Recursive Local Representation Alignment [84.57874289554839]
Training deep neural networks on large-scale datasets requires significant hardware resources.
Backpropagation, the workhorse for training these networks, is an inherently sequential process that is difficult to parallelize.
We propose a neuro-biologically plausible alternative to backprop that can be used to train deep networks; a simplified sketch of the local-alignment idea appears after this entry.
arXiv Detail & Related papers (2020-02-10T16:20:02Z)
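A loose, simplified sketch of the local-alignment idea from the last entry: each layer receives a target carried backward through fixed random error matrices, and every weight update uses only locally available quantities rather than an end-to-end backprop chain. This captures the flavour of recursive local representation alignment, not the paper's exact recursive update rules.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy task: predict the XOR of the signs of the first two inputs.
n, d, h, c = 512, 20, 64, 1
X = rng.normal(size=(n, d))
y = (X[:, 0] * X[:, 1] > 0).astype(float)[:, None]

W1 = rng.normal(0, 0.1, (d, h))
W2 = rng.normal(0, 0.1, (h, c))
E = rng.normal(0, 0.1, (c, h))       # fixed random error-projection matrix

lr = 0.1
for _ in range(300):
    h1 = np.tanh(X @ W1)             # forward pass
    out = 1 / (1 + np.exp(-(h1 @ W2)))
    e2 = out - y                     # output-layer error
    t1 = h1 - e2 @ E                 # local target for the hidden layer
    e1 = h1 - t1                     # hidden-layer mismatch (= e2 @ E)
    # Purely local delta-rule updates; each uses only that layer's
    # inputs, activations, and local error signal.
    W2 -= lr * h1.T @ e2 / n
    W1 -= lr * X.T @ (e1 * (1 - h1 ** 2)) / n

print(((out > 0.5) == (y > 0.5)).mean())   # training accuracy
```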
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.