LLC: Accurate, Multi-purpose Learnt Low-dimensional Binary Codes
- URL: http://arxiv.org/abs/2106.01487v1
- Date: Wed, 2 Jun 2021 21:57:52 GMT
- Title: LLC: Accurate, Multi-purpose Learnt Low-dimensional Binary Codes
- Authors: Aditya Kusupati, Matthew Wallingford, Vivek Ramanujan, Raghav Somani,
Jae Sung Park, Krishna Pillutla, Prateek Jain, Sham Kakade, Ali Farhadi
- Abstract summary: We propose a novel method for Learning Low-dimensional binary Codes (LLC) for instances as well as classes.
Our method does not require any side-information, like annotated attributes or label meta-data.
We demonstrate that the learnt codes capture intrinsically important features in the data, by discovering an intuitive taxonomy over classes.
- Score: 55.32790803903619
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning binary representations of instances and classes is a classical
problem with several high potential applications. In modern settings, the
compression of high-dimensional neural representations to low-dimensional
binary codes is a challenging task and often requires large bit-codes to be
accurate. In this work, we propose a novel method for Learning Low-dimensional
binary Codes (LLC) for instances as well as classes. Our method does not
require any side-information, like annotated attributes or label meta-data, and
learns extremely low-dimensional binary codes (~20 bits for ImageNet-1K). The
learnt codes are super-efficient while still ensuring nearly optimal
classification accuracy for ResNet50 on ImageNet-1K. We demonstrate that the
learnt codes capture intrinsically important features in the data, by
discovering an intuitive taxonomy over classes. We further quantitatively
measure the quality of our codes by applying them to efficient image
retrieval as well as out-of-distribution (OOD) detection problems. For the
ImageNet-100 retrieval problem, our learnt binary codes outperform 16-bit
HashNet using only 10 bits and are also as accurate as 10-dimensional real
representations. Finally, our learnt binary codes can perform OOD detection,
out-of-the-box, as accurately as a baseline that needs ~3000 samples to tune
its threshold, while we require none. Code and pre-trained models are available
at https://github.com/RAIVNLab/LLC.
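As a rough illustration of why such codes are useful, classification with per-class binary codes reduces to nearest-neighbour search in Hamming space. The sketch below uses hand-made 20-bit codes and a hypothetical `hamming_classify` helper; it is not the paper's learnt-code pipeline, only the matching step that such codes enable:

```python
import numpy as np

# Hypothetical 20-bit class codes for a 4-class toy problem (the paper
# learns its codes end-to-end; these hand-made codes only illustrate
# how classification with binary class codes can work).
class_codes = np.array([
    [0, 1] * 10,      # class 0
    [1, 0] * 10,      # class 1
    [0] * 20,         # class 2
    [1] * 20,         # class 3
], dtype=np.uint8)

def hamming_classify(instance_code, class_codes):
    """Return the class whose code is nearest in Hamming distance."""
    dists = np.count_nonzero(class_codes != instance_code, axis=1)
    return int(np.argmin(dists))

# An instance code one bit-flip away from class 0's code is still
# assigned to class 0, since the other codes are many flips away.
query = class_codes[0].copy()
query[0] ^= 1
predicted = hamming_classify(query, class_codes)
```

The same `count_nonzero` distance, computed against a database of instance codes rather than class codes, gives Hamming-ranked image retrieval; thresholding the distance to the nearest class code is one simple route to the OOD detection use-case mentioned above.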
Related papers
- How Far Have We Gone in Binary Code Understanding Using Large Language Models [51.527805834378974]
We propose a benchmark to evaluate the effectiveness of Large Language Models (LLMs) in binary code understanding.
Our evaluations reveal that existing LLMs can understand binary code to a certain extent, thereby improving the efficiency of binary code analysis.
arXiv Detail & Related papers (2024-04-15T14:44:08Z)
- SkCoder: A Sketch-based Approach for Automatic Code Generation [44.39900916450189]
We propose a sketch-based code generation approach named SkCoder to mimic developers' code reuse behavior.
Given a natural language requirement, SkCoder retrieves a similar code snippet, extracts relevant parts as a code sketch, and edits the sketch into the desired code.
Experimental results show that SkCoder generates more correct programs and outperforms the state-of-the-art CodeT5-base by 30.30%, 35.39%, and 29.62% on three datasets.
arXiv Detail & Related papers (2023-02-13T07:05:39Z)
- Enhancing Semantic Code Search with Multimodal Contrastive Learning and Soft Data Augmentation [50.14232079160476]
We propose a new approach with multimodal contrastive learning and soft data augmentation for code search.
We conduct extensive experiments to evaluate the effectiveness of our approach on a large-scale dataset with six programming languages.
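The paper's exact loss is not given in this summary, but multimodal contrastive learning for code search is commonly built on an InfoNCE-style objective that pulls a query embedding towards its paired code embedding and pushes it away from the other codes in the batch. A minimal NumPy sketch, with `info_nce` and all embeddings hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

def info_nce(queries, codes, tau=0.07):
    """Generic InfoNCE-style contrastive loss: row i of `queries` (e.g. a
    natural-language embedding) is paired with row i of `codes`; every
    other row in the batch serves as a negative."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    c = codes / np.linalg.norm(codes, axis=1, keepdims=True)
    logits = (q @ c.T) / tau                      # scaled cosine similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.diagonal(log_prob).mean())

q = rng.normal(size=(4, 16))                 # query embeddings
pos = q + 0.01 * rng.normal(size=(4, 16))    # well-aligned code embeddings
neg = rng.normal(size=(4, 16))               # unrelated code embeddings

# Aligned pairs incur a much lower loss than random pairings.
loss_pos, loss_neg = info_nce(q, pos), info_nce(q, neg)
```

Minimising such a loss makes query and code embeddings of matching pairs close under cosine similarity, which is what ranking-based code search then exploits.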
arXiv Detail & Related papers (2022-04-07T08:49:27Z)
- Rate Coding or Direct Coding: Which One is Better for Accurate, Robust, and Energy-efficient Spiking Neural Networks? [4.872468969809081]
Most Spiking Neural Network (SNN) works focus on image classification tasks, and various coding techniques have therefore been proposed to convert an image into temporal binary spikes.
Among them, rate coding and direct coding are regarded as prospective candidates for building a practical SNN system.
We conduct a comprehensive analysis of the two codings from three perspectives: accuracy, adversarial robustness, and energy-efficiency.
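The two codings compared above can be sketched in a few lines. Rate coding samples a Bernoulli spike per timestep with probability equal to the pixel intensity; direct coding feeds the analog value unchanged at every timestep and lets the first layer generate spikes. A toy illustration (array shapes and timestep count are arbitrary choices, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100                                 # simulation timesteps
pixels = np.array([0.1, 0.5, 0.9])      # normalised intensities

# Rate coding: at each timestep a pixel fires with probability equal to
# its intensity, so spike rates approximate pixel values stochastically.
rate_spikes = (rng.random((T, pixels.size)) < pixels).astype(np.uint8)

# Direct coding: the analog intensity is repeated at every timestep;
# the first (convolutional) layer then emits the actual spikes.
direct_input = np.tile(pixels, (T, 1))

# Mean firing rate is a noisy estimate of the original intensities.
est = rate_spikes.mean(axis=0)
```

The stochastic sampling in rate coding is one reason the two schemes differ in accuracy, robustness, and energy, which is exactly the trade-off the paper analyses.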
arXiv Detail & Related papers (2022-01-31T16:18:07Z)
- InferCode: Self-Supervised Learning of Code Representations by Predicting Subtrees [17.461451218469062]
This paper proposes InferCode to overcome the limitation by adapting the self-supervised learning mechanism to build a source code model.
InferCode treats subtrees in ASTs as the labels for training code representations, without any human labeling effort or the overhead of expensive graph construction.
Compared to previous code learning techniques applied to the same downstream tasks, such as Code2Vec, Code2Seq, and ASTNN, our pre-trained InferCode model achieves higher performance.
arXiv Detail & Related papers (2020-12-13T10:33:41Z)
- COSEA: Convolutional Code Search with Layer-wise Attention [90.35777733464354]
We propose a new deep learning architecture, COSEA, which leverages convolutional neural networks with layer-wise attention to capture the code's intrinsic structural logic.
COSEA can achieve significant improvements over state-of-the-art methods on code search tasks.
arXiv Detail & Related papers (2020-10-19T13:53:38Z)
- Towards Demystifying Dimensions of Source Code Embeddings [5.211235558099913]
We present our preliminary results towards better understanding the contents of code2vec neural source code embeddings.
Our results suggest that the handcrafted features can perform very close to the highly-dimensional code2vec embeddings.
We also find that the code2vec embeddings are more resilient to the removal of dimensions with low information gains than the handcrafted features.
arXiv Detail & Related papers (2020-08-29T21:59:11Z)
- A Little Bit More: Bitplane-Wise Bit-Depth Recovery [43.99368427233748]
We propose a training and inference strategy that recovers the residual image bitplane-by-bitplane.
Our bitplane-wise learning framework has the advantage of allowing for multiple levels of supervision during training and is able to obtain state-of-the-art results.
arXiv Detail & Related papers (2020-05-03T14:06:33Z)
- Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need? [72.00712736992618]
We show that a simple baseline: learning a supervised or self-supervised representation on the meta-training set, outperforms state-of-the-art few-shot learning methods.
An additional boost can be achieved through the use of self-distillation.
We believe that our findings motivate a rethinking of few-shot image classification benchmarks and the associated role of meta-learning algorithms.
arXiv Detail & Related papers (2020-03-25T17:58:42Z)
- Auto-Encoding Twin-Bottleneck Hashing [141.5378966676885]
This paper proposes an efficient and adaptive code-driven graph, which is updated by decoding in the context of an auto-encoder.
Experiments on benchmarked datasets clearly show the superiority of our framework over the state-of-the-art hashing methods.
arXiv Detail & Related papers (2020-02-27T05:58:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.