Unsupervised Representation Learning via Neural Activation Coding
- URL: http://arxiv.org/abs/2112.04014v1
- Date: Tue, 7 Dec 2021 21:59:45 GMT
- Title: Unsupervised Representation Learning via Neural Activation Coding
- Authors: Yookoon Park, Sangho Lee, Gunhee Kim, David M. Blei
- Abstract summary: We present neural activation coding (NAC) as a novel approach for learning deep representations from unlabeled data for downstream applications.
We show that NAC learns both continuous and discrete representations of data, which we respectively evaluate on two downstream tasks.
- Score: 66.65837512531729
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present neural activation coding (NAC) as a novel approach for learning
deep representations from unlabeled data for downstream applications. We argue
that the deep encoder should maximize its nonlinear expressivity on the data
for downstream predictors to take full advantage of its representation power.
To this end, NAC maximizes the mutual information between activation patterns
of the encoder and the data over a noisy communication channel. We show that
learning for a noise-robust activation code increases the number of distinct
linear regions of ReLU encoders, hence the maximum nonlinear expressivity. More
interestingly, NAC learns both continuous and discrete representations of data,
which we respectively evaluate on two downstream tasks: (i) linear
classification on CIFAR-10 and ImageNet-1K and (ii) nearest neighbor retrieval
on CIFAR-10 and FLICKR-25K. Empirical results show that NAC attains better or
comparable performance on both tasks over recent baselines including SimCLR and
DistillHash. In addition, NAC pretraining provides significant benefits to the
training of deep generative models. Our code is available at
https://github.com/yookoon/nac.
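As a rough illustration of the mechanism described in the abstract, the sketch below treats the sign pattern of a ReLU encoder's pre-activations as a binary code, sends two independently corrupted copies of it through a simple bit-flip channel, and applies an InfoNCE-style agreement loss so that the code remains decodable under noise. The encoder architecture, the noise model, and the loss are illustrative assumptions and not the exact NAC objective from the paper.

```python
# Illustrative sketch of a noise-robust activation-coding objective.
# The encoder, noise model, and loss below are assumptions for demonstration,
# not the exact formulation used in the NAC paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReLUEncoder(nn.Module):
    def __init__(self, in_dim=3072, hidden=512, code_dim=256):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, code_dim))

    def forward(self, x):
        pre = self.body(x)           # pre-activations of the code layer
        code = (pre > 0).float()     # binary activation pattern (discrete code)
        return pre, code

def noisy_code_loss(pre, flip_prob=0.1, temperature=0.1):
    """Two independently corrupted 'views' of the same activation pattern
    should still identify each other within the batch (InfoNCE-style)."""
    def corrupt(p):
        flip = (torch.rand_like(p) < flip_prob).float()
        return torch.tanh(p) * (1.0 - 2.0 * flip)  # soft sign with random flips
    z1 = F.normalize(corrupt(pre), dim=1)
    z2 = F.normalize(corrupt(pre), dim=1)
    logits = z1 @ z2.t() / temperature
    labels = torch.arange(pre.size(0))
    return F.cross_entropy(logits, labels)

if __name__ == "__main__":
    enc = ReLUEncoder()
    x = torch.randn(32, 3072)        # stand-in for flattened CIFAR-10 images
    pre, code = enc(x)
    loss = noisy_code_loss(pre)
    loss.backward()
    print(loss.item(), code.shape)
```

The intuition, following the abstract, is that a code which survives random flips pushes the encoder to place inputs in many well-separated linear regions, which is the nonlinear-expressivity argument made above.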
Related papers
- Offline Writer Identification Using Convolutional Neural Network
Activation Features [6.589323210821262]
Convolutional neural networks (CNNs) have recently become the state-of-the-art tool for large-scale image classification.
In this work we propose the use of activation features from CNNs as local descriptors for writer identification.
We evaluate our method on two publicly available datasets: the ICDAR 2013 benchmark database and the CVL dataset.
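A minimal sketch of using intermediate CNN activations as local descriptors is shown below; the ResNet-18 backbone, the choice of layer, and the per-location reshaping are assumptions made for illustration rather than the paper's exact pipeline for handwriting images.

```python
# Illustrative sketch: intermediate CNN activations as local descriptors.
# The backbone, layer choice, and pooling-free reshaping are assumptions
# for demonstration, not the pipeline from the writer-identification paper.
import torch
import torchvision.models as models

def activation_descriptors(images, layer="layer3"):
    """Return one descriptor per spatial location of the chosen feature map."""
    backbone = models.resnet18()                 # any CNN backbone would do
    feats = {}
    handle = getattr(backbone, layer).register_forward_hook(
        lambda module, inputs, output: feats.update(out=output))
    backbone.eval()
    with torch.no_grad():
        backbone(images)
    handle.remove()
    fmap = feats["out"]                          # (N, C, H, W)
    n, c, h, w = fmap.shape
    # one C-dimensional local descriptor per spatial position
    return fmap.permute(0, 2, 3, 1).reshape(n, h * w, c)

if __name__ == "__main__":
    imgs = torch.randn(2, 3, 224, 224)           # stand-in for document images
    desc = activation_descriptors(imgs)
    print(desc.shape)                            # (2, 196, 256) for layer3
```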
arXiv Detail & Related papers (2024-02-26T21:16:14Z) - Neural Attentive Circuits [93.95502541529115]
We introduce a general-purpose yet modular neural architecture called Neural Attentive Circuits (NACs).
NACs learn the parameterization and a sparse connectivity of neural modules without using domain knowledge.
NACs achieve an 8x speedup at inference time while losing less than 3% performance.
arXiv Detail & Related papers (2022-10-14T18:00:07Z) - Neural Implicit Dictionary via Mixture-of-Expert Training [111.08941206369508]
We present a generic INR framework that achieves both data and training efficiency by learning a Neural Implicit Dictionary (NID)
Our NID assembles a group of coordinate-based implicit networks which are tuned to span the desired function space.
Our experiments show that NID can reconstruct 2D images or 3D scenes roughly 2 orders of magnitude faster while using up to 98% less input data.
arXiv Detail & Related papers (2022-07-08T05:07:19Z) - Learning Representation from Neural Fisher Kernel with Low-rank
Approximation [16.14794818755318]
We first define the Neural Fisher Kernel (NFK), which is the Fisher Kernel applied to neural networks.
We show that NFK can be computed for both supervised and unsupervised learning models.
We then propose an efficient algorithm that computes a low rank approximation of NFK, which scales to large datasets and networks.
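The sketch below illustrates the generic shape of such a construction: per-example parameter gradients serve as Fisher-kernel features, and a truncated SVD produces a low-rank embedding whose inner products approximate the kernel. The toy model, the implicit identity Fisher preconditioner, and the plain SVD are simplifying assumptions, not the paper's algorithm.

```python
# Rough sketch of a Fisher-kernel-style representation with a low-rank
# approximation. The tiny model, the identity Fisher preconditioner, and the
# plain truncated SVD are simplifying assumptions, not the paper's algorithm.
import torch
import torch.nn as nn
import torch.nn.functional as F

def per_example_grads(model, x, y):
    """Stack the parameter gradient of the loss for each example."""
    rows = []
    for xi, yi in zip(x, y):
        loss = F.cross_entropy(model(xi[None]), yi[None])
        grads = torch.autograd.grad(loss, list(model.parameters()))
        rows.append(torch.cat([g.flatten() for g in grads]))
    return torch.stack(rows)                      # (N, num_params)

def low_rank_embedding(G, rank=16):
    """Features phi with phi @ phi.T approximating the Gram matrix G @ G.T."""
    U, S, _ = torch.linalg.svd(G, full_matrices=False)
    return U[:, :rank] * S[:rank]                 # (N, rank)

if __name__ == "__main__":
    model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 5))
    x, y = torch.randn(64, 20), torch.randint(0, 5, (64,))
    G = per_example_grads(model, x, y)
    phi = low_rank_embedding(G)
    print(phi.shape)                              # low-rank kernel features
```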
arXiv Detail & Related papers (2022-02-04T02:28:02Z) - Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
arXiv Detail & Related papers (2021-02-07T14:19:07Z) - Coresets for Robust Training of Neural Networks against Noisy Labels [78.03027938765746]
We propose a novel approach with strong theoretical guarantees for robust training of deep networks trained with noisy labels.
We select weighted subsets (coresets) of clean data points that provide an approximately low-rank Jacobian matrix.
Our experiments corroborate our theory and demonstrate that deep networks trained on our subsets achieve significantly superior performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-11-15T04:58:11Z) - Dataset Meta-Learning from Kernel Ridge-Regression [18.253682891579402]
Kernel Inducing Points (KIP) can compress datasets by one or two orders of magnitude.
KIP-learned datasets are transferable to the training of finite-width neural networks even beyond the lazy-training regime.
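A toy version of the inducing-point idea is sketched below: a small synthetic support set is optimized so that kernel ridge regression fit on it predicts the full training set well. The RBF kernel and the hyperparameters are assumptions; the paper builds on kernel ridge regression with kernels derived from neural networks.

```python
# Toy sketch of the inducing-point idea: learn a tiny synthetic support set so
# that kernel ridge regression fit on it predicts the real training data.
# The RBF kernel, sizes, and hyperparameters are illustrative assumptions.
import torch

def rbf(a, b, gamma=0.1):
    return torch.exp(-gamma * torch.cdist(a, b) ** 2)

def krr_predict(x_support, y_support, x_query, reg=1e-3):
    k_ss = rbf(x_support, x_support)
    k_qs = rbf(x_query, x_support)
    alpha = torch.linalg.solve(k_ss + reg * torch.eye(len(x_support)), y_support)
    return k_qs @ alpha

if __name__ == "__main__":
    x_train, y_train = torch.randn(256, 10), torch.randn(256, 1)
    # 16 learnable "inducing" points stand in for the 256-example training set
    x_syn = torch.randn(16, 10, requires_grad=True)
    y_syn = torch.zeros(16, 1, requires_grad=True)
    opt = torch.optim.Adam([x_syn, y_syn], lr=1e-2)
    for _ in range(200):
        opt.zero_grad()
        loss = ((krr_predict(x_syn, y_syn, x_train) - y_train) ** 2).mean()
        loss.backward()
        opt.step()
    print(loss.item())
```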
arXiv Detail & Related papers (2020-10-30T18:54:04Z) - A Neural Network Approach for Online Nonlinear Neyman-Pearson
Classification [3.6144103736375857]
We propose a novel Neyman-Pearson (NP) classifier that is, for the first time in the literature, both online and nonlinear.
The proposed classifier operates on a binary labeled data stream in an online manner, and maximizes the detection power while keeping the false positive rate at a user-specified, controllable level.
Our algorithm is suitable for large-scale data applications and provides adequate false positive rate control with real-time processing.
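For intuition only, the sketch below shows a generic online Neyman-Pearson-style construction: a linear scorer is updated from a labeled stream with SGD while a decision threshold is nudged on negative examples toward a target false positive rate. It is a simplified stand-in, not the nonlinear algorithm proposed in the paper.

```python
# Generic sketch of online Neyman-Pearson-style classification: a linear scorer
# is updated online while a decision threshold tracks a target false positive
# rate. This is a simplified stand-in, not the paper's nonlinear algorithm.
import numpy as np

class OnlineNPClassifier:
    def __init__(self, dim, target_fpr=0.05, lr=0.05, thr_lr=0.01):
        self.w = np.zeros(dim)
        self.threshold = 0.0
        self.target_fpr, self.lr, self.thr_lr = target_fpr, lr, thr_lr

    def score(self, x):
        return float(self.w @ x)

    def predict(self, x):
        return int(self.score(x) > self.threshold)

    def update(self, x, y):
        """y = 1 for the target (positive) class, 0 for the negative class."""
        # logistic-loss SGD step on the linear scorer
        p = 1.0 / (1.0 + np.exp(-self.score(x)))
        self.w += self.lr * (y - p) * x
        # on negative examples, nudge the threshold toward the target FPR:
        # raise it after a false positive, lower it slightly otherwise
        if y == 0:
            fired = float(self.score(x) > self.threshold)
            self.threshold += self.thr_lr * (fired - self.target_fpr)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clf = OnlineNPClassifier(dim=5)
    for _ in range(5000):
        y = int(rng.random() < 0.5)
        x = rng.normal(loc=1.0 if y else -1.0, size=5)
        clf.update(x, y)
    print(clf.threshold, clf.predict(np.ones(5)))
```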
arXiv Detail & Related papers (2020-06-14T20:00:25Z) - Learning to Hash with Graph Neural Networks for Recommender Systems [103.82479899868191]
Graph representation learning has attracted much attention in supporting high quality candidate search at scale.
Despite its effectiveness in learning embedding vectors for objects in the user-item interaction network, the computational costs to infer users' preferences in continuous embedding space are tremendous.
We propose a simple yet effective discrete representation learning framework to jointly learn continuous and discrete codes.
arXiv Detail & Related papers (2020-03-04T06:59:56Z)
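The sketch below illustrates one common way to jointly learn continuous and discrete codes: an embedding is binarized with a straight-through sign estimator and retrieval is done by Hamming distance. The graph-neural-network component of the cited work is omitted and all names are illustrative assumptions.

```python
# Generic sketch of jointly learning continuous embeddings and discrete hash
# codes via a straight-through sign estimator; the graph-neural-network part
# of the cited paper is omitted and all names here are illustrative.
import torch
import torch.nn as nn

class HashHead(nn.Module):
    def __init__(self, in_dim=128, bits=32):
        super().__init__()
        self.proj = nn.Linear(in_dim, bits)

    def forward(self, h):
        z = torch.tanh(self.proj(h))              # continuous relaxation
        b = torch.sign(z)                         # discrete code in {-1, +1}
        # straight-through: forward pass uses the binary code,
        # gradients flow through the tanh relaxation
        return z, b.detach() + z - z.detach()

def hamming_rank(query_code, db_codes):
    """Rank database items by Hamming distance to the query (codes in {-1,+1})."""
    dists = (query_code.numel() - query_code @ db_codes.t()) / 2
    return torch.argsort(dists, dim=1)

if __name__ == "__main__":
    head = HashHead()
    h = torch.randn(100, 128)                     # embeddings from any encoder
    z, codes = head(h)
    order = hamming_rank(codes[:1], codes)
    print(order[0, :5])                           # nearest items to the query
```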