Set Based Stochastic Subsampling
- URL: http://arxiv.org/abs/2006.14222v4
- Date: Mon, 30 May 2022 05:59:57 GMT
- Title: Set Based Stochastic Subsampling
- Authors: Bruno Andreis, Seanie Lee, A. Tuan Nguyen, Juho Lee, Eunho Yang, Sung Ju Hwang
- Abstract summary: We propose a set-based two-stage end-to-end neural subsampling model that is jointly optimized with an arbitrary downstream task network.
We show that it outperforms the relevant baselines under low subsampling rates on a variety of tasks including image classification, image reconstruction, function reconstruction and few-shot classification.
- Score: 85.5331107565578
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep models are designed to operate on huge volumes of high dimensional data
such as images. In order to reduce the volume of data these models must
process, we propose a set-based two-stage end-to-end neural subsampling model
that is jointly optimized with an \textit{arbitrary} downstream task network
(e.g. classifier). In the first stage, we efficiently subsample
\textit{candidate elements} using conditionally independent Bernoulli random
variables by capturing coarse-grained global information using set encoding
functions, followed by conditionally dependent autoregressive subsampling of
the candidate elements using Categorical random variables by modeling pair-wise
interactions using set attention networks in the second stage. We apply our
method to feature and instance selection and show that it outperforms the
relevant baselines under low subsampling rates on a variety of tasks including
image classification, image reconstruction, function reconstruction and
few-shot classification. Additionally, for nonparametric models such as Neural
Processes, which need to leverage the whole training data at inference time,
we show that our method enhances the scalability of these models.
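To make the two-stage procedure concrete, below is a minimal PyTorch sketch of the idea described in the abstract; it is illustrative, not the authors' implementation. A set encoding (mean pooling here) captures coarse-grained global information that parameterizes conditionally independent Bernoulli variables for candidate selection, and an attention module then drives autoregressive Categorical draws over the candidates. All module names, dimensions, the relaxed-Bernoulli-plus-top-k trick, and the lack of masking for already-picked elements are assumptions.

```python
import torch
import torch.nn as nn

class TwoStageSubsampler(nn.Module):
    """Minimal sketch of set-based two-stage subsampling (illustrative, not the paper's code)."""

    def __init__(self, dim, k_candidates, k_final, num_heads=4):
        super().__init__()
        self.k_candidates, self.k_final = k_candidates, k_final
        self.elem_enc = nn.Linear(dim, dim)
        self.bern_score = nn.Linear(2 * dim, 1)  # Bernoulli logits from element + global set code
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)  # dim % num_heads == 0
        self.cat_score = nn.Linear(dim, 1)       # Categorical logits for the autoregressive stage

    def forward(self, x):
        # x: (batch, n, dim) -- a set of n elements per example
        h = self.elem_enc(x)
        global_code = h.mean(dim=1, keepdim=True).expand_as(h)   # coarse-grained set summary
        logits = self.bern_score(torch.cat([h, global_code], dim=-1)).squeeze(-1)

        # Stage 1: conditionally independent (relaxed) Bernoulli draws; top-k of the samples
        # stands in for stochastic candidate selection in this sketch.
        samples = torch.distributions.RelaxedBernoulli(torch.tensor(0.5), logits=logits).rsample()
        cand_idx = samples.topk(self.k_candidates, dim=1).indices
        cand = torch.gather(x, 1, cand_idx.unsqueeze(-1).expand(-1, -1, x.size(-1)))

        # Stage 2: autoregressive Categorical draws over the candidates, modelling pair-wise
        # interactions with attention (a full version would mask out prior picks).
        picked = []
        for _ in range(self.k_final):
            ctx, _ = self.attn(cand, cand, cand)
            cat_logits = self.cat_score(ctx).squeeze(-1)
            choice = torch.distributions.Categorical(logits=cat_logits).sample()   # (batch,)
            picked.append(torch.gather(
                cand, 1, choice.view(-1, 1, 1).expand(-1, 1, cand.size(-1))))
        return torch.cat(picked, dim=1)   # (batch, k_final, dim) subsampled set

subset = TwoStageSubsampler(dim=64, k_candidates=32, k_final=8)(torch.randn(2, 100, 64))
```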
Related papers
- Reinforcing Pre-trained Models Using Counterfactual Images [54.26310919385808]
This paper proposes a novel framework to reinforce classification models using counterfactual images generated with language guidance.
We identify model weaknesses by testing the model using the counterfactual image dataset.
We employ the counterfactual images as an augmented dataset to fine-tune and reinforce the classification model.
arXiv Detail & Related papers (2024-06-19T08:07:14Z) - Generative Neural Fields by Mixtures of Neural Implicit Functions [43.27461391283186]
We propose a novel approach to learning the generative neural fields represented by linear combinations of implicit basis networks.
Our algorithm learns basis networks in the form of implicit neural representations and their coefficients in a latent space by either conducting meta-learning or adopting auto-decoding paradigms.
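As a rough illustration of the "linear combinations of implicit basis networks" idea in this summary, here is a hedged PyTorch sketch; the basis architecture, latent dimension, and coefficient head are assumptions, and the meta-learning / auto-decoding of the latent codes is omitted.

```python
import torch
import torch.nn as nn

class MixtureNeuralField(nn.Module):
    """Sketch: the field value at a coordinate is a linear combination of implicit basis networks."""

    def __init__(self, num_bases=8, coord_dim=2, hidden=64, out_dim=3, latent_dim=32):
        super().__init__()
        # Shared basis networks, each an implicit neural representation mapping coordinates to values.
        self.bases = nn.ModuleList(
            nn.Sequential(nn.Linear(coord_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim))
            for _ in range(num_bases)
        )
        # Mixture coefficients predicted from a per-sample latent code
        # (learned via meta-learning or auto-decoding in the paper).
        self.coeffs = nn.Linear(latent_dim, num_bases)

    def forward(self, coords, z):
        # coords: (n_points, coord_dim), z: (latent_dim,)
        w = self.coeffs(z)                                       # (num_bases,) linear weights
        outs = torch.stack([b(coords) for b in self.bases], 0)   # (num_bases, n_points, out_dim)
        return torch.einsum("b,bno->no", w, outs)                # weighted combination

rgb = MixtureNeuralField()(torch.rand(100, 2), torch.randn(32))  # field queried at 100 coordinates
```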
arXiv Detail & Related papers (2023-10-30T11:41:41Z) - Latent Autoregressive Source Separation [5.871054749661012]
This paper introduces vector-quantized Latent Autoregressive Source Separation (i.e., de-mixing an input signal into its constituent sources) without requiring additional gradient-based optimization or modifications of existing models.
Our separation method relies on the Bayesian formulation in which the autoregressive models are the priors, and a discrete (non-parametric) likelihood function is constructed by performing frequency counts over latent sums of addend tokens.
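A very small sketch of the frequency-count likelihood mentioned above, assuming a toy VQ codebook and two-source mixtures; the function names and the plain prior vector (standing in for the autoregressive prior) are hypothetical.

```python
import torch

def build_mixture_likelihood(codebook):
    """Sketch: p(mixture token | source token) via frequency counts over latent sums of addend tokens."""
    K = codebook.shape[0]
    counts = torch.zeros(K, K)                                  # counts[source, mixture]
    for s in range(K):
        for t in range(K):
            mix = codebook[s] + codebook[t]                     # latent sum of two addend tokens
            m = torch.cdist(mix[None], codebook).argmin()       # quantize the sum back to a token
            counts[s, m] += 1
    return counts / counts.sum(dim=1, keepdim=True)             # row-normalized frequencies

def posterior_over_source(mix_token, likelihood, prior):
    """Bayes rule with a prior over source tokens and the discrete likelihood."""
    post = likelihood[:, mix_token] * prior
    return post / post.sum()

lik = build_mixture_likelihood(torch.randn(16, 8))              # toy 16-token codebook
print(posterior_over_source(3, lik, torch.full((16,), 1 / 16)))
```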
arXiv Detail & Related papers (2023-01-09T17:32:00Z) - ClusTR: Exploring Efficient Self-attention via Clustering for Vision Transformers [70.76313507550684]
We propose a content-based sparse attention method, as an alternative to dense self-attention.
Specifically, we cluster and then aggregate key and value tokens, as a content-based method of reducing the total token count.
The resulting clustered-token sequence retains the semantic diversity of the original signal, but can be processed at a lower computational cost.
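To illustrate the cluster-then-aggregate mechanism, a rough single-head sketch follows; the naive k-means loop and mean pooling are placeholders for whatever clustering and aggregation the paper actually uses.

```python
import torch
import torch.nn.functional as F

def clustered_attention(q, k, v, num_clusters=16, kmeans_iters=5):
    """Sketch of content-based sparse attention: cluster keys, pool keys/values per cluster, attend."""
    n, d = k.shape                                    # q, k, v: (n_tokens, d), all with the same d here
    centroids = k[torch.randperm(n)[:num_clusters]].clone()
    for _ in range(kmeans_iters):                     # a few naive k-means steps over the keys
        assign = torch.cdist(k, centroids).argmin(dim=1)
        for c in range(num_clusters):
            members = assign == c
            if members.any():
                centroids[c] = k[members].mean(dim=0)

    # Aggregate keys and values per cluster, shrinking the token count from n to num_clusters.
    k_red = centroids
    v_red = torch.stack([
        v[assign == c].mean(dim=0) if (assign == c).any() else torch.zeros(d)
        for c in range(num_clusters)
    ])

    # Standard scaled dot-product attention over the reduced sequence.
    weights = F.softmax(q @ k_red.t() / d ** 0.5, dim=-1)
    return weights @ v_red

out = clustered_attention(torch.randn(256, 64), torch.randn(256, 64), torch.randn(256, 64))
```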
arXiv Detail & Related papers (2022-08-28T04:18:27Z) - FewGAN: Generating from the Joint Distribution of a Few Images [95.6635227371479]
We introduce FewGAN, a generative model for generating novel, high-quality and diverse images.
FewGAN is a hierarchical patch-GAN that applies quantization at the first coarse scale, followed by a pyramid of residual fully convolutional GANs at finer scales.
In an extensive set of experiments, it is shown that FewGAN outperforms baselines both quantitatively and qualitatively.
arXiv Detail & Related papers (2022-07-18T07:11:28Z) - Dynamically-Scaled Deep Canonical Correlation Analysis [77.34726150561087]
Canonical Correlation Analysis (CCA) extracts features from two views of the data by finding maximally correlated linear projections of them.
We introduce a novel dynamic scaling method for training an input-dependent canonical correlation model.
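For readers unfamiliar with CCA, a compact sketch of the classical linear version follows; the paper's actual contribution, input-dependent scaling of a deep CCA model, is not reproduced here, and the regularization constant is an assumption.

```python
import torch

def inv_sqrt(S):
    # Symmetric inverse square root via eigendecomposition.
    vals, vecs = torch.linalg.eigh(S)
    return vecs @ torch.diag(vals.clamp_min(1e-12).rsqrt()) @ vecs.t()

def linear_cca(X, Y, k=2, reg=1e-4):
    """Sketch of classical CCA: projections of two centered views with maximal correlation."""
    X, Y = X - X.mean(0), Y - Y.mean(0)
    n = X.shape[0]
    Sxx = X.t() @ X / (n - 1) + reg * torch.eye(X.shape[1])
    Syy = Y.t() @ Y / (n - 1) + reg * torch.eye(Y.shape[1])
    Sxy = X.t() @ Y / (n - 1)
    # Singular values of the whitened cross-covariance are the canonical correlations.
    T = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    U, s, Vh = torch.linalg.svd(T)
    Wx, Wy = inv_sqrt(Sxx) @ U[:, :k], inv_sqrt(Syy) @ Vh.t()[:, :k]
    return Wx, Wy, s[:k]                       # projection matrices and top-k correlations

Wx, Wy, corrs = linear_cca(torch.randn(500, 10), torch.randn(500, 8))
```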
arXiv Detail & Related papers (2022-03-23T12:52:49Z) - Random Ferns for Semantic Segmentation of PolSAR Images [0.0]
This paper extends the Random Fern framework to the semantic segmentation of polarimetric synthetic aperture radar images.
Two distinct optimization strategies are proposed.
Experiments show that the approach achieves results similar to those of a more complex Random Forest model.
arXiv Detail & Related papers (2022-02-07T20:22:57Z) - Pre-Trained Models for Heterogeneous Information Networks [57.78194356302626]
We propose a self-supervised pre-training and fine-tuning framework, PF-HIN, to capture the features of a heterogeneous information network.
PF-HIN consistently and significantly outperforms state-of-the-art alternatives on each of the evaluated tasks, across four datasets.
arXiv Detail & Related papers (2020-07-07T03:36:28Z) - Contextual Encoder-Decoder Network for Visual Saliency Prediction [42.047816176307066]
We propose an approach based on a convolutional neural network pre-trained on a large-scale image classification task.
We combine the resulting representations with global scene information for accurately predicting visual saliency.
Compared to state-of-the-art approaches, the network is based on a lightweight image classification backbone.
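As a loose illustration of the encoder-decoder design sketched in this summary: local features from a (here stand-in) backbone are fused with a globally pooled scene descriptor and decoded to a per-pixel saliency map. The layer choices are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SaliencyNet(nn.Module):
    """Sketch: encoder features fused with a global scene descriptor, decoded to a saliency map."""

    def __init__(self, ch=32):
        super().__init__()
        # Stand-in encoder; the paper builds on a lightweight backbone
        # pre-trained on a large-scale image classification task.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 1),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
        )

    def forward(self, img):
        feats = self.encoder(img)                                 # local features (stride 4)
        ctx = feats.mean(dim=(2, 3), keepdim=True)                # global scene information
        fused = torch.cat([feats, ctx.expand_as(feats)], dim=1)   # broadcast and concatenate
        return torch.sigmoid(self.decoder(fused))                 # per-pixel saliency in [0, 1]

saliency = SaliencyNet()(torch.rand(1, 3, 224, 224))              # -> (1, 1, 224, 224)
```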
arXiv Detail & Related papers (2019-02-18T16:15:25Z)