Vector Embeddings with Subvector Permutation Invariance using a Triplet
Enhanced Autoencoder
- URL: http://arxiv.org/abs/2011.09550v1
- Date: Wed, 18 Nov 2020 21:24:07 GMT
- Title: Vector Embeddings with Subvector Permutation Invariance using a Triplet
Enhanced Autoencoder
- Authors: Mark Alan Matties
- Abstract summary: In this paper, we use an autoencoder enhanced with triplet loss to promote the clustering of vectors that are related through permutations of constituent subvectors.
We can then use these invariant embeddings as inputs to other problems, like classification and clustering, and improve detection accuracy in those problems.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The use of deep neural network (DNN) autoencoders (AEs) has recently exploded
due to their wide applicability. However, the embedding representation produced
by a standard DNN AE that is trained to minimize only the reconstruction error
does not always reveal more subtle patterns in the data. Sometimes, the
autoencoder needs further direction in the form of one or more additional loss
functions. In this paper, we use an autoencoder enhanced with triplet loss to
promote the clustering of vectors that are related through permutations of
constituent subvectors. With this approach, we can create an embedding of the
vector that is nearly invariant to such permutations. We can then use these
invariant embeddings as inputs to other problems, like classification and
clustering, and improve detection accuracy in those problems.
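To make the setup concrete, here is a minimal sketch of the idea in PyTorch, assuming each input vector is a concatenation of fixed-length subvectors, that a positive example is formed by permuting the anchor's subvectors, and that a random unrelated vector serves as the negative. All names, dimensions, and hyperparameters are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch (PyTorch): an autoencoder trained with reconstruction loss plus
# triplet loss, where positives are subvector permutations of the anchor.
# Dimensions, layer sizes, and the margin are illustrative assumptions.
import torch
import torch.nn as nn

SUBVEC_LEN, N_SUBVECS = 8, 4          # assumed: each vector = 4 subvectors of length 8
DIM = SUBVEC_LEN * N_SUBVECS          # full input dimension (32)
EMB = 16                              # assumed embedding size


class TripletAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(DIM, 64), nn.ReLU(), nn.Linear(64, EMB))
        self.decoder = nn.Sequential(nn.Linear(EMB, 64), nn.ReLU(), nn.Linear(64, DIM))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)


def permute_subvectors(x):
    """Form a positive example by randomly reordering the constituent subvectors."""
    parts = x.view(x.size(0), N_SUBVECS, SUBVEC_LEN)
    perm = torch.randperm(N_SUBVECS)
    return parts[:, perm, :].reshape(x.size(0), DIM)


model = TripletAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
recon_loss = nn.MSELoss()
triplet_loss = nn.TripletMarginLoss(margin=1.0)

for _ in range(100):                          # toy loop on random data
    anchor = torch.randn(32, DIM)
    positive = permute_subvectors(anchor)     # related through subvector permutation
    negative = torch.randn(32, DIM)           # assumed: an unrelated vector as negative
    z_a, rec_a = model(anchor)
    z_p, _ = model(positive)
    z_n, _ = model(negative)
    # Combined objective: reconstruct the anchor while pulling permuted variants
    # together in the embedding space and pushing unrelated vectors apart.
    loss = recon_loss(rec_a, anchor) + triplet_loss(z_a, z_p, z_n)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

At inference time only the encoder would be kept; its nearly permutation-invariant embeddings can then feed downstream classification or clustering models, as described above.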
Related papers
- Restructuring Vector Quantization with the Rotation Trick [36.03697966463205]
Vector Quantized Variational AutoEncoders (VQ-VAEs) are designed to compress a continuous input to a discrete latent space and reconstruct it with minimal distortion.
As vector quantization is non-differentiable, the gradient to the encoder flows around the vector quantization layer rather than through it in a straight-through approximation (a minimal sketch of this baseline appears after this list).
We propose a way to propagate gradients through the vector quantization layer of VQ-VAEs.
arXiv Detail & Related papers (2024-10-08T23:39:34Z) - Breaking the Attention Bottleneck [0.0]
This paper develops a generative function that serves as a replacement for attention or activation.
It retains the auto-regressive character by comparing each token with the previous one.
The concept of attention replacement is distributed under the AGPL v3 license at https://gitlab.com/Bachstelze/causal_generation.
arXiv Detail & Related papers (2024-06-16T12:06:58Z) - Rank Reduction Autoencoders -- Enhancing interpolation on nonlinear manifolds [3.180674374101366]
Rank Reduction Autoencoder (RRAE) is an autoencoder with an enlarged latent space.
Two formulations are presented, a strong and a weak one, that build a reduced basis accurately representing the latent space.
We show the efficiency of our formulations by applying them to interpolation tasks on nonlinear manifolds and comparing the results to those of other autoencoders.
arXiv Detail & Related papers (2024-05-22T20:33:09Z) - GEC-DePenD: Non-Autoregressive Grammatical Error Correction with
Decoupled Permutation and Decoding [52.14832976759585]
Grammatical error correction (GEC) is an important NLP task that is usually solved with autoregressive sequence-to-sequence models.
We propose a novel non-autoregressive approach to GEC that decouples the architecture into a permutation network and a decoding network.
We show that the resulting network improves over previously known non-autoregressive methods for GEC.
arXiv Detail & Related papers (2023-11-14T14:24:36Z) - CORE: Common Random Reconstruction for Distributed Optimization with
Provable Low Communication Complexity [110.50364486645852]
Communication complexity has become a major bottleneck for speeding up training and scaling up the number of machines.
We propose Common Random Reconstruction (CORE), which can be used to compress information transmitted between machines.
arXiv Detail & Related papers (2023-09-23T08:45:27Z) - Permutation-Invariant Set Autoencoders with Fixed-Size Embeddings for
Multi-Agent Learning [7.22614468437919]
We introduce a Permutation-Invariant Set Autoencoder (PISA)
PISA produces encodings with significantly lower reconstruction error than existing baselines.
We demonstrate its usefulness in a multi-agent application.
arXiv Detail & Related papers (2023-02-24T18:59:13Z) - Improving the Sample-Complexity of Deep Classification Networks with
Invariant Integration [77.99182201815763]
Leveraging prior knowledge on intraclass variance due to transformations is a powerful method to improve the sample complexity of deep neural networks.
We propose a novel monomial selection algorithm based on pruning methods that allows application to more complex problems.
We demonstrate the improved sample complexity on the Rotated-MNIST, SVHN and CIFAR-10 datasets.
arXiv Detail & Related papers (2022-02-08T16:16:11Z) - Metalearning: Sparse Variable-Structure Automata [0.0]
We propose a metalearning approach to increase the number of basis vectors used in dynamic sparse coding on the fly.
An actor-critic algorithm is deployed to automatically choose an appropriate feature dimension for the required level of accuracy.
arXiv Detail & Related papers (2021-01-30T21:32:23Z) - Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of this behaviour (a VAE failing to consistently re-encode samples drawn from its own decoder) for the learned representations and also the consequences of fixing it by introducing a notion of self consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z) - RE-MIMO: Recurrent and Permutation Equivariant Neural MIMO Detection [85.44877328116881]
We present a novel neural network for symbol detection in wireless communication systems.
It is motivated by several important considerations in wireless communication systems.
We compare its performance against existing methods and the results show the ability of our network to efficiently handle a variable number of transmitters.
arXiv Detail & Related papers (2020-06-30T22:43:01Z) - DHP: Differentiable Meta Pruning via HyperNetworks [158.69345612783198]
This paper introduces a differentiable pruning method via hypernetworks for automatic network pruning.
Latent vectors control the output channels of the convolutional layers in the backbone network and act as a handle for the pruning of the layers.
Experiments are conducted on various networks for image classification, single image super-resolution, and denoising.
arXiv Detail & Related papers (2020-03-30T17:59:18Z)
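For reference, the straight-through approximation mentioned in the Rotation Trick entry above can be sketched as follows in PyTorch. This is the standard VQ-VAE baseline in which gradients bypass the non-differentiable codebook lookup, not the rotation trick itself; the codebook size and dimensions are assumed for illustration.

```python
# Minimal sketch (PyTorch) of the straight-through approximation: the forward pass
# returns the nearest codebook vector, while gradients are copied around the
# non-differentiable lookup as if it were the identity.
import torch

codebook = torch.randn(512, 64)                     # assumed: 512 codes of dimension 64


def quantize_straight_through(z_e):
    dists = torch.cdist(z_e, codebook)              # (batch, 512) distances to all codes
    z_q = codebook[dists.argmin(dim=1)]             # nearest-neighbour lookup (non-differentiable)
    # Straight-through estimator: value of z_q in the forward pass, gradient of z_e
    # in the backward pass, so the encoder still receives a training signal.
    return z_e + (z_q - z_e).detach()


z_e = torch.randn(8, 64, requires_grad=True)        # stand-in for an encoder output
quantize_straight_through(z_e).sum().backward()
print(z_e.grad.shape)                               # gradients reach the encoder side
```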