How to boost autoencoders?
- URL: http://arxiv.org/abs/2110.15307v1
- Date: Thu, 28 Oct 2021 17:21:25 GMT
- Title: How to boost autoencoders?
- Authors: Sai Krishna, Thulasi Tholeti, Sheetal Kalyani
- Abstract summary: We discuss the challenges associated with boosting autoencoders and propose a framework to overcome them.
The usefulness of the boosted ensemble is demonstrated in two applications that widely employ autoencoders: anomaly detection and clustering.
- Score: 13.166222736288432
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autoencoders are a category of neural networks with applications in numerous domains, and hence improving their performance is of substantial interest to the machine learning community. Ensemble methods, such as boosting, are often adopted to enhance the performance of regular neural networks. In this work, we discuss the challenges associated with boosting autoencoders and propose a framework to overcome them. The proposed method ensures that the advantages of boosting are realized whichever output (encoded or reconstructed) is used. The usefulness of the boosted ensemble is demonstrated in two applications that widely employ autoencoders: anomaly detection and clustering.
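To make the boosting idea concrete, here is a minimal AdaBoost-style sketch: it sequentially trains a small ensemble of linear autoencoders (weighted-PCA stand-ins for gradient-trained networks), reweights samples toward those with high reconstruction error, and combines the reconstructions. The reweighting rule, the learner weights `alpha`, and the linear learners are all illustrative assumptions, not the framework proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_linear_autoencoder(X, weights, k=2):
    """Fit a k-dim linear autoencoder via weighted PCA
    (a toy stand-in for a gradient-trained autoencoder)."""
    mu = np.average(X, axis=0, weights=weights)
    Xc = (X - mu) * np.sqrt(weights)[:, None]
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:k].T                 # d x k tied encoder/decoder weights
    return mu, W

def reconstruct(X, mu, W):
    return mu + (X - mu) @ W @ W.T

# toy data: points near a 2-D subspace embedded in 5-D
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 5)) \
    + 0.05 * rng.normal(size=(200, 5))

n = len(X)
weights = np.ones(n) / n
ensemble = []
for _ in range(3):                        # boosting rounds
    mu, W = train_linear_autoencoder(X, weights)
    err = ((X - reconstruct(X, mu, W)) ** 2).sum(axis=1)
    alpha = 1.0 / (err.mean() + 1e-12)    # weight stronger learners higher
    ensemble.append((alpha, mu, W))
    weights = err / err.sum()             # focus next round on hard samples

def ensemble_reconstruct(X):
    total = sum(a for a, _, _ in ensemble)
    return sum(a * reconstruct(X, mu, W) for a, mu, W in ensemble) / total

err_ensemble = ((X - ensemble_reconstruct(X)) ** 2).sum(axis=1).mean()
print(f"ensemble mean reconstruction error: {err_ensemble:.4f}")
```

The combined reconstruction can then be used wherever a single autoencoder's output would be, which is the property the paper's framework is designed to preserve.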
Related papers
- Triple-Encoders: Representations That Fire Together, Wire Together [51.15206713482718]
Contrastive Learning is a representation learning method that encodes relative distances between utterances into the embedding space via a bi-encoder.
This study introduces triple-encoders, which efficiently compute distributed utterance mixtures from these independently encoded utterances.
We find that triple-encoders lead to a substantial improvement over bi-encoders, and even to better zero-shot generalization than single-vector representation models.
arXiv Detail & Related papers (2024-02-19T18:06:02Z)
- Joint Channel Estimation and Feedback with Masked Token Transformers in Massive MIMO Systems [74.52117784544758]
This paper proposes an encoder-decoder based network that unveils the intrinsic frequency-domain correlation within the CSI matrix.
The entire encoder-decoder network is utilized for channel compression.
Our method outperforms state-of-the-art channel estimation and feedback techniques in joint tasks.
arXiv Detail & Related papers (2023-06-08T06:15:17Z)
- Surrogate Gradient Spiking Neural Networks as Encoders for Large Vocabulary Continuous Speech Recognition [91.39701446828144]
We show that spiking neural networks can be trained like standard recurrent neural networks using the surrogate gradient method.
They have shown promising results on speech command recognition tasks.
In contrast to their recurrent non-spiking counterparts, they show robustness to exploding gradient problems without the need to use gates.
arXiv Detail & Related papers (2022-12-01T12:36:26Z)
- Learning to Improve Code Efficiency [27.768476489523163]
We analyze a large competitive programming dataset from the Google Code Jam competition.
We find that efficient code is indeed rare, with a 2x difference between the median runtime and the 90th percentile of solutions.
We propose using machine learning to automatically provide prescriptive feedback in the form of hints, to guide programmers towards writing high-performance code.
arXiv Detail & Related papers (2022-08-09T01:28:30Z)
- Efficient spike encoding algorithms for neuromorphic speech recognition [5.182266520875928]
Spiking Neural Networks (SNNs) are well suited to implementation on neuromorphic processors.
Real-world signals, however, are real-valued and must first be encoded as spike trains before an SNN can process them.
In this paper, we study four spike encoding methods in the context of a speaker-independent digit classification system.
arXiv Detail & Related papers (2022-07-14T17:22:07Z)
- Anomaly Detection with Adversarially Learned Perturbations of Latent Space [9.473040033926264]
Anomaly detection aims to identify samples that do not conform to the distribution of the normal data.
In this work, we have designed an adversarial framework consisting of two competing components, an Adversarial Distorter, and an Autoencoder.
The proposed method outperforms the existing state-of-the-art methods in anomaly detection on image and video datasets.
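Whether with a single model, an adversarially trained one as above, or a boosted ensemble as in the main paper, autoencoder-based anomaly detection typically scores samples by reconstruction error: the model learns to reconstruct normal data well, so poorly reconstructed samples are flagged. A minimal sketch with a one-dimensional linear autoencoder, where the toy data, the PCA-style "training", and the 99th-percentile threshold are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# "normal" training data lies near a line in 2-D; anomalies do not
t = rng.normal(size=200)
X_train = np.stack([t, 2 * t], axis=1) + 0.05 * rng.normal(size=(200, 2))

# toy linear autoencoder: project onto the top principal direction
mu = X_train.mean(axis=0)
_, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
w = Vt[0]                                    # 1-D latent code direction

def recon_error(X):
    z = (X - mu) @ w                         # encode
    X_hat = mu + np.outer(z, w)              # decode
    return ((X - X_hat) ** 2).sum(axis=1)    # per-sample reconstruction error

# flag anything reconstructed worse than 99% of the training data
threshold = np.quantile(recon_error(X_train), 0.99)
normal_point = np.array([[1.0, 2.0]])        # on the learned manifold
anomaly = np.array([[2.0, -2.0]])            # far from it
print(recon_error(normal_point), recon_error(anomaly), threshold)
```

The on-manifold point reconstructs almost perfectly and falls under the threshold, while the off-manifold point's large residual exceeds it.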
arXiv Detail & Related papers (2022-07-03T19:32:00Z)
- A New Clustering-Based Technique for the Acceleration of Deep Convolutional Networks [2.7393821783237184]
Model Compression and Acceleration (MCA) techniques are used to transform large pre-trained networks into smaller models.
We propose a clustering-based approach that is able to increase the number of employed centroids/representatives.
This is achieved by imposing a special structure to the employed representatives, which is enabled by the particularities of the problem at hand.
arXiv Detail & Related papers (2021-07-19T18:22:07Z)
- Quantized Neural Networks via {-1, +1} Encoding Decomposition and Acceleration [83.84684675841167]
We propose a novel encoding scheme using -1, +1 to decompose quantized neural networks (QNNs) into multi-branch binary networks.
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
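The {-1, +1} idea can be illustrated on 2-bit weights: each quantized matrix splits into binary matrices whose power-of-two weighted sum recovers it, so one full-precision matrix product becomes a sum of cheap binary-weight branches. The helper below is a hypothetical sketch of this decomposition, not the paper's implementation:

```python
import numpy as np

def decompose_pm1(Q, bits=2):
    """Decompose odd-integer weights Q in {-(2**bits - 1), ..., 2**bits - 1}
    (odd levels only, e.g. {-3, -1, 1, 3} for bits=2) into `bits` matrices
    with entries in {-1, +1}, so that Q == sum_i 2**i * B[i]."""
    # map the odd levels onto {0, ..., 2**bits - 1}, then read off binary digits
    U = (Q.astype(int) + (2 ** bits - 1)) // 2
    return [2 * ((U >> i) & 1) - 1 for i in range(bits)]

rng = np.random.default_rng(0)
Q = rng.choice([-3, -1, 1, 3], size=(4, 4))   # toy 2-bit quantized weights
B = decompose_pm1(Q)

# a matrix product with Q becomes a sum of binary-weight branches
x = rng.normal(size=(3, 4))
y_branches = sum((2 ** i) * (x @ Bi) for i, Bi in enumerate(B))
y_direct = x @ Q
```

Each branch multiplies by a {-1, +1} matrix only, which is the operation such decompositions aim to accelerate on hardware.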
arXiv Detail & Related papers (2021-06-18T03:11:15Z)
- ALF: Autoencoder-based Low-rank Filter-sharing for Efficient Convolutional Neural Networks [63.91384986073851]
We propose the autoencoder-based low-rank filter-sharing technique (ALF).
ALF shows a reduction of 70% in network parameters, 61% in operations and 41% in execution time, with minimal loss in accuracy.
arXiv Detail & Related papers (2020-07-27T09:01:22Z)
- Revisiting Role of Autoencoders in Adversarial Settings [32.22707594954084]
This paper examines the inherent adversarial robustness of autoencoders.
We believe this discovery of the adversarial robustness of autoencoders can inform future research and applications in adversarial defense.
arXiv Detail & Related papers (2020-05-21T16:01:23Z)
- Learning Autoencoders with Relational Regularization [89.53065887608088]
A new framework is proposed for learning autoencoders of data distributions.
We minimize the discrepancy between the model and target distributions with a relational regularization.
We implement the framework with two scalable algorithms, making it applicable for both probabilistic and deterministic autoencoders.
arXiv Detail & Related papers (2020-02-07T17:27:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.