Training Invertible Neural Networks as Autoencoders
- URL: http://arxiv.org/abs/2303.11239v2
- Date: Tue, 21 Mar 2023 12:43:11 GMT
- Title: Training Invertible Neural Networks as Autoencoders
- Authors: The-Gia Leo Nguyen, Lynton Ardizzone, Ullrich Köthe
- Abstract summary: We present methods to train Invertible Neural Networks (INNs) as (variational) autoencoders which we call INN (variational) autoencoders.
Our experiments on MNIST, CIFAR and CelebA show that for low bottleneck sizes our INN autoencoder achieves results similar to the classical autoencoder.
- Score: 3.867363075280544
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Autoencoders are able to learn useful data representations in an unsupervised
manner and have been widely used in various machine learning and computer
vision tasks. In this work, we present methods to train Invertible Neural
Networks (INNs) as (variational) autoencoders which we call INN (variational)
autoencoders. Our experiments on MNIST, CIFAR and CelebA show that for low
bottleneck sizes our INN autoencoder achieves results similar to the classical
autoencoder. However, for large bottleneck sizes our INN autoencoder
outperforms its classical counterpart. Based on the empirical results, we
hypothesize that INN autoencoders might not have any intrinsic information loss
and are therefore not bounded by a maximal number of layers (depth) beyond which
only suboptimal results can be achieved.
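A minimal sketch of the basic idea, assuming a PyTorch-style coupling-block INN (the class names, the zero-masking of the latent code, and the toy training loop below are illustrative assumptions, not the authors' reference implementation): the INN maps an input to a latent vector of the same dimension, all dimensions beyond the chosen bottleneck are zeroed out, and the inverse pass reconstructs the input under a reconstruction loss.

```python
# Illustrative sketch only: a tiny coupling-block INN trained as an autoencoder by
# zeroing every latent dimension beyond the bottleneck before running the inverse.
# Class and variable names are hypothetical, not the authors' reference code.
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    """Forward: y1 = x1, y2 = x2 + t(x1).  Inverse: x2 = y2 - t(y1)."""
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.half = dim // 2
        self.t = nn.Sequential(nn.Linear(self.half, hidden), nn.ReLU(),
                               nn.Linear(hidden, dim - self.half))

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        return torch.cat([x1, x2 + self.t(x1)], dim=1)

    def inverse(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        return torch.cat([y1, y2 - self.t(y1)], dim=1)

class INN(nn.Module):
    def __init__(self, dim, n_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList([AdditiveCoupling(dim) for _ in range(n_blocks)])
        # fixed random permutations mix the two halves between coupling blocks
        self.perms = [torch.randperm(dim) for _ in range(n_blocks)]

    def forward(self, x):
        for blk, p in zip(self.blocks, self.perms):
            x = blk(x)[:, p]
        return x

    def inverse(self, z):
        for blk, p in zip(reversed(self.blocks), reversed(self.perms)):
            z = blk.inverse(z[:, torch.argsort(p)])
        return z

dim, bottleneck = 784, 32                    # e.g. flattened 28x28 images, 32-d code
model = INN(dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mask = torch.zeros(dim)
mask[:bottleneck] = 1.0                      # keep only the bottleneck dimensions

for step in range(100):                      # toy loop on random data for illustration
    x = torch.rand(64, dim)
    z = model(x)
    x_rec = model.inverse(z * mask)          # discard non-bottleneck dims, then invert
    loss = ((x - x_rec) ** 2).mean()         # reconstruction loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the variational variant mentioned in the abstract, one would additionally regularize the retained bottleneck dimensions of z, e.g. toward a standard normal distribution.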
Related papers
- On the Design and Performance of Machine Learning Based Error Correcting Decoders [3.8289109929360245]
We first consider the so-called single-label neural network (SLNN) and the multi-label neural network (MLNN) decoders which have been reported to achieve near maximum likelihood (ML) performance.
We then turn our attention to two transformer-based decoders: the error correction code transformer (ECCT) and the cross-attention message passing transformer (CrossMPT).
arXiv Detail & Related papers (2024-10-21T11:23:23Z) - Optimizing Serially Concatenated Neural Codes with Classical Decoders [8.692972779213932]
We show that a classical decoding algorithm can be applied to a non-trivial, real-valued neural code.
As the BCJR algorithm is fully differentiable, it is possible to train, or fine-tune, the neural encoder in an end-to-end fashion.
arXiv Detail & Related papers (2022-12-20T15:40:08Z) - Spiking Neural Network Decision Feedback Equalization [70.3497683558609]
We propose an SNN-based equalizer with a feedback structure akin to the decision feedback equalizer (DFE).
We show that our approach clearly outperforms conventional linear equalizers for three different exemplary channels.
The proposed SNN with a decision feedback structure enables the path to competitive energy-efficient transceivers.
arXiv Detail & Related papers (2022-11-09T09:19:15Z) - Graph Neural Networks for Channel Decoding [71.15576353630667]
We showcase competitive decoding performance for various coding schemes, such as low-density parity-check (LDPC) and BCH codes.
The idea is to let a neural network (NN) learn a generalized message passing algorithm over a given graph.
We benchmark our proposed decoder against state-of-the-art in conventional channel decoding as well as against recent deep learning-based results.
arXiv Detail & Related papers (2022-07-29T15:29:18Z) - Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
arXiv Detail & Related papers (2022-06-15T17:58:34Z) - Dynamic Neural Representational Decoders for High-Resolution Semantic
Segmentation [98.05643473345474]
We propose a novel decoder, termed dynamic neural representational decoder (NRD).
As each location on the encoder's output corresponds to a local patch of the semantic labels, in this work, we represent these local patches of labels with compact neural networks.
This neural representation enables our decoder to leverage the smoothness prior in the semantic label space, and thus makes our decoder more efficient.
arXiv Detail & Related papers (2021-07-30T04:50:56Z) - Quantized Neural Networks via {-1, +1} Encoding Decomposition and
Acceleration [83.84684675841167]
We propose a novel encoding scheme using {-1, +1} to decompose quantized neural networks (QNNs) into multi-branch binary networks (a toy numerical check of this decomposition appears after this list).
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
arXiv Detail & Related papers (2021-06-18T03:11:15Z) - A Variational Auto-Encoder Approach for Image Transmission in Wireless
Channel [4.82810058837951]
We investigate the performance of variational auto-encoders and compare the results with standard auto-encoders.
Our experiments demonstrate improved visual quality of the reconstructed images at the receiver, as measured by the SSIM metric.
arXiv Detail & Related papers (2020-10-08T13:35:38Z) - You Only Spike Once: Improving Energy-Efficient Neuromorphic Inference
to ANN-Level Accuracy [51.861168222799186]
Spiking Neural Networks (SNNs) are a type of neuromorphic, or brain-inspired network.
SNNs are sparse, accessing very few weights, and typically only use addition operations instead of the more power-intensive multiply-and-accumulate operations.
In this work, we aim to overcome the limitations of TTFS-encoded neuromorphic systems.
arXiv Detail & Related papers (2020-06-03T15:55:53Z) - CodNN -- Robust Neural Networks From Coded Classification [27.38642191854458]
Deep Neural Networks (DNNs) are a driving force in the ongoing information revolution.
However, DNNs are highly sensitive to noise, whether adversarial or random.
This poses a fundamental challenge for hardware implementations of DNNs, and for their deployment in critical applications such as autonomous driving.
In our approach, either the data or the internal layers of the DNN are coded with error correcting codes, and successful computation under noise is guaranteed.
arXiv Detail & Related papers (2020-04-22T17:07:15Z)
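Regarding the {-1, +1} encoding decomposition mentioned in the list above (Quantized Neural Networks via {-1, +1} Encoding Decomposition and Acceleration), the following toy check illustrates the underlying arithmetic: a K-bit quantized weight matrix can be rewritten as a weighted sum of matrices with entries in {-1, +1} plus a constant term, so one quantized matrix product becomes a sum of binary matrix products. This is a generic bit-plane reconstruction assumed for illustration, not code from that paper.

```python
# Illustrative only: decompose a K-bit quantized weight matrix W (integer entries
# in [0, 2**K - 1]) into K matrices with entries in {-1, +1} plus a constant term,
# so that x @ W equals a weighted sum of binary matrix products.
import numpy as np

K = 4
rng = np.random.default_rng(0)
W = rng.integers(0, 2 ** K, size=(8, 5))              # quantized integer weights
x = rng.standard_normal((3, 8))                       # input activations

# bit planes b_i in {0, 1}:  W = sum_i 2**i * b_i
bits = [(W >> i) & 1 for i in range(K)]
# map to s_i = 2*b_i - 1 in {-1, +1}:  W = sum_i 2**(i-1) * (s_i + 1)
signs = [2 * b - 1 for b in bits]

y_direct = x @ W
y_binary = sum(2.0 ** (i - 1) * (x @ s) for i, s in enumerate(signs))
y_binary += (2 ** K - 1) / 2 * x.sum(axis=1, keepdims=True) @ np.ones((1, W.shape[1]))

print(np.allclose(y_direct, y_binary))                # True: the two paths agree
```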