Feature Alignment for Approximated Reversibility in Neural Networks
- URL: http://arxiv.org/abs/2106.12562v1
- Date: Wed, 23 Jun 2021 17:42:47 GMT
- Title: Feature Alignment for Approximated Reversibility in Neural Networks
- Authors: Tiago de Souza Farias and Jonas Maziero
- Abstract summary: We introduce feature alignment, a technique for obtaining approximate reversibility in artificial neural networks.
We show that the technique can be modified for training neural networks locally, saving computational memory resources.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce feature alignment, a technique for obtaining approximate
reversibility in artificial neural networks. By means of feature extraction, we
can train a neural network to learn an estimated map for its reverse process
from outputs to inputs. Combined with variational autoencoders, we can generate
new samples with the same statistics as the training data. We further improve
the results using concepts from generative adversarial networks.
Finally, we show that the technique can be modified for training neural
networks locally, saving computational memory resources. Applying these
techniques, we report results for three generative vision tasks: MNIST,
CIFAR-10, and CelebA.
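The abstract describes the mechanism only at a high level. Below is a minimal PyTorch sketch of one plausible reading: a reverse network is trained so that each of its intermediate activations aligns (here via MSE) with the corresponding forward feature, yielding an approximate inverse. Layer sizes, the loss, and the optimizer are illustrative assumptions, not the authors' exact setup.

```python
import torch
import torch.nn as nn

# forward network split into layer blocks so intermediate features are exposed
fwd = nn.ModuleList([
    nn.Sequential(nn.Linear(784, 256), nn.ReLU()),
    nn.Sequential(nn.Linear(256, 64), nn.ReLU()),
])
# reverse network mirroring the forward one, from outputs back to inputs
rev = nn.ModuleList([
    nn.Sequential(nn.Linear(64, 256), nn.ReLU()),
    nn.Sequential(nn.Linear(256, 784)),
])

opt = torch.optim.Adam(rev.parameters(), lr=1e-3)
x = torch.rand(32, 784)                          # stand-in batch (e.g. MNIST)

feats = [x]                                      # keep every forward feature
for layer in fwd:
    feats.append(layer(feats[-1]))

# reverse pass: align each reversed feature with its forward counterpart
h, loss = feats[-1].detach(), 0.0
for layer, target in zip(rev, reversed(feats[:-1])):
    h = layer(h)
    loss = loss + nn.functional.mse_loss(h, target.detach())

opt.zero_grad(); loss.backward(); opt.step()
```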
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
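As a concrete illustration of "networks as computational graphs of parameters", here is a hedged sketch that encodes a small MLP as a graph with neurons as nodes (bias as the node feature) and weights as edge features. The encoding details are our assumption; the paper builds graph neural networks on top of such representations.

```python
import torch
import torch.nn as nn

def mlp_to_graph(mlp):
    """Encode an MLP as a graph: neurons are nodes (bias as node feature),
    weights are edge features. Assumes a Sequential of Linear/activation layers."""
    linears = [m for m in mlp if isinstance(m, nn.Linear)]
    sizes = [linears[0].in_features] + [lin.out_features for lin in linears]
    offsets = [sum(sizes[:i]) for i in range(len(sizes))]

    node_feat = torch.zeros(sum(sizes), 1)        # bias per neuron (0 for inputs)
    edges, edge_feat = [], []
    for li, lin in enumerate(linears):
        o_in, o_out = offsets[li], offsets[li + 1]
        node_feat[o_out:o_out + lin.out_features, 0] = lin.bias.detach()
        for j in range(lin.out_features):
            for i in range(lin.in_features):
                edges.append((o_in + i, o_out + j))
                edge_feat.append(lin.weight[j, i].item())
    return node_feat, torch.tensor(edges).T, torch.tensor(edge_feat)

nodes, edge_index, edge_attr = mlp_to_graph(
    nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
)
```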
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
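A minimal sketch of the symmetry in question: a DeepSets-style layer that acts on a hidden layer's weight matrix and commutes with any permutation of its rows (hidden neurons). This is an illustrative building block under assumed dimensions, not the paper's full architecture.

```python
import torch
import torch.nn as nn

class PermEquivariantLinear(nn.Module):
    """For any row permutation P: layer(W[P]) == layer(W)[P], because the
    pooled term is identical for every row."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.local = nn.Linear(d_in, d_out)   # applied to each row independently
        self.pool = nn.Linear(d_in, d_out)    # applied to the row-mean, broadcast

    def forward(self, W):                     # W: (num_hidden, d_in)
        return self.local(W) + self.pool(W.mean(dim=0, keepdim=True))

# sanity check of the equivariance property
layer = PermEquivariantLinear(16, 8)
W = torch.randn(32, 16)
P = torch.randperm(32)
assert torch.allclose(layer(W[P]), layer(W)[P], atol=1e-5)
```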
arXiv Detail & Related papers (2023-02-27T18:52:38Z)
- NeRN -- Learning Neural Representations for Neural Networks [3.7384109981836153]
We show that, when adapted correctly, neural representations can be used to represent the weights of a pre-trained convolutional neural network.
Inspired by coordinate inputs of previous neural representation methods, we assign a coordinate to each convolutional kernel in our network.
We present two applications using NeRN, demonstrating the capabilities of the learned representations.
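To make the coordinate idea concrete, here is a hypothetical sketch: a small MLP maps each kernel's (layer, filter, channel) coordinate to its k x k weights and is fitted to reconstruct a pretrained layer. Feeding raw coordinates and using a plain MSE objective are simplifications of the paper's setup.

```python
import torch
import torch.nn as nn

class KernelPredictor(nn.Module):
    """Maps a (layer, filter, channel) coordinate to a k x k kernel's weights."""
    def __init__(self, k=3, d=64):
        super().__init__()
        self.k = k
        self.net = nn.Sequential(nn.Linear(3, d), nn.ReLU(),
                                 nn.Linear(d, d), nn.ReLU(),
                                 nn.Linear(d, k * k))

    def forward(self, coords):                 # coords: (N, 3) floats
        return self.net(coords).view(-1, self.k, self.k)

predictor = KernelPredictor()
conv = nn.Conv2d(8, 16, 3)                     # stands in for a pretrained layer
out_c, in_c = conv.weight.shape[:2]
coords = torch.cartesian_prod(torch.tensor([0.0]),                    # layer idx
                              torch.arange(out_c, dtype=torch.float), # filter idx
                              torch.arange(in_c, dtype=torch.float))  # channel idx
target = conv.weight.detach().reshape(-1, 3, 3)
recon_loss = nn.functional.mse_loss(predictor(coords), target)        # to minimize
```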
arXiv Detail & Related papers (2022-12-27T17:14:44Z)
- Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
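The dataset construction can be sketched directly: train many small networks, then store each flattened parameter vector together with the loss it reached, ready for a conditional generative model. Task, sizes, and run counts below are stand-ins, and the generative model itself (a diffusion model in the paper) is omitted.

```python
import torch
import torch.nn as nn

def flatten_params(model):
    return torch.cat([p.detach().flatten() for p in model.parameters()])

x, y = torch.randn(256, 10), torch.randint(0, 2, (256,))
dataset = []
for run in range(100):                       # many independent training runs
    net = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for step in range(50):
        loss = nn.functional.cross_entropy(net(x), y)
        opt.zero_grad(); loss.backward(); opt.step()
    dataset.append((flatten_params(net), loss.item()))
# `dataset` now pairs parameter vectors with achieved losses, so a generative
# model can be fit on them and later prompted with a target loss.
```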
arXiv Detail & Related papers (2022-09-26T17:59:58Z)
- Reconstructing Training Data from Trained Neural Networks [42.60217236418818]
We show that, in some cases, a significant fraction of the training data can in fact be reconstructed from the parameters of a trained neural network classifier.
We propose a novel reconstruction scheme that stems from recent theoretical results about the implicit bias in training neural networks with gradient-based methods.
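A hedged sketch of how such a scheme can look, assuming the implicit-bias result that a converged binary classifier's parameters are a weighted combination of per-sample gradients, theta ~ sum_i lambda_i * y_i * grad_theta f(x_i): we optimize candidate inputs and nonnegative multipliers until that relation holds. The frozen model, sizes, and loop counts below are placeholders.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 100), nn.ReLU(), nn.Linear(100, 1))
theta = list(model.parameters())             # frozen "trained" parameters

n = 16
x = torch.randn(n, 784, requires_grad=True)  # candidate training inputs
lam_raw = torch.zeros(n, requires_grad=True) # softplus keeps lambda_i >= 0
y = torch.tensor([1.0, -1.0]).repeat(n // 2) # assumed balanced +/-1 labels

opt = torch.optim.Adam([x, lam_raw], lr=1e-2)
for step in range(200):
    lam = torch.nn.functional.softplus(lam_raw)
    acc = [torch.zeros_like(p) for p in theta]
    for i in range(n):                       # per-sample parameter gradients
        out = model(x[i:i + 1]).squeeze()
        grads = torch.autograd.grad(out, theta, create_graph=True)
        acc = [a + lam[i] * y[i] * g for a, g in zip(acc, grads)]
    # drive the weighted gradient sum toward the actual parameters
    residual = sum(((a - p.detach()) ** 2).sum() for a, p in zip(acc, theta))
    opt.zero_grad(); residual.backward(); opt.step()
```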
arXiv Detail & Related papers (2022-06-15T18:35:16Z)
- Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
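A compact sketch of the vector-quantized auto-decoder idea as we read it: each grid cell holds learnable logits over a shared codebook, and soft indexing via softmax keeps the pipeline differentiable even though there is no direct supervision on the indices. Sizes and the soft-indexing choice are assumptions.

```python
import torch

num_cells, codebook_size, feat_dim = 1024, 64, 16          # assumed sizes
codebook = torch.randn(codebook_size, feat_dim, requires_grad=True)
logits = torch.zeros(num_cells, codebook_size, requires_grad=True)

def decode_grid():
    # soft indexing: a differentiable mixture over codebook entries per cell
    weights = torch.softmax(logits, dim=-1)    # (num_cells, codebook_size)
    return weights @ codebook                  # (num_cells, feat_dim)

# training would backprop a downstream rendering loss through decode_grid()
# into both the codebook and the per-cell logits:
opt = torch.optim.Adam([codebook, logits], lr=1e-2)

# at inference, only hard indices are stored: num_cells * log2(codebook_size) bits
indices = logits.argmax(dim=-1)
compressed_grid = codebook.detach()[indices]
```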
arXiv Detail & Related papers (2022-06-15T17:58:34Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Local Critic Training for Model-Parallel Learning of Deep Neural Networks [94.69202357137452]
We propose a novel model-parallel learning method, called local critic training.
We show that the proposed approach successfully decouples the update process of the layer groups for both convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
We also show that trained networks by the proposed method can be used for structural optimization.
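A minimal sketch of the decoupling, with assumed dimensions: a small critic attached to the first layer group supplies an estimated loss, so that group can update without waiting for the rest of the network. In the paper the critic is additionally trained to track the main network's loss; we omit that step for brevity.

```python
import torch
import torch.nn as nn

group1 = nn.Sequential(nn.Linear(784, 256), nn.ReLU())   # first layer group
group2 = nn.Sequential(nn.Linear(256, 10))               # rest of the network
critic = nn.Linear(256, 10)                              # local critic head

opt1 = torch.optim.Adam(list(group1.parameters()) + list(critic.parameters()))
opt2 = torch.optim.Adam(group2.parameters())

x, y = torch.rand(32, 784), torch.randint(0, 10, (32,))

h = group1(x)

# group 1 updates through the critic's estimated loss, never touching group 2
est_loss = nn.functional.cross_entropy(critic(h), y)
opt1.zero_grad(); est_loss.backward(); opt1.step()

# group 2 updates on detached features, so the two updates are decoupled
true_loss = nn.functional.cross_entropy(group2(h.detach()), y)
opt2.zero_grad(); true_loss.backward(); opt2.step()
```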
arXiv Detail & Related papers (2021-02-03T09:30:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.