Autocodificadores Variacionales (VAE) Fundamentos Teóricos y Aplicaciones
- URL: http://arxiv.org/abs/2302.09363v1
- Date: Sat, 18 Feb 2023 15:29:55 GMT
- Title: Autocodificadores Variacionales (VAE) Fundamentos Teóricos y Aplicaciones
- Authors: Jordi de la Torre
- Abstract summary: VAEs are probabilistic graphical models based on neural networks.
This article has been written in Spanish to make this scientific knowledge accessible to the Spanish-speaking community.
- Score: 0.40611352512781856
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: VAEs are probabilistic graphical models based on neural networks that encode input data into a latent space formed by simpler probability distributions, and that reconstruct the source data from those latent variables. After training, the reconstruction network, called the decoder, can generate new elements from a distribution close to, and ideally equal to, the original one. This article has been written in Spanish to make this scientific knowledge accessible to the Spanish-speaking community.
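As a concrete illustration of the encode-sample-decode pipeline the abstract describes, here is a minimal VAE sketch in PyTorch. It is not the article's code: the Gaussian latent space, Bernoulli decoder, and all layer sizes are assumptions chosen for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Minimal VAE: Gaussian encoder, Bernoulli decoder (illustrative sizes)."""
    def __init__(self, x_dim=784, h_dim=256, z_dim=20):
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)      # mean of q(z|x)
        self.logvar = nn.Linear(h_dim, z_dim)  # log-variance of q(z|x)
        self.dec = nn.Sequential(
            nn.Linear(z_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps keeps sampling differentiable.
        eps = torch.randn_like(mu)
        z = mu + torch.exp(0.5 * logvar) * eps
        return self.dec(z), mu, logvar

def elbo_loss(x, x_logits, mu, logvar):
    # Negative ELBO = reconstruction term + KL(q(z|x) || N(0, I)).
    rec = F.binary_cross_entropy_with_logits(x_logits, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

model = VAE()
x = torch.rand(32, 784)  # stand-in for a batch of flattened images
x_logits, mu, logvar = model(x)
loss = elbo_loss(x, x_logits, mu, logvar)
loss.backward()
```

After training, new elements are generated by sampling z ~ N(0, I) and passing it through the decoder alone, which is the generative use the abstract describes.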
Related papers
- Semantics Alignment via Split Learning for Resilient Multi-User Semantic Communication [56.54422521327698]
Recent studies on semantic communication rely on neural network (NN) based transceivers such as deep joint source and channel coding (DeepJSCC).
Unlike traditional transceivers, these neural transceivers are trainable using actual source data and channels, enabling them to extract and communicate semantics.
We propose a distributed learning based solution, which leverages split learning (SL) and partial NN fine-tuning techniques.
arXiv Detail & Related papers (2023-10-13T20:29:55Z)
- Bayesian Flow Networks [4.585102332532472]
This paper introduces Bayesian Flow Networks (BFNs), a new class of generative model in which the parameters of a set of independent distributions are modified with Bayesian inference.
Starting from a simple prior and iteratively updating the two distributions yields a generative procedure similar to the reverse process of diffusion models.
BFNs achieve competitive log-likelihoods for image modelling on dynamically binarized MNIST and CIFAR-10, and outperform all known discrete diffusion models on the text8 character-level language modelling task.
arXiv Detail & Related papers (2023-08-14T09:56:35Z)
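As a rough illustration of the iterative Bayesian updating that the Bayesian Flow Networks entry above describes, the sketch below runs conjugate Gaussian updates on independent dimensions. It is a simplified assumption, not the paper's algorithm: the neural network that, in the actual model, maps the running distribution parameters to an output distribution is omitted, and all hyperparameters are arbitrary.

```python
import numpy as np

# Conjugate Bayesian updates of independent Gaussian means with known noise
# precision, from increasingly informative noisy observations of the data.
rng = np.random.default_rng(0)
x = rng.normal(size=5)             # data to be "transmitted"
mu, rho = np.zeros(5), np.ones(5)  # prior mean and precision per dimension

for step in range(20):
    alpha = 1.0                                   # accuracy of this observation
    y = x + rng.normal(size=5) / np.sqrt(alpha)   # noisy sample of the data
    # Conjugate normal-normal update: precisions add, means are precision-weighted.
    mu = (rho * mu + alpha * y) / (rho + alpha)
    rho = rho + alpha

print(mu)  # approaches x as evidence accumulates
```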
- Tackling Computational Heterogeneity in FL: A Few Theoretical Insights [68.8204255655161]
We introduce and analyse a novel aggregation framework that allows for formalizing and tackling computationally heterogeneous data.
The proposed aggregation algorithms are extensively analyzed from both a theoretical and an experimental perspective.
arXiv Detail & Related papers (2023-07-12T16:28:21Z)
- Redes Generativas Adversarias (GAN) Fundamentos Teóricos y Aplicaciones [0.40611352512781856]
Generative adversarial networks (GANs) are a method based on training two neural networks, one called the generator and the other the discriminator.
GANs have a wide range of applications in fields such as computer vision, semantic segmentation, time series synthesis, image editing, natural language processing, and image generation from text.
arXiv Detail & Related papers (2023-02-18T14:39:51Z)
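To make the two-player setup of the GAN entry above concrete, here is a minimal, self-contained training loop on toy 2-D data. It is a generic GAN sketch, not code from the paper; the architecture sizes, learning rates, and synthetic "real" distribution are arbitrary assumptions.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))  # generator
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(64, 2) + 3.0   # samples from the target distribution
    fake = G(torch.randn(64, 8))      # generator maps noise to candidates
    # Discriminator step: push real toward label 1 and generated toward 0.
    opt_d.zero_grad()
    d_loss = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()
    # Generator step: try to make the discriminator label fakes as real.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```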
- Summarize and Generate to Back-translate: Unsupervised Translation of Programming Languages [86.08359401867577]
Back-translation is widely known for its effectiveness in neural machine translation when little to no parallel data is available.
We propose performing back-translation via code summarization and generation.
We show that our proposed approach performs competitively with state-of-the-art methods.
arXiv Detail & Related papers (2022-05-23T08:20:41Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Feature Alignment for Approximated Reversibility in Neural Networks [0.0]
We introduce feature alignment, a technique for obtaining approximate reversibility in artificial neural networks.
We show that the technique can be modified for training neural networks locally, saving computational memory resources.
arXiv Detail & Related papers (2021-06-23T17:42:47Z)
- Out-of-distribution Detection and Generation using Soft Brownian Offset Sampling and Autoencoders [1.313418334200599]
Deep neural networks often suffer from overconfidence which can be partly remedied by improved out-of-distribution detection.
We propose a novel approach that allows for the generation of out-of-distribution datasets based on a given in-distribution dataset.
This new dataset can then be used to improve out-of-distribution detection for the given dataset and machine learning task at hand.
arXiv Detail & Related papers (2021-05-04T06:59:24Z)
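The sketch below illustrates the general idea behind the out-of-distribution entry above: synthesizing OOD points by pushing in-distribution samples away from the dataset with random, Brownian-like steps. It is a loose paraphrase, not the authors' exact Soft Brownian Offset procedure; d_star, step, and the toy data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))  # stand-in for in-distribution (latent) data

def generate_ood(x, d_star=4.0, step=0.3, max_iter=200):
    """Random-walk a point away from the data until it is at least d_star away."""
    y = x.copy()
    for _ in range(max_iter):
        y += step * rng.normal(size=y.shape)     # Brownian perturbation
        d = np.linalg.norm(X - y, axis=1).min()  # distance to nearest data point
        if d >= d_star:                          # far enough -> treat as OOD
            return y
    return y

ood = np.stack([generate_ood(X[i]) for i in range(10)])
```

The resulting points can then serve as negative examples when training or calibrating an OOD detector, which is the use the entry describes.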
- Deep Archimedean Copulas [98.96141706464425]
ACNet is a novel differentiable neural network architecture that enforces the structural properties of Archimedean copulas.
We show that ACNet is able to both approximate common Archimedean Copulas and generate new copulas which may provide better fits to data.
arXiv Detail & Related papers (2020-12-05T22:58:37Z)
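For context on the Deep Archimedean Copulas entry above: the defining structure (a standard textbook fact, not specific to ACNet) is that the entire d-dimensional dependence model is determined by a single one-dimensional generator ψ,

```latex
C(u_1,\dots,u_d) = \psi\left(\psi^{-1}(u_1) + \cdots + \psi^{-1}(u_d)\right)
```

where, in one common convention, ψ is decreasing and completely monotone with ψ(0) = 1; the Clayton family, for example, uses ψ(t) = (1 + θt)^(−1/θ) with θ > 0. The structural properties mentioned above are constraints of this kind on the learned generator.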
- CiwGAN and fiwGAN: Encoding information in acoustic data to model lexical learning with Generative Adversarial Networks [0.0]
Lexical learning is modeled as emergent from an architecture that forces a deep neural network to output data.
Networks trained on lexical items from TIMIT learn to encode unique information corresponding to lexical items in the form of categorical variables in their latent space.
We show that phonetic and phonological representations learned by the network can be productively recombined and directly paralleled to productivity in human speech.
arXiv Detail & Related papers (2020-06-04T15:33:55Z)
- Synthetic Datasets for Neural Program Synthesis [66.20924952964117]
We propose a new methodology for controlling and evaluating the bias of synthetic data distributions over both programs and specifications.
We demonstrate, using the Karel DSL and a small Calculator DSL, that training deep networks on these distributions leads to improved cross-distribution generalization performance.
arXiv Detail & Related papers (2019-12-27T21:28:10Z)