BPLF: A Bi-Parallel Linear Flow Model for Facial Expression Generation from Emotion Set Images
- URL: http://arxiv.org/abs/2106.07563v1
- Date: Thu, 27 May 2021 09:37:09 GMT
- Title: BPLF: A Bi-Parallel Linear Flow Model for Facial Expression Generation from Emotion Set Images
- Authors: Gao Xu (1), Yuanpeng Long (2), Siwei Liu (1), Lijia Yang (1), Shimei
Xu (3), Xiaoming Yao (1,3), Kunxian Shu (1) ((1) School of Computer Science
and Technology, Chongqing Key Laboratory on Big Data for Bio Intelligence,
Chongqing University of Posts and Telecommunications, Chongqing, China, (2)
School of Economic Information Engineering, Southwestern University of
Finance and Economics, Chengdu, China, (3) 51yunjian.com, Hetie International
Square, Chengdu, Sichuan, China)
- Abstract summary: The flow-based generative model is a deep learning generative model that gains the ability to generate data by explicitly learning the data distribution.
In this paper, a bi-parallel linear flow model for facial emotion generation from emotion set images is constructed.
The paper also surveys the current public facial emotion image datasets, builds a new emotion dataset, and validates the model on it.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The flow-based generative model is a deep learning generative model that
obtains the ability to generate data by explicitly learning the data
distribution. In theory, its ability to reconstruct data is stronger than that
of other generative models. In practice, however, it faces many limitations,
including restricted model designs, large numbers of parameters, and heavy
computation. In this paper, a bi-parallel linear flow model for facial emotion
generation from emotion set images is constructed, and a series of improvements
is made to the expressive ability of the model and its convergence speed in
training. The model is mainly composed of several stacked coupling layers
forming a multi-scale structure, in which each coupling layer contains a 1×1
invertible convolution and linear operation modules. Furthermore, this paper
surveys the current public facial emotion image datasets, builds a new emotion
dataset, and validates the model on it. The experimental results show that,
within a traditional convolutional neural network, a 3-layer 3×3 convolution
kernel is more effective at extracting features from face images, and that
introducing principal component decomposition improves the convergence speed
of the model.
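To make the abstract's description concrete, below is a minimal PyTorch sketch of one flow step of the kind it mentions: a 1×1 invertible convolution followed by an affine coupling layer, trained via the change-of-variables log-likelihood. This is an illustration in the spirit of Glow-style flows, not the authors' BPLF implementation; the bi-parallel arrangement and the exact "linear operation modules" are not specified in the abstract, so the 3-layer 3×3 coupling subnetwork, hidden width, and orthogonal initialization here are assumptions.

```python
# Minimal sketch (not the authors' BPLF code) of one flow step in the style
# the abstract describes: a 1x1 invertible convolution + affine coupling,
# trained by the change-of-variables log-likelihood.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class Invertible1x1Conv(nn.Module):
    """1x1 convolution with an invertible (orthogonally initialized) weight."""
    def __init__(self, channels):
        super().__init__()
        w, _ = torch.linalg.qr(torch.randn(channels, channels))  # orthogonal init
        self.weight = nn.Parameter(w)

    def forward(self, x):
        b, c, h, w = x.shape
        y = F.conv2d(x, self.weight.view(c, c, 1, 1))
        # Jacobian log-determinant: one log|det W| per spatial position.
        logdet = h * w * torch.slogdet(self.weight)[1]
        return y, logdet

    def inverse(self, y):
        c = y.shape[1]
        return F.conv2d(y, torch.inverse(self.weight).view(c, c, 1, 1))

class AffineCoupling(nn.Module):
    """Affine coupling; the 3-layer 3x3 subnetwork is an assumption based on
    the abstract's finding that 3-layer 3x3 kernels suit face images."""
    def __init__(self, channels, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels // 2, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, x):
        xa, xb = x.chunk(2, dim=1)              # xa passes through unchanged
        log_s, t = self.net(xa).chunk(2, dim=1)
        s = torch.sigmoid(log_s + 2.0)          # keep scales positive and stable
        yb = (xb + t) * s
        logdet = s.log().flatten(1).sum(dim=1)  # per-sample log|det J|
        return torch.cat([xa, yb], dim=1), logdet

    def inverse(self, y):
        ya, yb = y.chunk(2, dim=1)
        log_s, t = self.net(ya).chunk(2, dim=1)
        s = torch.sigmoid(log_s + 2.0)
        return torch.cat([ya, yb / s - t], dim=1)

def flow_nll(z, logdet):
    """-log p(x) under a standard-normal prior, via change of variables."""
    log_pz = -0.5 * (z ** 2 + math.log(2 * math.pi)).flatten(1).sum(dim=1)
    return -(log_pz + logdet)

if __name__ == "__main__":
    x = torch.randn(2, 4, 16, 16)                # (batch, channels, H, W)
    conv, coupling = Invertible1x1Conv(4), AffineCoupling(4)
    y, ld1 = conv(x)
    z, ld2 = coupling(y)
    print(flow_nll(z, ld1 + ld2).shape)          # per-sample NLL: torch.Size([2])
    x_rec = conv.inverse(coupling.inverse(z))
    print(torch.allclose(x, x_rec, atol=1e-4))   # True: invertible up to float error
```

In the full model, several such steps would be stacked and interleaved with squeeze/split operations to form the multi-scale structure the abstract describes; the reported principal component decomposition could plausibly enter as a data-driven initialization of the 1×1 weight, but that detail is a guess here rather than the authors' stated design.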
Related papers
- A survey of probabilistic generative frameworks for molecular simulations [0.0]
Generative artificial intelligence is now a widely used tool in molecular science.
We introduce and explain several classes of generative models, broadly sorted into two categories: flow-based models and diffusion models.
We examine their accuracy, computational cost, and generation speed across datasets with tunable dimensionality, complexity, and modal asymmetry.
arXiv Detail & Related papers (2024-11-14T12:05:08Z)
- Sub-graph Based Diffusion Model for Link Prediction [43.15741675617231]
Denoising Diffusion Probabilistic Models (DDPMs) represent a contemporary class of generative models with exceptional qualities.
We build a novel generative model for link prediction using a dedicated design to decompose the likelihood estimation process via the Bayesian formula.
Our proposed method presents numerous advantages: (1) transferability across datasets without retraining, (2) promising generalization on limited training data, and (3) robustness against graph adversarial attacks.
arXiv Detail & Related papers (2024-09-13T02:23:55Z)
- MaxFusion: Plug&Play Multi-Modal Generation in Text-to-Image Diffusion Models [34.611309081801345]
Large diffusion-based Text-to-Image (T2I) models have shown impressive generative powers for text-to-image generation.
In this paper, we propose a novel strategy to scale a generative model across new tasks with minimal compute.
arXiv Detail & Related papers (2024-04-15T17:55:56Z)
- Heat Death of Generative Models in Closed-Loop Learning [63.83608300361159]
We study the learning dynamics of generative models that are fed back their own produced content in addition to their original training dataset.
We show that, unless a sufficient amount of external data is introduced at each iteration, any non-trivial temperature leads the model to degenerate.
arXiv Detail & Related papers (2024-04-02T21:51:39Z)
- A Phase Transition in Diffusion Models Reveals the Hierarchical Nature of Data [55.748186000425996]
Recent advancements show that diffusion models can generate high-quality images.
We study this phenomenon in a hierarchical generative model of data.
Our analysis characterises the relationship between time and scale in diffusion models.
arXiv Detail & Related papers (2024-02-26T19:52:33Z)
- Make-A-Shape: a Ten-Million-scale 3D Shape Model [52.701745578415796]
This paper introduces Make-A-Shape, a new 3D generative model designed for efficient training on a vast scale.
We first introduce a wavelet-tree representation that compactly encodes shapes through a subband coefficient filtering scheme.
We derive the subband adaptive training strategy to train our model to effectively learn to generate coarse and detail wavelet coefficients.
arXiv Detail & Related papers (2024-01-20T00:21:58Z)
- Learning Versatile 3D Shape Generation with Improved AR Models [91.87115744375052]
Auto-regressive (AR) models have achieved impressive results in 2D image generation by modeling joint distributions in the grid space.
We propose the Improved Auto-regressive Model (ImAM) for 3D shape generation, which applies discrete representation learning based on a latent vector instead of volumetric grids.
arXiv Detail & Related papers (2023-03-26T12:03:18Z)
- Meta Internal Learning [88.68276505511922]
Internal learning for single-image generation is a framework in which a generator is trained to produce novel images based on a single image.
We propose a meta-learning approach that enables training over a collection of images, in order to model the internal statistics of the sample image more effectively.
Our results show that the models obtained are as suitable as single-image GANs for many common image applications.
arXiv Detail & Related papers (2021-10-06T16:27:38Z)
- Flow-based Generative Models for Learning Manifold to Manifold Mappings [39.60406116984869]
We introduce three kinds of invertible layers for manifold-valued data, which are analogous to their functionality in flow-based generative models.
We show promising results where we can reliably and accurately reconstruct brain images of a field of orientation distribution functions.
arXiv Detail & Related papers (2020-12-18T02:19:18Z)
- Learning Bijective Feature Maps for Linear ICA [73.85904548374575]
We show that existing probabilistic deep generative models (DGMs), which are tailor-made for image data, underperform on non-linear ICA tasks.
To address this, we propose a DGM which combines bijective feature maps with a linear ICA model to learn interpretable latent structures for high-dimensional data.
We create models that converge quickly, are easy to train, and achieve better unsupervised latent factor discovery than flow-based models, linear ICA, and Variational Autoencoders on images.
arXiv Detail & Related papers (2020-02-18T17:58:07Z)