An attempt to generate new bridge types from latent space of energy-based model
- URL: http://arxiv.org/abs/2401.17657v1
- Date: Wed, 31 Jan 2024 08:21:35 GMT
- Title: An attempt to generate new bridge types from latent space of energy-based model
- Authors: Hongjun Zhang
- Abstract summary: An energy function is trained on a symmetric structured image dataset of three-span beam bridges, arch bridges, cable-stayed bridges, and suspension bridges. Langevin dynamics is then used to generate new samples with low energy values.
- Score: 2.05750372679553
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An energy-based model is used for bridge-type innovation. The loss
function is explained through game theory, so the logic is clear and the
formula is simple; this avoids explaining the loss function via maximum
likelihood estimation and eliminates the need for Monte Carlo methods to
compute the normalizing denominator. Assuming that the bridge-type population
follows a Boltzmann distribution, a neural network is constructed to represent
the energy function. Langevin dynamics is used to generate new samples with low
energy values, thereby establishing an energy-based generative model of bridge
types. The energy function is trained on a symmetric structured image dataset
of three-span beam bridges, arch bridges, cable-stayed bridges, and suspension
bridges so that it accurately assigns energy values to real and fake samples.
Sampling points are drawn from the latent space and transformed by the energy
function, via gradient descent, into low-energy samples, thereby generating new
bridge types different from those in the dataset. Because training in this
attempt was unstable and slow, new bridge types are generated only rarely and
the generated images have low resolution.
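The sampling procedure described in the abstract (initialize points, then push them toward low energy with gradient steps plus injected noise) can be sketched as follows. This is a minimal illustration with a hypothetical quadratic toy energy in place of the paper's trained neural network:

```python
import numpy as np

def energy(x):
    # Toy quadratic energy with its minimum at the origin; the paper
    # instead represents the energy with a neural network trained on
    # bridge images.
    return 0.5 * np.sum(x ** 2)

def grad_energy(x):
    # Analytic gradient of the toy energy; a neural EBM would use autodiff.
    return x

def langevin_sample(x0, step=0.01, n_steps=1000, rng=None):
    """Unadjusted Langevin dynamics:
    x <- x - (step/2) * dE/dx + sqrt(step) * noise."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = x0.copy()
    for _ in range(n_steps):
        noise = rng.normal(size=x.shape)
        x = x - 0.5 * step * grad_energy(x) + np.sqrt(step) * noise
    return x

# Start from a far-from-data point and let the dynamics pull it
# toward the low-energy region.
x0 = np.full(4, 10.0)
x = langevin_sample(x0)
# energy(x) is now far below energy(x0)
```

Without the noise term this reduces to plain gradient descent on the energy, which is essentially the deterministic variant the abstract describes for transforming latent samples into low-energy images.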
Related papers
- Variational Potential Flow: A Novel Probabilistic Framework for Energy-Based Generative Modelling [10.926841288976684]
We present a novel energy-based generative framework, Variational Potential Flow (VAPO)
VAPO aims to learn a potential energy function whose gradient (flow) guides the prior samples, so that their density evolution closely follows an approximate data likelihood homotopy.
Images can be generated after training the potential energy, by initializing the samples from Gaussian prior and solving the ODE governing the potential flow on a fixed time interval.
arXiv Detail & Related papers (2024-07-21T18:08:12Z) - An attempt to generate new bridge types from latent space of denoising diffusion Implicit model [2.05750372679553]
Use denoising diffusion implicit model for bridge-type innovation.
The process of adding noise to an image and then denoising it is likened to a corpse decomposing and a detective reconstructing the scene of the crime, to help beginners understand.
arXiv Detail & Related papers (2024-02-11T08:54:37Z) - An attempt to generate new bridge types from latent space of generative flow [2.05750372679553]
The basic principle of normalizing flow is introduced in a simple and concise manner.
Treating the dataset as a sample from the population, obtaining the normalizing flow is essentially a sampling survey.
Using a symmetric structured image dataset of three-span beam bridges, arch bridges, cable-stayed bridges and suspension bridges, a normalizing flow is constructed based on the Glow API in the library.
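As a rough illustration of the principle behind Glow-style normalizing flows, the model is a stack of exactly invertible layers such as affine coupling, each with a cheap log-determinant. Below is a minimal numpy sketch of one coupling layer, with toy scale/shift functions standing in for the learned networks:

```python
import numpy as np

def coupling_forward(x, scale, shift):
    """One affine coupling layer (the building block Glow stacks):
    the first half of x passes through unchanged and parameterizes an
    affine transform of the second half, so the layer is invertible
    and its log-determinant is trivial to compute."""
    x1, x2 = np.split(x, 2)
    y2 = x2 * np.exp(scale(x1)) + shift(x1)
    log_det = np.sum(scale(x1))  # log |det dy/dx| for this layer
    return np.concatenate([x1, y2]), log_det

def coupling_inverse(y, scale, shift):
    # Invert by reading the same parameters off the untouched half.
    y1, y2 = np.split(y, 2)
    x2 = (y2 - shift(y1)) * np.exp(-scale(y1))
    return np.concatenate([y1, x2])

# Toy parameter functions standing in for Glow's conv nets.
scale = lambda h: 0.1 * h
shift = lambda h: h - 1.0

x = np.array([1.0, 2.0, 3.0, 4.0])
y, log_det = coupling_forward(x, scale, shift)
x_rec = coupling_inverse(y, scale, shift)
assert np.allclose(x, x_rec)  # the flow is exactly invertible
```

Stacking many such layers (with permutations between them) gives an expressive yet invertible map, and summing the per-layer log-determinants gives the exact likelihood via the change-of-variables formula.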
arXiv Detail & Related papers (2024-01-18T06:26:44Z) - An attempt to generate new bridge types from latent space of PixelCNN [2.05750372679553]
PixelCNN can capture the statistical structure of the images and calculate the probability distribution of the next pixel.
By sampling from the obtained latent space, new bridge types different from the training dataset can be generated.
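The autoregressive sampling this entry describes (each pixel drawn from a distribution conditioned on the pixels already generated) can be sketched with a hypothetical toy conditional in place of PixelCNN's masked convolutions:

```python
import numpy as np

def next_pixel_probs(prev_pixels, n_values=4):
    """Hypothetical stand-in for PixelCNN's conditional p(x_i | x_<i):
    here the next pixel simply tends to repeat the previous value; the
    real model computes this distribution with masked convolutions."""
    probs = np.full(n_values, 1.0)
    if prev_pixels:
        probs[prev_pixels[-1]] += 4.0  # bias toward the previous value
    return probs / probs.sum()

def sample_image(n_pixels=16, n_values=4, seed=0):
    """Autoregressive sampling: draw pixels one at a time, each
    conditioned on everything sampled so far."""
    rng = np.random.default_rng(seed)
    pixels = []
    for _ in range(n_pixels):
        p = next_pixel_probs(pixels, n_values)
        pixels.append(int(rng.choice(n_values, p=p)))
    return pixels

img = sample_image()
```

The sequential loop is the defining property: unlike the energy-based or flow models above, generation cost grows with the number of pixels because each draw depends on all previous ones.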
arXiv Detail & Related papers (2024-01-11T15:06:25Z) - An attempt to generate new bridge types from latent space of generative adversarial network [2.05750372679553]
A symmetric structured image dataset of three-span beam bridges, arch bridges, cable-stayed bridges and suspension bridges is used.
Based on the Python programming language and the Keras deep learning framework, a generative adversarial network is constructed and trained.
arXiv Detail & Related papers (2024-01-01T08:46:29Z) - Energy Transformer [64.22957136952725]
Our work combines aspects of three promising paradigms in machine learning, namely, attention mechanism, energy-based models, and associative memory.
We propose a novel architecture, called the Energy Transformer (or ET for short), that uses a sequence of attention layers that are purposely designed to minimize a specifically engineered energy function.
arXiv Detail & Related papers (2023-02-14T18:51:22Z) - An Energy-Based Prior for Generative Saliency [62.79775297611203]
We propose a novel generative saliency prediction framework that adopts an informative energy-based model as a prior distribution.
With the generative saliency model, we can obtain a pixel-wise uncertainty map from an image, indicating model confidence in the saliency prediction.
Experimental results show that our generative saliency model with an energy-based prior can achieve not only accurate saliency predictions but also reliable uncertainty maps consistent with human perception.
arXiv Detail & Related papers (2022-04-19T10:51:00Z) - Learning Generative Vision Transformer with Energy-Based Latent Space for Saliency Prediction [51.80191416661064]
We propose a novel vision transformer with latent variables following an informative energy-based prior for salient object detection.
Both the vision transformer network and the energy-based prior model are jointly trained via Markov chain Monte Carlo-based maximum likelihood estimation.
With the generative vision transformer, we can easily obtain a pixel-wise uncertainty map from an image, which indicates the model confidence in predicting saliency from the image.
arXiv Detail & Related papers (2021-12-27T06:04:33Z) - Controllable and Compositional Generation with Latent-Space Energy-Based Models [60.87740144816278]
Controllable generation is one of the key requirements for successful adoption of deep generative models in real-world applications.
In this work, we use energy-based models (EBMs) to handle compositional generation over a set of attributes.
By composing energy functions with logical operators, this work is the first to achieve such compositionality in generating photo-realistic images of resolution 1024x1024.
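The composition of energy functions with logical operators mentioned above follows standard EBM algebra: under a Boltzmann distribution, conjunction adds energies (product of densities) and disjunction mixes densities (a soft minimum of energies). A sketch with hypothetical toy attribute energies:

```python
import numpy as np

# Toy attribute energies: low energy where the attribute holds.
# These are hypothetical stand-ins for the paper's learned attribute EBMs.
e_round = lambda x: (x[0] - 1.0) ** 2   # "round" attribute
e_red   = lambda x: (x[1] - 2.0) ** 2   # "red" attribute

def e_and(e1, e2):
    # Conjunction: a sample must satisfy both attributes, so energies
    # add (equivalently, the Boltzmann densities multiply).
    return lambda x: e1(x) + e2(x)

def e_or(e1, e2):
    # Disjunction: a mixture of the two densities, which in energy
    # space is a soft minimum computed stably with logaddexp.
    return lambda x: -np.logaddexp(-e1(x), -e2(x))

x_both = np.array([1.0, 2.0])   # satisfies both attributes
x_red  = np.array([5.0, 2.0])   # satisfies only "red"
x_none = np.array([5.0, 5.0])   # satisfies neither

assert e_and(e_round, e_red)(x_both) < e_and(e_round, e_red)(x_red)
assert e_or(e_round, e_red)(x_red) < e_or(e_round, e_red)(x_none)
```

Sampling from a composed energy (e.g. with Langevin dynamics) then yields images satisfying the combined attribute formula without retraining the individual models.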
arXiv Detail & Related papers (2021-10-21T03:31:45Z) - Low-Light Image Enhancement with Normalizing Flow [92.52290821418778]
In this paper, we investigate to model this one-to-many relationship via a proposed normalizing flow model.
An invertible network takes the low-light images/features as the condition and learns to map the distribution of normally exposed images into a Gaussian distribution.
The experimental results on the existing benchmark datasets show our method achieves better quantitative and qualitative results, obtaining better-exposed illumination, less noise and artifact, and richer colors.
arXiv Detail & Related papers (2021-09-13T12:45:08Z) - Generative PointNet: Deep Energy-Based Learning on Unordered Point Sets for 3D Generation, Reconstruction and Classification [136.57669231704858]
We propose a generative model of unordered point sets, such as point clouds, in the form of an energy-based model.
We call our model the Generative PointNet because it can be derived from the discriminative PointNet.
arXiv Detail & Related papers (2020-04-02T23:08:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.