Conditional Generation of Synthetic Geospatial Images from Pixel-level
and Feature-level Inputs
- URL: http://arxiv.org/abs/2109.05201v1
- Date: Sat, 11 Sep 2021 06:58:19 GMT
- Title: Conditional Generation of Synthetic Geospatial Images from Pixel-level
and Feature-level Inputs
- Authors: Xuerong Xiao, Swetava Ganguli, Vipul Pandey
- Abstract summary: We present a conditional generative model, called
VAE-Info-cGAN, for synthesizing semantically rich images simultaneously
conditioned on a pixel-level condition (PLC) and a feature-level condition
(FLC). The proposed model can accurately generate various forms of
spatiotemporal aggregates across different geographic locations while
conditioned only on a raster representation of the road network.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Training robust supervised deep learning models for many geospatial
applications of computer vision is difficult due to a dearth of class-balanced
and diverse training data. Moreover, obtaining enough training data for many
applications is financially prohibitive or may be infeasible, especially when
the application involves modeling rare or extreme events. Synthetically
generating data (and labels) using a generative model that can sample from a
target distribution and exploit the multi-scale nature of images can be an
inexpensive solution to address the scarcity of labeled data. Towards this goal, we
present a deep conditional generative model, called VAE-Info-cGAN, that
combines a Variational Autoencoder (VAE) with a conditional Information
Maximizing Generative Adversarial Network (InfoGAN), for synthesizing
semantically rich images simultaneously conditioned on a pixel-level condition
(PLC) and a macroscopic feature-level condition (FLC). Dimensionally, the PLC
can differ from the synthesized image only in the channel dimension and is
meant to be a task-specific input. The FLC is modeled as an attribute vector in
the latent space of the generated image, which controls the contributions of
various characteristic attributes germane to the target distribution.
Experiments on a
GPS trajectories dataset show that the proposed model can accurately generate
various forms of spatiotemporal aggregates across different geographic
locations while conditioned only on a raster representation of the road
network. The primary intended application of the VAE-Info-cGAN is synthetic
data (and label) generation for targeted data augmentation for computer
vision-based modeling of problems relevant to geospatial analysis and remote
sensing.
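
The two conditioning pathways suggest a concrete wiring: the FLC rides along
with the latent code, InfoGAN-style, while the PLC, which matches the output
spatially and varies only in channels, joins the convolutional pathway. A
minimal PyTorch-style sketch with illustrative layer sizes and names, not the
authors' released code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalGenerator(nn.Module):
    """Illustrative VAE-Info-cGAN-style generator wiring: the FLC enters as
    an attribute vector appended to the latent code, while the PLC is
    injected into the spatial (convolutional) pathway."""

    def __init__(self, noise_dim=64, flc_dim=8, plc_channels=1, out_channels=3):
        super().__init__()
        self.fc = nn.Linear(noise_dim + flc_dim, 128 * 8 * 8)
        self.up = nn.Sequential(
            nn.ConvTranspose2d(128 + plc_channels, 64, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, out_channels, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, z, flc, plc):
        # FLC joins the latent code; PLC is resized onto the spatial pathway.
        h = self.fc(torch.cat([z, flc], dim=1)).view(-1, 128, 8, 8)
        plc_small = F.interpolate(plc, size=h.shape[-2:], mode="nearest")
        return self.up(torch.cat([h, plc_small], dim=1))

# e.g. ConditionalGenerator()(torch.randn(2, 64), torch.randn(2, 8),
#                             torch.rand(2, 1, 32, 32)) -> (2, 3, 32, 32)
```

Here a road-network raster would play the role of the PLC tensor, and the FLC
vector would steer macroscopic attributes of the sample.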
Related papers
- BEHAVIOR Vision Suite: Customizable Dataset Generation via Simulation [57.40024206484446]
We introduce the BEHAVIOR Vision Suite (BVS), a set of tools and assets to generate fully customized synthetic data for systematic evaluation of computer vision models.
BVS supports a large number of adjustable parameters at the scene level.
We showcase three example application scenarios.
arXiv Detail & Related papers (2024-05-15T17:57:56Z) - Few-shot Image Generation via Information Transfer from the Built
Geodesic Surface [2.617962830559083]
We propose a method called Information Transfer from the Built Geodesic Surface (ITBGS).
With the FAGS module, a pseudo-source domain is created by projecting image features from the training dataset into the Pre-Shape Space.
We demonstrate that the proposed method consistently achieves optimal or comparable results across a diverse range of semantically distinct datasets.
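
The "Pre-Shape Space" projection has a textbook construction in shape
analysis: centering removes translation and unit-norm scaling removes size.
A hedged sketch of that construction; the paper's FAGS module may differ in
detail:

```python
import torch

def project_to_preshape(features):
    """Project a (n_samples, dim) feature matrix onto the pre-shape sphere:
    subtract the mean to remove translation, then scale the whole
    configuration to unit Frobenius norm to remove size."""
    centered = features - features.mean(dim=0, keepdim=True)
    return centered / centered.norm()
```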
arXiv Detail & Related papers (2024-01-03T13:57:09Z) - Learned representation-guided diffusion models for large-image generation [58.192263311786824]
We introduce a novel approach that trains diffusion models conditioned on embeddings from self-supervised learning (SSL).
Our diffusion models successfully project these features back to high-quality histopathology and remote sensing images.
Augmenting real data by generating variations of real images improves downstream accuracy for patch-level and larger, image-scale classification tasks.
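
Conditioning a denoiser on a frozen SSL embedding can be sketched as follows.
This is a toy epsilon-predictor with hypothetical layer sizes (`ssl_dim=384`
assumes a ViT-S-style encoder), not the paper's architecture:

```python
import torch
import torch.nn as nn

class SSLConditionedDenoiser(nn.Module):
    """Toy diffusion epsilon-predictor conditioned on an SSL embedding."""

    def __init__(self, channels=3, ssl_dim=384, hidden=64):
        super().__init__()
        self.cond = nn.Linear(ssl_dim + 1, hidden)  # SSL embedding + timestep
        self.net = nn.Sequential(
            nn.Conv2d(channels + hidden, hidden, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, x_t, t, ssl_emb):
        # Broadcast the conditioning vector over the spatial grid.
        c = self.cond(torch.cat([ssl_emb, t[:, None].float()], dim=1))
        c = c[:, :, None, None].expand(-1, -1, *x_t.shape[-2:])
        return self.net(torch.cat([x_t, c], dim=1))
```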
arXiv Detail & Related papers (2023-12-12T14:45:45Z) - T1: Scaling Diffusion Probabilistic Fields to High-Resolution on Unified
Visual Modalities [69.16656086708291]
Diffusion Probabilistic Field (DPF) models the distribution of continuous functions defined over metric spaces.
We propose a new model comprising a view-wise sampling algorithm that focuses on local structure learning.
The model can be scaled to generate high-resolution data while unifying multiple modalities.
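
A field treats an image as coordinate-value pairs over a metric space, and
view-wise sampling then amounts to drawing a local window of those pairs per
training step. An illustrative sketch, not the paper's sampler:

```python
import torch

def sample_local_view(image, window=16, n_points=128):
    """Represent a (c, h, w) image as (coordinate, value) pairs over
    [0, 1]^2 and draw pairs from a random local window."""
    c, h, w = image.shape
    top = torch.randint(0, h - window + 1, (1,)).item()
    left = torch.randint(0, w - window + 1, (1,)).item()
    yy, xx = torch.meshgrid(torch.arange(top, top + window),
                            torch.arange(left, left + window), indexing="ij")
    coords = torch.stack([yy.flatten() / (h - 1), xx.flatten() / (w - 1)], dim=1)
    values = image[:, top:top + window, left:left + window].reshape(c, -1).t()
    idx = torch.randperm(window * window)[:n_points]
    return coords[idx], values[idx]
```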
arXiv Detail & Related papers (2023-05-24T03:32:03Z) - CSP: Self-Supervised Contrastive Spatial Pre-Training for
Geospatial-Visual Representations [90.50864830038202]
We present Contrastive Spatial Pre-Training (CSP), a self-supervised learning framework for geo-tagged images.
We use a dual-encoder to separately encode the images and their corresponding geo-locations, and use contrastive objectives to learn effective location representations from images.
CSP significantly boosts model performance, with 10-34% relative improvement across various labeled training data sampling ratios.
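
The dual-encoder contrastive setup can be summarized as a symmetric InfoNCE
loss between image and geo-location embeddings; a generic stand-in, not CSP's
exact objectives:

```python
import torch
import torch.nn.functional as F

def dual_encoder_infonce(img_emb, loc_emb, temperature=0.07):
    """Symmetric InfoNCE: matching image/location pairs sit on the
    diagonal of the similarity matrix and serve as positives."""
    img = F.normalize(img_emb, dim=1)
    loc = F.normalize(loc_emb, dim=1)
    logits = img @ loc.t() / temperature          # (B, B) similarities
    targets = torch.arange(len(img), device=img.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```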
arXiv Detail & Related papers (2023-05-01T23:11:18Z) - VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
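
A cheap and commonly used stand-in for latent-space geodesics is spherical
interpolation between codes, which tends to track high-density regions better
than straight lines when latents are roughly Gaussian. Illustrative only, not
VTAE's Riemannian computation:

```python
import torch

def slerp(z0, z1, t):
    """Spherical interpolation between two latent codes for t in [0, 1]."""
    u0, u1 = z0 / z0.norm(), z1 / z1.norm()
    omega = torch.acos(torch.clamp((u0 * u1).sum(), -1 + 1e-7, 1 - 1e-7))
    return (torch.sin((1 - t) * omega) * z0
            + torch.sin(t * omega) * z1) / torch.sin(omega)
```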
arXiv Detail & Related papers (2023-04-03T13:13:19Z) - Robust Semi-supervised Federated Learning for Images Automatic
Recognition in Internet of Drones [57.468730437381076]
We present a Semi-supervised Federated Learning (SSFL) framework for privacy-preserving UAV image recognition.
There are significant differences in the number, features, and distribution of local data collected by UAVs using different camera modules.
We propose an aggregation rule based on the frequency of the client's participation in training, namely the FedFreq aggregation rule.
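
Read literally, the rule weights each client's update by how often that client
has participated in training. A minimal sketch over state dicts; the paper's
exact weighting scheme may differ, and float parameters are assumed:

```python
def fedfreq_aggregate(client_states, client_freqs):
    """Frequency-weighted average of client model state dicts."""
    total = float(sum(client_freqs))
    return {
        name: sum((f / total) * state[name]
                  for f, state in zip(client_freqs, client_states))
        for name in client_states[0]
    }
```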
arXiv Detail & Related papers (2022-01-03T16:49:33Z) - Hierarchical Graph-Convolutional Variational AutoEncoding for Generative
Modelling of Human Motion [1.2599533416395767]
Models of human motion commonly focus either on trajectory prediction or action classification but rarely both.
Here we propose a novel architecture based on hierarchical variational autoencoders and deep graph convolutional neural networks for generating a holistic model of action over multiple time-scales.
We show this Hierarchical Graph-convolutional Variational Autoencoder (HG-VAE) to be capable of generating coherent actions, detecting out-of-distribution data, and imputing missing data by gradient ascent on the model's posterior.
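
Imputation by gradient ascent on the posterior can be sketched generically:
treat the missing entries as free parameters and climb the model's
log-probability. Here `model_log_prob` is a hypothetical stand-in for the
trained model's differentiable objective:

```python
import torch

def impute_by_gradient_ascent(model_log_prob, x, mask, steps=200, lr=0.05):
    """Fill entries of x where mask is False; mask is a bool tensor that
    is True at observed positions."""
    missing = torch.randn_like(x)            # free parameters for the gaps
    missing.requires_grad_(True)
    opt = torch.optim.Adam([missing], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        filled = torch.where(mask, x, missing)
        (-model_log_prob(filled)).backward() # ascend the log-probability
        opt.step()
    return torch.where(mask, x, missing).detach()
```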
arXiv Detail & Related papers (2021-11-24T16:21:07Z) - VAE-Info-cGAN: Generating Synthetic Images by Combining Pixel-level and
Feature-level Geospatial Conditional Inputs [0.0]
We present a conditional generative model for synthesizing semantically rich images simultaneously conditioned on a pixel-level condition (PLC) and a feature-level condition (FLC).
Experiments on a GPS dataset show that the proposed model can accurately generate various forms of macroscopic aggregates across different geographic locations.
arXiv Detail & Related papers (2020-12-08T03:46:19Z) - Generating Annotated High-Fidelity Images Containing Multiple Coherent
Objects [10.783993190686132]
We propose a multi-object generation framework that can synthesize images with multiple objects without explicitly requiring contextual information.
We demonstrate how coherency and fidelity are preserved with our method through experiments on the Multi-MNIST and CLEVR datasets.
arXiv Detail & Related papers (2020-06-22T11:33:55Z) - Contextual Encoder-Decoder Network for Visual Saliency Prediction [42.047816176307066]
We propose an approach based on a convolutional neural network pre-trained on a large-scale image classification task.
We combine the resulting representations with global scene information for accurately predicting visual saliency.
Compared to state-of-the-art approaches, the network is based on a lightweight image classification backbone.
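
Combining per-location features with a globally pooled scene descriptor is a
small addition on top of any backbone. A hedged sketch of that idea, a generic
reading rather than the paper's exact decoder:

```python
import torch
import torch.nn as nn

class ContextualSaliencyHead(nn.Module):
    """Fuse a (B, C, H, W) feature map with its global average before
    decoding a single-channel saliency map in [0, 1]."""

    def __init__(self, feat_ch=256):
        super().__init__()
        self.decode = nn.Sequential(
            nn.Conv2d(feat_ch * 2, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 1), nn.Sigmoid(),
        )

    def forward(self, feats):
        g = feats.mean(dim=(2, 3), keepdim=True).expand_as(feats)
        return self.decode(torch.cat([feats, g], dim=1))
```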
arXiv Detail & Related papers (2019-02-18T16:15:25Z)