DeepNAG: Deep Non-Adversarial Gesture Generation
- URL: http://arxiv.org/abs/2011.09149v1
- Date: Wed, 18 Nov 2020 08:00:12 GMT
- Title: DeepNAG: Deep Non-Adversarial Gesture Generation
- Authors: Mehran Maghoumi, Eugene M. Taranta II, Joseph J. LaViola Jr
- Abstract summary: Generative adversarial networks (GANs) have shown superior image data augmentation performance.
However, GANs require simultaneous generator and discriminator network training.
We first discuss a novel, device-agnostic GAN model for gesture synthesis called DeepGAN.
- Score: 4.46895288699085
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Synthetic data generation to improve classification performance (data
augmentation) is a well-studied problem. Recently, generative adversarial
networks (GANs) have shown superior image data augmentation performance, but
their suitability in gesture synthesis has received inadequate attention.
Further, GANs prohibitively require simultaneous generator and discriminator
network training. We tackle both issues in this work. We first discuss a novel,
device-agnostic GAN model for gesture synthesis called DeepGAN. Thereafter, we
formulate DeepNAG by introducing a new differentiable loss function based on
dynamic time warping and the average Hausdorff distance, which allows us to
train DeepGAN's generator without requiring a discriminator. Through
evaluations, we compare the utility of DeepGAN and DeepNAG against two
alternative techniques for training five recognizers using data augmentation
over six datasets. We further investigate the perceived quality of synthesized
samples via an Amazon Mechanical Turk user study based on the HYPE benchmark.
We find that DeepNAG outperforms DeepGAN in accuracy, training time (up to 17x
faster), and realism, thereby opening the door to a new line of research in
generator network design and training for gesture synthesis. Our source code is
available at https://www.deepnag.com.
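The core of DeepNAG is a differentiable, discriminator-free generator loss built from soft dynamic time warping and the average Hausdorff distance. The snippet below is a minimal PyTorch sketch of that general idea, not the paper's exact formulation: the pairing of samples, the set-distance term, and the weighting are simplifying assumptions here, and the authors' released code at https://www.deepnag.com is the reference implementation.

```python
# Hedged sketch: a discriminator-free, similarity-based generator loss in the
# spirit of DeepNAG, combining a soft (differentiable) DTW alignment cost with
# an average Hausdorff term over the batch. Simplified; not the paper's exact loss.
import torch


def soft_dtw(x, y, gamma=0.1):
    """Differentiable DTW cost between two sequences x: (n, d) and y: (m, d)."""
    n, m = x.size(0), y.size(0)
    cost = torch.cdist(x, y) ** 2                      # pairwise squared distances
    r = x.new_full((n + 1, m + 1), float('inf'))
    r[0, 0] = 0.0
    for i in range(1, n + 1):                          # O(n*m) loop; fine for a sketch
        for j in range(1, m + 1):
            # soft-min over the three DTW predecessors
            prev = torch.stack([r[i - 1, j], r[i, j - 1], r[i - 1, j - 1]])
            r[i, j] = cost[i - 1, j - 1] - gamma * torch.logsumexp(-prev / gamma, dim=0)
    return r[n, m]


def avg_hausdorff(a, b):
    """Symmetric average Hausdorff distance between point sets a: (n, d), b: (m, d)."""
    d = torch.cdist(a, b)
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()


def generator_loss(fake_batch, real_batch, gamma=0.1):
    """fake_batch/real_batch: same-class gesture tensors of shape (seq_len, d).

    Assumes gestures are resampled to a fixed length and pairs fake and real
    samples one-to-one for simplicity.
    """
    dtw_term = torch.stack([soft_dtw(f, r, gamma)
                            for f, r in zip(fake_batch, real_batch)]).mean()
    # treat each sequence's flattened trajectory as one point for the set term
    fake_pts = torch.stack([f.reshape(-1) for f in fake_batch])
    real_pts = torch.stack([r.reshape(-1) for r in real_batch])
    return dtw_term + avg_hausdorff(fake_pts, real_pts)
```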
Related papers
- FusionLLM: A Decentralized LLM Training System on Geo-distributed GPUs with Adaptive Compression [55.992528247880685]
Decentralized training faces significant challenges regarding system design and efficiency.
We present FusionLLM, a decentralized training system designed and implemented for training large deep neural networks (DNNs).
We show that our system and method can achieve a 1.45-9.39x speedup over baseline methods while ensuring convergence.
arXiv Detail & Related papers (2024-10-16T16:13:19Z)
- Generative adversarial networks for data-scarce spectral applications [0.0]
We report on an application of GANs in the domain of synthetic spectral data generation.
We show that CWGANs can act as surrogate models with improved performance in the low-data regime.
arXiv Detail & Related papers (2023-07-14T16:27:24Z)
- Spider GAN: Leveraging Friendly Neighbors to Accelerate GAN Training [20.03447539784024]
We propose a novel approach for training GANs with images as inputs, but without enforcing any pairwise constraints.
The process can be made efficient by identifying closely related datasets, or a "friendly neighborhood" of the target distribution.
We show that the Spider GAN formulation results in faster convergence, as the generator can discover correspondence even between seemingly unrelated datasets.
arXiv Detail & Related papers (2023-05-12T17:03:18Z)
- LD-GAN: Low-Dimensional Generative Adversarial Network for Spectral Image Generation with Variance Regularization [72.4394510913927]
Deep learning methods are state-of-the-art for spectral image (SI) computational tasks.
GANs enable diverse augmentation by learning and sampling from the data distribution.
GAN-based SI generation is challenging since the high-dimensional nature of this kind of data hinders the convergence of GAN training, yielding suboptimal generation.
We propose a statistical regularization to control the low-dimensional representation variance for the autoencoder training and to achieve high diversity of samples generated with the GAN.
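A minimal sketch of one plausible form such a latent-variance regularizer could take during autoencoder training; the exact statistical regularization used in LD-GAN may differ, and `encoder`, `decoder`, `target_var`, and `weight` are illustrative names.

```python
# Hedged sketch: reconstruction loss plus a penalty that keeps the per-dimension
# variance of the low-dimensional representation near a target value, so the
# latent space stays well spread for a GAN trained on it. Illustrative only.
import torch


def variance_regularized_ae_loss(x, encoder, decoder, target_var=1.0, weight=0.1):
    z = encoder(x)                                   # (batch, latent_dim)
    recon = decoder(z)
    recon_loss = torch.nn.functional.mse_loss(recon, x)
    var_penalty = ((z.var(dim=0, unbiased=False) - target_var) ** 2).mean()
    return recon_loss + weight * var_penalty
```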
arXiv Detail & Related papers (2023-04-29T00:25:02Z)
- NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction [79.13750275141139]
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder with hash coding is adopted to help the network capture high-frequency details.
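A minimal sketch of the attenuation-field idea: a coordinate MLP mapping encoded 3D positions to non-negative attenuation values. A simple frequency encoding stands in for the paper's learned hash encoding, and the layer sizes are illustrative.

```python
# Hedged sketch: coordinate network for attenuation values. The frequency
# encoding below is a stand-in for the learned hash encoding used by NAF.
import torch
import torch.nn as nn


class AttenuationField(nn.Module):
    def __init__(self, num_freqs=8, hidden=128):
        super().__init__()
        self.num_freqs = num_freqs
        in_dim = 3 * 2 * num_freqs
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),     # attenuation is non-negative
        )

    def encode(self, xyz):
        freqs = 2.0 ** torch.arange(self.num_freqs, device=xyz.device)
        angles = xyz.unsqueeze(-1) * freqs           # (N, 3, num_freqs)
        enc = torch.cat([angles.sin(), angles.cos()], dim=-1)
        return enc.flatten(start_dim=1)              # (N, 6 * num_freqs)

    def forward(self, xyz):                          # xyz: (N, 3) in [-1, 1]
        return self.mlp(self.encode(xyz))            # (N, 1) attenuation values
```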
arXiv Detail & Related papers (2022-09-29T04:06:00Z)
- Deceive D: Adaptive Pseudo Augmentation for GAN Training with Limited Data [125.7135706352493]
Generative adversarial networks (GANs) typically require ample data for training in order to synthesize high-fidelity images.
Recent studies have shown that training GANs with limited data remains formidable due to discriminator overfitting.
This paper introduces a novel strategy called Adaptive Pseudo Augmentation (APA) to encourage healthy competition between the generator and the discriminator.
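A minimal sketch of the pseudo-augmentation idea: with an adaptive probability p, generated images are shown to the discriminator as if they were real, which eases discriminator overfitting. The adaptation heuristic and constants below are illustrative; the paper's exact schedule differs.

```python
# Hedged sketch of adaptive pseudo augmentation. The probability p of labeling
# fakes as real is nudged up or down from a simple overfitting heuristic on the
# discriminator's real logits. Assumes real and fake batches have the same size.
import torch


class AdaptivePseudoAugment:
    def __init__(self, step=0.01, target=0.6):
        self.p = 0.0            # probability of replacing a real sample with a fake
        self.step = step        # adjustment speed for p
        self.target = target    # desired level of the overfitting heuristic

    def update(self, real_logits):
        # heuristic in [-1, 1]: large positive values suggest D overfits to real data
        lambda_r = torch.sign(real_logits).mean().item()
        self.p += self.step if lambda_r > self.target else -self.step
        self.p = min(max(self.p, 0.0), 1.0)

    def mix(self, real_images, fake_images):
        """Replace a random subset of real images with detached fakes (still labeled real)."""
        mask = torch.rand(real_images.size(0), device=real_images.device) < self.p
        mixed = real_images.clone()
        mixed[mask] = fake_images.detach()[mask]
        return mixed
```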
arXiv Detail & Related papers (2021-11-12T18:13:45Z)
- Rebooting ACGAN: Auxiliary Classifier GANs with Stable Training [45.70113212633225]
Conditional Generative Adversarial Networks (cGANs) generate realistic images by incorporating class information into the GAN.
One of the most popular cGANs is the auxiliary classifier GAN with softmax cross-entropy loss (ACGAN).
ACGAN also tends to generate easily classifiable samples with a lack of diversity.
arXiv Detail & Related papers (2021-11-01T17:51:33Z)
- Local Augmentation for Graph Neural Networks [78.48812244668017]
We introduce local augmentation, which enhances node features using their local subgraph structures.
Based on the local augmentation, we further design a novel framework: LA-GNN, which can apply to any GNN models in a plug-and-play manner.
arXiv Detail & Related papers (2021-09-08T18:10:08Z)
- Bag of Tricks for Training Deeper Graph Neural Networks: A Comprehensive Benchmark Study [100.27567794045045]
Training deep graph neural networks (GNNs) is notoriously hard.
We present the first fair and reproducible benchmark dedicated to assessing the "tricks" of training deep GNNs.
arXiv Detail & Related papers (2021-08-24T05:00:37Z)
- xAI-GAN: Enhancing Generative Adversarial Networks via Explainable AI Systems [16.360144499713524]
Generative Adversarial Networks (GANs) are a revolutionary class of Deep Neural Networks (DNNs) that have been successfully used to generate realistic images, music, text, and other data.
We propose a new class of GAN that leverages recent advances in explainable AI (xAI) systems to provide a "richer" form of corrective feedback from discriminators to generators.
We observe that xAI-GANs provide an improvement of up to 23.18% in the quality of generated images over standard GANs on both the MNIST and FMNIST datasets.
arXiv Detail & Related papers (2020-02-24T18:38:13Z)