On the Suitability of Representations for Quality Diversity Optimization of Shapes
- URL: http://arxiv.org/abs/2304.03520v1
- Date: Fri, 7 Apr 2023 07:34:23 GMT
- Title: On the Suitability of Representations for Quality Diversity Optimization of Shapes
- Authors: Ludovico Scarton, Alexander Hagg
- Abstract summary: The representation, or encoding, utilized in evolutionary algorithms has a substantial effect on their performance.
This study compares the impact of several representations, including direct encoding, a dictionary-based representation, parametric encoding, compositional pattern producing networks, and cellular automata, on the generation of voxelized meshes.
- Score: 77.34726150561087
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The representation, or encoding, utilized in evolutionary algorithms has a
substantial effect on their performance. Examination of the suitability of
widely used representations for quality diversity optimization (QD) in robotic
domains has yielded inconsistent results regarding the most appropriate
encoding method. Given the domain-dependent nature of QD, additional evidence
from other domains is necessary. This study compares the impact of several
representations, including direct encoding, a dictionary-based representation,
parametric encoding, compositional pattern producing networks, and cellular
automata, on the generation of voxelized meshes in an architecture setting. The
results reveal that some indirect encodings outperform direct encodings and can
generate more diverse solution sets, especially when considering full
phenotypic diversity. The paper introduces a multi-encoding QD approach that
incorporates all evaluated representations in the same archive. Species of
encodings compete on the basis of phenotypic features, leading to an approach
that demonstrates similar performance to the best single-encoding QD approach.
This is noteworthy, as it does not always require the contribution of the
best-performing single encoding.
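The multi-encoding approach described above can be illustrated with a minimal sketch: candidates produced by different encodings compete for the same archive cells on the basis of phenotypic features, and the better elite wins regardless of which encoding produced it. The toy domain, the two genome types, and all names below are illustrative assumptions, not the paper's actual code or domain.

```python
import random

random.seed(0)
GRID = 10  # archive resolution per phenotypic feature

def direct_genome():
    # Direct encoding: a flat list of 16 voxel occupancies.
    return ("direct", [random.random() for _ in range(16)])

def parametric_genome():
    # Parametric (indirect) encoding: 3 parameters expanded during development.
    return ("parametric", [random.random() for _ in range(3)])

def develop(genome):
    """Map any genome to a phenotype (a toy 16-value voxel vector)."""
    kind, genes = genome
    if kind == "direct":
        return genes
    # Expand the 3 parameters into 16 values (illustrative development step).
    return [genes[i % 3] for i in range(16)]

def features(phenotype):
    """Two phenotypic descriptors: mean occupancy and spread."""
    mean = sum(phenotype) / len(phenotype)
    spread = max(phenotype) - min(phenotype)
    return (min(int(mean * GRID), GRID - 1), min(int(spread * GRID), GRID - 1))

def fitness(phenotype):
    # Toy objective: total occupancy.
    return sum(phenotype)

archive = {}  # cell -> (fitness, genome); all encodings share one archive

for _ in range(2000):
    genome = random.choice([direct_genome, parametric_genome])()
    pheno = develop(genome)
    cell = features(pheno)
    fit = fitness(pheno)
    if cell not in archive or fit > archive[cell][0]:
        archive[cell] = (fit, genome)  # better elite wins, regardless of encoding

kinds = [genome[0] for _, genome in archive.values()]
print(len(archive), "cells filled;", kinds.count("direct"), "direct elites")
```

Because replacement is decided purely by fitness within a phenotypic cell, the archive can end up dominated by whichever encoding reaches each region of feature space best, mirroring the species-competition behavior the abstract describes.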
Related papers
- High Efficiency Image Compression for Large Visual-Language Models [14.484831372497437]
Large visual language models (LVLMs) have shown impressive performance and promising generalization capability in multi-modal tasks.
We propose a variable image compression framework consisting of a pre-editing module and an end-to-end codec to achieve promising rate-accuracy performance.
arXiv Detail & Related papers (2024-07-24T07:37:12Z)
- Tabular Learning: Encoding for Entity and Context Embeddings [0.0]
This work examines the effect of different encoding techniques on entity and context embeddings.
Applying different preprocessing methods and network architectures over several datasets resulted in a benchmark on how the encoders influence the learning outcome of the networks.
arXiv Detail & Related papers (2024-03-28T13:29:29Z)
- Unified Generation, Reconstruction, and Representation: Generalized Diffusion with Adaptive Latent Encoding-Decoding [90.77521413857448]
Deep generative models are anchored in three core capabilities -- generating new instances, reconstructing inputs, and learning compact representations.
We introduce Generalized Encoding-Decoding Diffusion Probabilistic Models (EDDPMs).
EDDPMs generalize the Gaussian noising-denoising in standard diffusion by introducing parameterized encoding-decoding.
Experiments on text, proteins, and images demonstrate the flexibility to handle diverse data and tasks.
arXiv Detail & Related papers (2024-02-29T10:08:57Z)
- Vector Quantized Wasserstein Auto-Encoder [57.29764749855623]
We study learning deep discrete representations from the generative viewpoint.
We endow discrete distributions over sequences of codewords and learn a deterministic decoder that transports the distribution over the sequences of codewords to the data distribution.
We develop further theory to connect it with the clustering viewpoint of the Wasserstein (WS) distance, allowing for a better and more controllable clustering solution.
arXiv Detail & Related papers (2023-02-12T13:51:36Z)
- High-Quality Pluralistic Image Completion via Code Shared VQGAN [51.7805154545948]
We present a novel framework for pluralistic image completion that can achieve both high quality and diversity at much faster inference speed.
Our framework is able to learn semantically-rich discrete codes efficiently and robustly, resulting in much better image reconstruction quality.
arXiv Detail & Related papers (2022-04-05T01:47:35Z)
- On the Encoder-Decoder Incompatibility in Variational Text Modeling and Beyond [82.18770740564642]
Variational autoencoders (VAEs) combine latent variables with amortized variational inference.
We observe the encoder-decoder incompatibility that leads to poor parameterizations of the data manifold.
We propose Coupled-VAE, which couples a VAE model with a deterministic autoencoder with the same structure.
arXiv Detail & Related papers (2020-04-20T10:34:10Z)
- Discovering Representations for Black-box Optimization [73.59962178534361]
We show that black-box optimization encodings can be automatically learned, rather than hand designed.
We show that learned representations make it possible to solve high-dimensional problems with orders of magnitude fewer evaluations than the standard MAP-Elites.
arXiv Detail & Related papers (2020-03-09T20:06:20Z)
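The last entry's claim, that a learned low-dimensional representation lets a search use far fewer evaluations than operating directly in the high-dimensional space, can be sketched minimally. The "decoder" below is a fixed random linear map standing in for a learned representation (in the paper it would be trained); the objective and all names are illustrative assumptions.

```python
import random

random.seed(1)
DIM, LATENT = 20, 2

# Fixed linear "decoder" from a 2-D latent code to a 20-D candidate solution.
# A learned representation (e.g. a trained autoencoder) would replace this.
decoder = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(LATENT)]

def decode(z):
    # Linear map: each output dimension is a weighted sum of the latent code.
    return [sum(z[i] * decoder[i][j] for i in range(LATENT)) for j in range(DIM)]

def sphere(x):
    # Toy minimization objective over the full 20-D solution; lower is better.
    return sum(v * v for v in x)

# Random search over the 2-D latent space: each sample still yields a full
# 20-D evaluation, but the search distribution lives in far fewer dimensions.
best_z = min(([random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(200)),
             key=lambda z: sphere(decode(z)))
print("best objective:", sphere(decode(best_z)))
```

Searching the latent space constrains candidates to the decoder's image, which is what makes high-dimensional problems tractable with few evaluations when the representation captures the solution manifold well.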
This list is automatically generated from the titles and abstracts of the papers in this site.