Towards Goal, Feasibility, and Diversity-Oriented Deep Generative Models
in Design
- URL: http://arxiv.org/abs/2206.07170v1
- Date: Tue, 14 Jun 2022 20:57:23 GMT
- Title: Towards Goal, Feasibility, and Diversity-Oriented Deep Generative Models
in Design
- Authors: Lyle Regenwetter, Faez Ahmed
- Abstract summary: We present the first Deep Generative Model that simultaneously optimizes for performance, feasibility, diversity, and target achievement.
Methods are tested on a challenging multi-objective bicycle frame design problem with skewed, multimodal data of different datatypes.
- Score: 4.091593765662773
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep Generative Machine Learning Models (DGMs) have been growing in
popularity across the design community thanks to their ability to learn and
mimic complex data distributions. DGMs are conventionally trained to minimize
statistical divergence between the distribution over generated data and
distribution over the dataset on which they are trained. While sufficient for
the task of generating "realistic" fake data, this objective is typically
insufficient for design synthesis tasks. Instead, design problems typically
call for adherence to design requirements, such as performance targets and
constraints. Advancing DGMs in engineering design requires new training
objectives which promote engineering design objectives. In this paper, we
present the first Deep Generative Model that simultaneously optimizes for
performance, feasibility, diversity, and target achievement. We benchmark
performance of the proposed method against several Deep Generative Models over
eight evaluation metrics that focus on feasibility, diversity, and satisfaction
of design performance targets. Methods are tested on a challenging
multi-objective bicycle frame design problem with skewed, multimodal data of
different datatypes. The proposed framework was found to outperform all Deep
Generative Models in six of eight metrics.
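The abstract names four training objectives — performance, feasibility, diversity, and target achievement — that a conventional divergence-minimizing loss ignores. A minimal sketch of how such objectives could be combined into one weighted scalar loss is below; the evaluators `simulate` and `is_feasible`, the distance-based diversity term, and all weights are hypothetical stand-ins for illustration, not the paper's actual method.

```python
import numpy as np

def combined_design_loss(designs, simulate, is_feasible, targets,
                         w_perf=1.0, w_feas=1.0, w_div=1.0, w_targ=1.0):
    """Hypothetical weighted loss over a batch of designs (shape (n, d)).

    `simulate` returns a performance value per design (higher is better);
    `is_feasible` returns 1.0 for feasible designs, 0.0 otherwise. Both
    stand in for problem-specific evaluators.
    """
    perf = simulate(designs)                               # shape (n,)
    performance_term = -perf.mean()                        # reward high performance
    feasibility_term = 1.0 - is_feasible(designs).mean()   # fraction infeasible
    # Diversity: negative mean pairwise Euclidean distance (more spread = lower loss).
    diffs = designs[:, None, :] - designs[None, :, :]
    diversity_term = -np.sqrt((diffs ** 2).sum(-1)).mean()
    # Target achievement: hinge penalty for performance below the minimum target.
    target_term = np.maximum(targets - perf, 0.0).mean()
    return (w_perf * performance_term + w_feas * feasibility_term
            + w_div * diversity_term + w_targ * target_term)
```

In practice each term would need to be differentiable with respect to the generator's parameters (e.g. via a learned surrogate for `simulate`), which is precisely the difficulty such frameworks must address.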
Related papers
- Forewarned is Forearmed: Leveraging LLMs for Data Synthesis through Failure-Inducing Exploration [90.41908331897639]
Large language models (LLMs) have significantly benefited from training on diverse, high-quality task-specific data.
We present a novel approach, ReverseGen, designed to automatically generate effective training samples.
arXiv Detail & Related papers (2024-10-22T06:43:28Z)
- Generative Design through Quality-Diversity Data Synthesis and Language Models [5.196236145367301]
Two fundamental challenges face generative models in engineering applications: the acquisition of high-performing, diverse datasets, and the adherence to precise constraints in generated designs.
We propose a novel approach combining optimization, constraint satisfaction, and language models to tackle these challenges in architectural design.
arXiv Detail & Related papers (2024-05-16T11:30:08Z)
- Diffusion Model for Data-Driven Black-Box Optimization [54.25693582870226]
We focus on diffusion models, a powerful generative AI technology, and investigate their potential for black-box optimization.
We study two practical types of labels: 1) noisy measurements of a real-valued reward function and 2) human preference based on pairwise comparisons.
Our proposed method reformulates the design optimization problem into a conditional sampling problem, which allows us to leverage the power of diffusion models.
arXiv Detail & Related papers (2024-03-20T00:41:12Z)
- Towards Efficient Task-Driven Model Reprogramming with Foundation Models [52.411508216448716]
Vision foundation models exhibit impressive power, benefiting from the extremely large model capacity and broad training data.
However, in practice, downstream scenarios may only support a small model due to the limited computational resources or efficiency considerations.
This brings a critical challenge for the real-world application of foundation models: one has to transfer the knowledge of a foundation model to the downstream task.
arXiv Detail & Related papers (2023-04-05T07:28:33Z)
- Beyond Statistical Similarity: Rethinking Metrics for Deep Generative Models in Engineering Design [10.531935694354448]
This paper doubles as a review and practical guide to evaluation metrics for deep generative models (DGMs) in engineering design.
We first summarize the well-accepted 'classic' evaluation metrics for deep generative models grounded in machine learning theory.
Next, we curate a set of design-specific metrics which can be used for evaluating deep generative models.
arXiv Detail & Related papers (2023-02-06T16:34:16Z)
- Design Target Achievement Index: A Differentiable Metric to Enhance Deep Generative Models in Multi-Objective Inverse Design [4.091593765662773]
Design Target Achievement Index (DTAI) is a differentiable, tunable metric that scores a design's ability to achieve designer-specified minimum performance targets.
We apply DTAI to a Performance-Augmented Diverse GAN (PaDGAN) and demonstrate superior generative performance compared to a set of baseline Deep Generative Models.
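The summary describes DTAI only as a differentiable, tunable score of whether a design meets designer-specified minimum performance targets. As an illustration only — the function name, `sharpness` parameter, and sigmoid form below are assumptions, not the paper's actual DTAI formula — such a differentiable target-achievement score could look like a smooth weighted indicator:

```python
import numpy as np

def soft_target_achievement(perf, targets, weights=None, sharpness=10.0):
    """Illustrative differentiable target-achievement score in [0, 1].

    A weighted mean of smooth sigmoid indicators, each approaching 1 as the
    corresponding performance value exceeds its minimum target. `sharpness`
    tunes how abruptly credit is granted near the target boundary.
    """
    perf = np.asarray(perf, dtype=float)
    targets = np.asarray(targets, dtype=float)
    if weights is None:
        weights = np.ones_like(targets)
    weights = np.asarray(weights, dtype=float) / np.sum(weights)
    # Smooth stand-in for the hard 0/1 test "performance >= target".
    indicators = 1.0 / (1.0 + np.exp(-sharpness * (perf - targets)))
    return float(np.sum(weights * indicators))
```

Because the score is smooth everywhere, it can be used directly as a training signal for a generative model, which is the property that distinguishes a metric like DTAI from a hard pass/fail constraint check.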
arXiv Detail & Related papers (2022-05-06T04:14:34Z)
- DST: Dynamic Substitute Training for Data-free Black-box Attack [79.61601742693713]
We propose a novel dynamic substitute training attack method to encourage the substitute model to learn better and faster from the target model.
We introduce a task-driven, graph-based structural information learning constraint to improve the quality of generated training data.
arXiv Detail & Related papers (2022-04-03T02:29:11Z)
- Uni-Perceiver: Pre-training Unified Architecture for Generic Perception for Zero-shot and Few-shot Tasks [73.63892022944198]
We present a generic perception architecture named Uni-Perceiver.
It processes a variety of modalities and tasks with unified modeling and shared parameters.
Results show that our pre-trained model without any tuning can achieve reasonable performance even on novel tasks.
arXiv Detail & Related papers (2021-12-02T18:59:50Z)
- Deep Generative Models in Engineering Design: A Review [1.933681537640272]
We present a review and analysis of Deep Generative Learning models in engineering design.
Recent DGMs have shown promising results in design applications like structural optimization, materials design, and shape synthesis.
arXiv Detail & Related papers (2021-10-21T02:50:10Z)
- Models, Pixels, and Rewards: Evaluating Design Trade-offs in Visual Model-Based Reinforcement Learning [109.74041512359476]
We study a number of design decisions for the predictive model in visual MBRL algorithms.
We find that a range of design decisions that are often considered crucial, such as the use of latent spaces, have little effect on task performance.
We show how this phenomenon is related to exploration and how some of the lower-scoring models on standard benchmarks will perform the same as the best-performing models when trained on the same training data.
arXiv Detail & Related papers (2020-12-08T18:03:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.