Goal-Directed Design Agents: Integrating Visual Imitation with One-Step
Lookahead Optimization for Generative Design
- URL: http://arxiv.org/abs/2110.03223v1
- Date: Thu, 7 Oct 2021 07:13:20 GMT
- Title: Goal-Directed Design Agents: Integrating Visual Imitation with One-Step
Lookahead Optimization for Generative Design
- Authors: Ayush Raina, Lucas Puentes, Jonathan Cagan, Christopher McComb
- Abstract summary: This note builds on DLAgents to develop goal-directed agents capable of enhancing learned strategies for sequentially generating designs.
Goal-directed DLAgents can employ human strategies learned from data along with optimizing an objective function.
This illustrates a design agent framework that can efficiently use feedback to not only enhance learned design strategies but also adapt to unseen design problems.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Engineering design problems often involve large state and action spaces along
with highly sparse rewards. Since an exhaustive search of those spaces is not
feasible, humans utilize relevant domain knowledge to condense the search
space. Previously, deep learning agents (DLAgents) were introduced to use
visual imitation learning to model design domain knowledge. This note builds on
DLAgents and integrates them with one-step lookahead search to develop
goal-directed agents capable of enhancing learned strategies for sequentially
generating designs. Goal-directed DLAgents can employ human strategies learned
from data along with optimizing an objective function. The visual imitation
network from DLAgents is composed of a convolutional encoder-decoder network,
acting as a rough planning step that is agnostic to feedback. Meanwhile, the
lookahead search identifies the fine-tuned design action guided by an
objective. These design agents are trained on an unconstrained truss design
problem that is modeled as a sequential, action-based configuration design
problem. The agents are then evaluated on two versions of the problem: the
original version used for training and an unseen constrained version with an
obstructed construction space. The goal-directed agents outperform the human
designers used to train the network as well as the previous objective-agnostic
versions of the agent in both scenarios. This illustrates a design agent
framework that can efficiently use feedback to not only enhance learned design
strategies but also adapt to unseen design problems.
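The selection loop the abstract describes (the imitation network acts as a feedback-agnostic rough planner; the one-step lookahead then picks the action that best improves the objective) can be illustrated with a minimal sketch. All names here (`propose_actions`, `apply_action`, `objective`) are illustrative placeholders, not the paper's actual code:

```python
# Hypothetical sketch of the goal-directed agent loop: an imitation model
# proposes candidate design actions, and a one-step lookahead simulates each
# candidate, scores the resulting state with the objective, and keeps the best.
# Placeholder names; not the authors' API.

def one_step_lookahead(state, propose_actions, apply_action, objective, k=5):
    """Pick the candidate action whose one-step outcome scores highest."""
    candidates = propose_actions(state, k)  # top-k actions from the imitation net
    scored = [(objective(apply_action(state, a)), a) for a in candidates]
    best_score, best_action = max(scored, key=lambda pair: pair[0])
    return best_action, best_score
```

Because the lookahead only re-ranks the imitation network's proposals, the same trained network can be redirected to an unseen objective (e.g. the constrained construction space) without retraining.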
Related papers
- G-Designer: Architecting Multi-agent Communication Topologies via Graph Neural Networks [14.024988515071431]
We introduce G-Designer, an adaptive, efficient, and robust solution for multi-agent deployment.
G-Designer dynamically designs task-aware, customized communication topologies.
arXiv Detail & Related papers (2024-10-15T17:01:21Z)
- AgentSquare: Automatic LLM Agent Search in Modular Design Space [16.659969168343082]
Large Language Models (LLMs) have led to a rapid growth of agentic systems capable of handling a wide range of complex tasks.
We introduce a new research problem: Modularized LLM Agent Search (MoLAS).
arXiv Detail & Related papers (2024-10-08T15:52:42Z)
- Gödel Agent: A Self-Referential Agent Framework for Recursive Self-Improvement [117.94654815220404]
Gödel Agent is a self-evolving framework inspired by the Gödel machine.
Gödel Agent can achieve continuous self-improvement, surpassing manually crafted agents in performance, efficiency, and generalizability.
arXiv Detail & Related papers (2024-10-06T10:49:40Z)
- Compositional Generative Inverse Design [69.22782875567547]
Inverse design, where we seek to design input variables in order to optimize an underlying objective function, is an important problem.
We show that by instead optimizing over the learned energy function captured by the diffusion model, we can avoid such adversarial examples.
In an N-body interaction task and a challenging 2D multi-airfoil design task, we demonstrate that by composing the learned diffusion model at test time, our method allows us to design initial states and boundary shapes.
arXiv Detail & Related papers (2024-01-24T01:33:39Z)
- Multi-Agent Reinforcement Learning for Microprocessor Design Space Exploration [71.95914457415624]
Microprocessor architects are increasingly resorting to domain-specific customization in the quest for high-performance and energy-efficiency.
We propose an alternative formulation that leverages Multi-Agent RL (MARL) to tackle this problem.
Our evaluation shows that the MARL formulation consistently outperforms single-agent RL baselines.
arXiv Detail & Related papers (2022-11-29T17:10:24Z)
- Learning to design without prior data: Discovering generalizable design strategies using deep learning and tree search [0.0]
Building an AI agent that can design on its own has been a goal since the 1980s.
Deep learning has shown the ability to learn from large-scale data, enabling significant advances in data-driven design.
This paper presents a framework to self-learn high-performing and generalizable problem-solving behavior in an arbitrary problem space.
arXiv Detail & Related papers (2022-11-28T05:00:58Z)
- Multi-Agent Embodied Visual Semantic Navigation with Scene Prior Knowledge [42.37872230561632]
In visual semantic navigation, the robot navigates to a target object with egocentric visual observations and the class label of the target is given.
Most of the existing models are only effective for single-agent navigation, and a single agent has low efficiency and poor fault tolerance when completing more complicated tasks.
We propose the multi-agent visual semantic navigation, in which multiple agents collaborate with others to find multiple target objects.
arXiv Detail & Related papers (2021-09-20T13:31:03Z)
- A Design Space Study for LISTA and Beyond [79.76740811464597]
In recent years, great success has been witnessed in building problem-specific deep networks from unrolling iterative algorithms.
This paper revisits the role of unrolling as a design approach for deep networks, asking to what extent the resulting specialized architectures are superior and whether better ones can be found.
Using LISTA for sparse recovery as a representative example, we conduct the first thorough design space study for the unrolled models.
arXiv Detail & Related papers (2021-04-08T23:01:52Z)
- Deep Policy Networks for NPC Behaviors that Adapt to Changing Design Parameters in Roguelike Games [137.86426963572214]
Turn-based strategy games such as Roguelikes present unique challenges to Deep Reinforcement Learning (DRL).
We propose two network architectures to better handle complex categorical state spaces and to mitigate the need for retraining forced by design decisions.
arXiv Detail & Related papers (2020-12-07T08:47:25Z)
- Improving Target-driven Visual Navigation with Attention on 3D Spatial Relationships [52.72020203771489]
We investigate target-driven visual navigation using deep reinforcement learning (DRL) in 3D indoor scenes.
Our proposed method combines visual features and 3D spatial representations to learn navigation policy.
Our experiments, performed in the AI2-THOR, show that our model outperforms the baselines in both SR and SPL metrics.
arXiv Detail & Related papers (2020-04-29T08:46:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.