Automatic Measures for Evaluating Generative Design Methods for
Architects
- URL: http://arxiv.org/abs/2303.11483v1
- Date: Mon, 20 Mar 2023 22:34:57 GMT
- Title: Automatic Measures for Evaluating Generative Design Methods for
Architects
- Authors: Eric Yeh, Briland Hitaj, Vidyasagar Sadhu, Anirban Roy, Takuma
Nakabayashi, Yoshito Tsuji
- Abstract summary: We describe the expectations architects have for design proposals from conceptual sketches.
We evaluate several image-to-image generative methods that may address these criteria.
- Score: 2.4752678938561634
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The recent explosion of high-quality image-to-image methods has prompted
interest in applying these methods to artistic and design tasks.
Of interest to architects is using these methods to generate design proposals
from conceptual sketches, usually hand-drawn sketches that are quickly
developed and can embody a design intent. More specifically, instantiating a
sketch into a visual that can be used to elicit client feedback is typically a
time-consuming task, and being able to speed up this iteration time is
important. While the body of work in generative methods has been impressive,
there has been a mismatch between the quality measures used to evaluate the
outputs of these systems and the actual expectations of architects. In
particular, most recent image-based works place an emphasis on realism of
generated images. While important, this is one of several criteria architects
look for. In this work, we describe the expectations architects have for design
proposals from conceptual sketches, and identify corresponding automated
metrics from the literature. We then evaluate several image-to-image generative
methods that may address these criteria and examine their performance across
these metrics. From these results, we identify certain challenges with
hand-drawn conceptual sketches and describe possible future avenues of
investigation to address them.
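As an illustrative aside (not taken from the paper), the "automated metrics from the literature" the abstract refers to follow a common shape: a function that compares a generated image against a reference and returns a scalar score. A minimal sketch, using PSNR as a stand-in for the richer realism and style measures the paper surveys; the function name and toy images are assumptions for illustration:

```python
# Illustrative sketch: PSNR (peak signal-to-noise ratio), one of the simplest
# automated image-quality measures, comparing a generated image to a reference.
import numpy as np

def psnr(reference: np.ndarray, generated: np.ndarray, max_val: float = 255.0) -> float:
    """PSNR between two same-shape images; higher means closer to the reference."""
    mse = np.mean((reference.astype(np.float64) - generated.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)

# Toy example: a reference "image" and a slightly noisy copy of it.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noisy = np.clip(ref.astype(np.int16) + rng.integers(-5, 6, size=ref.shape),
                0, 255).astype(np.uint8)
print(round(psnr(ref, noisy), 1))
```

Pixel-level scores like this reward exact reproduction, which is precisely the mismatch the paper highlights: architects also care about criteria such as design intent, which perceptual and realism metrics attempt to capture instead.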
Related papers
- Machine Apophenia: The Kaleidoscopic Generation of Architectural Images [11.525355831490828]
This study investigates the application of generative artificial intelligence in architectural design.
We present a novel methodology that combines multiple neural networks to create an unsupervised and unmoderated stream of unique architectural images.
arXiv Detail & Related papers (2024-07-12T11:11:19Z)
- CAD-Prompted Generative Models: A Pathway to Feasible and Novel Engineering Designs [4.806185947218336]
This paper introduces a method that improves the design feasibility by prompting the generation with feasible CAD images.
Results demonstrate that the CAD image prompting successfully helps text-to-image models like Stable Diffusion 2.1 create visibly more feasible design images.
arXiv Detail & Related papers (2024-07-11T17:07:32Z)
- A Survey on Quality Metrics for Text-to-Image Models [9.753473063305503]
We provide an overview of existing text-to-image quality metrics addressing their nuances and the need for alignment with human preferences.
We propose a new taxonomy for categorizing these metrics, which is grounded in the assumption that there are two main quality criteria, namely compositionality and generality.
We derive guidelines for practitioners conducting text-to-image evaluation, discuss open challenges of evaluation mechanisms, and surface limitations of current metrics.
arXiv Detail & Related papers (2024-03-18T14:24:20Z)
- HAIFIT: Human-Centered AI for Fashion Image Translation [6.034505799418777]
We introduce HAIFIT, a novel approach that transforms sketches into high-fidelity, lifelike clothing images.
Our method excels in preserving the distinctive style and intricate details essential for fashion design applications.
arXiv Detail & Related papers (2024-03-13T16:06:07Z)
- Unifying Image Processing as Visual Prompting Question Answering [62.84955983910612]
Image processing is a fundamental task in computer vision, which aims at enhancing image quality and extracting essential features for subsequent vision applications.
Traditionally, task-specific models are developed for individual tasks and designing such models requires distinct expertise.
We propose a universal model for general image processing that covers image restoration, image enhancement, and image feature extraction tasks.
arXiv Detail & Related papers (2023-10-16T15:32:57Z)
- Taming Encoder for Zero Fine-tuning Image Customization with Text-to-Image Diffusion Models [55.04969603431266]
This paper proposes a method for generating images of customized objects specified by users.
The method is based on a general framework that bypasses the lengthy optimization required by previous approaches.
We demonstrate through experiments that our proposed method is able to synthesize images with compelling output quality, appearance diversity, and object fidelity.
arXiv Detail & Related papers (2023-04-05T17:59:32Z)
- Sketch2Saliency: Learning to Detect Salient Objects from Human Drawings [99.9788496281408]
We study how sketches can be used as a weak label to detect salient objects present in an image.
To accomplish this, we introduce a photo-to-sketch generation model that aims to generate sequential sketch coordinates corresponding to a given visual photo.
Experiments support our hypothesis and show that our sketch-based saliency detection model achieves competitive performance compared to the state of the art.
arXiv Detail & Related papers (2023-03-20T23:46:46Z)
- Detecting Visual Design Principles in Art and Architecture through Deep Convolutional Neural Networks [0.0]
This research develops a neural network model that recognizes and classifies design principles across different domains.
The proposed model learns from the knowledge of myriads of original designs, by capturing the underlying shared patterns.
arXiv Detail & Related papers (2021-08-09T14:00:17Z)
- Cross-Modal Hierarchical Modelling for Fine-Grained Sketch Based Image Retrieval [147.24102408745247]
We study a further trait of sketches that has been overlooked to date, that is, they are hierarchical in terms of the levels of detail.
In this paper, we design a novel network that is capable of cultivating sketch-specific hierarchies and exploiting them to match sketch with photo at corresponding hierarchical levels.
arXiv Detail & Related papers (2020-07-29T20:50:25Z)
- SketchyCOCO: Image Generation from Freehand Scene Sketches [71.85577739612579]
We introduce the first method for automatic image generation from scene-level freehand sketches.
The key contribution is an attribute vector bridged Generative Adversarial Network called EdgeGAN.
We have built a large-scale composite dataset called SketchyCOCO to support and evaluate the solution.
arXiv Detail & Related papers (2020-03-05T14:54:10Z)
- Deep Self-Supervised Representation Learning for Free-Hand Sketch [51.101565480583304]
We tackle the problem of self-supervised representation learning for free-hand sketches.
Key for the success of our self-supervised learning paradigm lies with our sketch-specific designs.
We show that the proposed approach outperforms the state-of-the-art unsupervised representation learning methods.
arXiv Detail & Related papers (2020-02-03T16:28:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.