Detecting Visual Design Principles in Art and Architecture through Deep Convolutional Neural Networks
- URL: http://arxiv.org/abs/2108.04048v1
- Date: Mon, 9 Aug 2021 14:00:17 GMT
- Title: Detecting Visual Design Principles in Art and Architecture through Deep Convolutional Neural Networks
- Authors: Gozdenur Demir, Asli Cekmis, Vahit Bugra Yesilkaynak, Gozde Unal
- Abstract summary: This research develops a neural network model that recognizes and classifies design principles across different domains.
The proposed model learns from myriad original designs by capturing their underlying shared patterns.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Visual design is associated with the use of basic design elements and
principles. Designers in various disciplines apply them for aesthetic purposes,
relying on an intuitive and subjective process. Numerical analysis of design
visuals, and disclosure of the aesthetic value embedded in them, has therefore
been considered difficult. Emerging artificial intelligence technologies,
however, now make it possible. This research develops a neural network model
that recognizes and classifies design principles across different domains. The
domains include artwork produced since the late 20th century, professional
photos, and facade pictures of contemporary buildings. The data collection and
curation processes, including the production of a computationally based
synthetic dataset, are original. The proposed model learns from myriad original
designs by capturing their underlying shared patterns. It is expected to
support design processes by providing an objective aesthetic evaluation of
visual compositions.
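To make the idea concrete, here is a minimal, illustrative sketch of how an image classifier might score design-principle classes: a toy 2D convolution produces feature activations, and a linear head with softmax turns them into a probability per principle. The principle labels, kernels, and weights below are all hypothetical placeholders, not the paper's actual architecture or label set.

```python
import math

# Hypothetical design-principle labels (assumed for illustration;
# the paper's actual label set may differ).
PRINCIPLES = ["balance", "emphasis", "rhythm", "proportion", "unity"]

def conv2d(image, kernel):
    """Valid 2D convolution of a grayscale image (list of lists) with a kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify(image, kernels, weights):
    """Score each principle: mean ReLU activation per kernel, then a linear head."""
    feats = []
    for k in kernels:
        fmap = conv2d(image, k)
        vals = [max(v, 0.0) for row in fmap for v in row]
        feats.append(sum(vals) / len(vals))
    scores = [sum(w * f for w, f in zip(ws, feats)) for ws in weights]
    return dict(zip(PRINCIPLES, softmax(scores)))
```

In a real system the kernels and head weights would be learned from the labeled dataset rather than fixed by hand, and a deep backbone would replace the single convolution; this sketch only shows the convolve-pool-classify shape of such a model.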
Related papers
- KITTEN: A Knowledge-Intensive Evaluation of Image Generation on Visual Entities [93.74881034001312]
We conduct a systematic study on the fidelity of entities in text-to-image generation models.
We focus on their ability to generate a wide range of real-world visual entities, such as landmark buildings, aircraft, plants, and animals.
Our findings reveal that even the most advanced text-to-image models often fail to generate entities with accurate visual details.
arXiv Detail & Related papers (2024-10-15T17:50:37Z)
- Diffusion-Based Visual Art Creation: A Survey and New Perspectives [51.522935314070416]
This survey explores the emerging realm of diffusion-based visual art creation, examining its development from both artistic and technical perspectives.
Our findings reveal how artistic requirements are transformed into technical challenges and highlight the design and application of diffusion-based methods within visual art creation.
We aim to shed light on the mechanisms through which AI systems emulate and possibly, enhance human capacities in artistic perception and creativity.
arXiv Detail & Related papers (2024-08-22T04:49:50Z)
- Machine Apophenia: The Kaleidoscopic Generation of Architectural Images [11.525355831490828]
This study investigates the application of generative artificial intelligence in architectural design.
We present a novel methodology that combines multiple neural networks to create an unsupervised and unmoderated stream of unique architectural images.
arXiv Detail & Related papers (2024-07-12T11:11:19Z)
- Deep Ensemble Art Style Recognition [2.3369294168789203]
The large-scale digitization of artworks over recent decades has created the need to categorize, analyze, and manage vast amounts of data related to abstract concepts.
Recognition of various art features in artworks has gained attention in the deep learning community.
In this paper, we are concerned with the problem of art style recognition using deep networks.
arXiv Detail & Related papers (2024-05-19T21:26:11Z)
- I-Design: Personalized LLM Interior Designer [57.00412237555167]
I-Design is a personalized interior designer that allows users to generate and visualize their design goals through natural language communication.
I-Design starts with a team of large language model agents that engage in dialogues and logical reasoning with one another.
The final design is then constructed in 3D by retrieving and integrating assets from an existing object database.
arXiv Detail & Related papers (2024-04-03T16:17:53Z)
- Human Machine Co-Creation. A Complementary Cognitive Approach to Creative Character Design Process Using GANs [0.0]
Two neural networks compete to generate new visual content indistinguishable from the original dataset.
The proposed approach aims to inform the process of perceiving, knowing, and making.
The machine generated concepts are used as a launching platform for character designers to conceptualize new characters.
arXiv Detail & Related papers (2023-11-23T12:18:39Z)
- Impressions: Understanding Visual Semiotics and Aesthetic Impact [66.40617566253404]
We present Impressions, a novel dataset through which to investigate the semiotics of images.
We show that existing multimodal image captioning and conditional generation models struggle to simulate plausible human responses to images.
This dataset significantly improves their ability to model impressions and aesthetic evaluations of images through fine-tuning and few-shot adaptation.
arXiv Detail & Related papers (2023-10-27T04:30:18Z)
- Review of Large Vision Models and Visual Prompt Engineering [50.63394642549947]
This review aims to summarize the methods employed in the computer vision domain for large vision models and visual prompt engineering.
We present influential large models in the visual domain and a range of prompt engineering methods employed on these models.
arXiv Detail & Related papers (2023-07-03T08:48:49Z)
- Augmenting Character Designers Creativity Using Generative Adversarial Networks [0.0]
Generative Adversarial Networks (GANs) continue to attract the attention of researchers in different fields.
Most recent GANs focus on realism; however, generating hyper-realistic output is not a priority in some domains.
We present a comparison between different GAN architectures and their performance when trained from scratch on a new visual characters dataset.
We also explore alternative techniques, such as transfer learning and data augmentation, to overcome computational resource limitations.
arXiv Detail & Related papers (2023-05-28T10:52:03Z)
- Learning Aesthetic Layouts via Visual Guidance [7.992550355579791]
We explore computational approaches for visual guidance to aid in creating pleasing art and graphic design.
We collected a dataset of art masterpieces and labeled the visual fixations with state-of-the-art vision models.
We clustered the visual guidance templates of the art masterpieces with unsupervised learning.
We show that the aesthetic visual guidance principles can be learned and integrated into a high-dimensional model and can be queried by the features of graphic elements.
arXiv Detail & Related papers (2021-07-13T17:46:42Z)
- SketchEmbedNet: Learning Novel Concepts by Imitating Drawings [125.45799722437478]
We explore properties of image representations learned by training a model to produce sketches of images.
We show that this generative, class-agnostic model produces informative embeddings of images from novel examples, classes, and even novel datasets in a few-shot setting.
arXiv Detail & Related papers (2020-08-27T16:43:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.