At the Intersection of Deep Learning and Conceptual Art: The End of
Signature
- URL: http://arxiv.org/abs/2207.04312v1
- Date: Sat, 9 Jul 2022 17:58:01 GMT
- Title: At the Intersection of Deep Learning and Conceptual Art: The End of
Signature
- Authors: Divya Shanmugam, Katie Lewis, Jose Javier Gonzalez-Ortiz, Agnieszka
Kurant, John Guttag
- Abstract summary: The art was to reflect the fact that scientific discovery is often the result of many individual contributions, both acknowledged and unacknowledged.
Computer scientists developed generative models and a human-in-the-loop feedback process to work with the artist.
Large-scale steel, LED and neon light sculptures appear to sign two new buildings in Cambridge, MA.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: MIT wanted to commission a large scale artwork that would serve to
'illuminate a new campus gateway, inaugurate a space of exchange between MIT
and Cambridge, and inspire our students, faculty, visitors, and the surrounding
community to engage with art in new ways and to have art be part of their daily
lives.' Among other things, the art was to reflect the fact that scientific
discovery is often the result of many individual contributions, both
acknowledged and unacknowledged. In this work, a group of computer scientists
collaborated with a conceptual artist to produce a collective signature, or a
signature learned from contributions of an entire community. After collecting
signatures from two communities -- the university, and the surrounding city --
the computer scientists developed generative models and a human-in-the-loop
feedback process to work with the artist to create an original signature-like
structure representative of each community. These signatures are now
large-scale steel, LED and neon light sculptures that appear to sign two new
buildings in Cambridge, MA.
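The abstract describes the pipeline (generative models trained on collected signatures, refined by human-in-the-loop feedback with the artist) but names no architecture. The following is a minimal sketch under assumptions: a small convolutional VAE in PyTorch over rasterized signatures, plus one feedback round that re-centers the latent sampling distribution on samples the artist keeps. The SignatureVAE class, hyperparameters, and artist_keeps callback are all hypothetical illustrations, not the authors' implementation.

# Hedged sketch, not the authors' code: assumes a small convolutional VAE
# over 64x64 signature rasters; names and hyperparameters are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SignatureVAE(nn.Module):
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        self.latent_dim = latent_dim
        self.enc = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # -> 16x16
            nn.Flatten(),
        )
        self.to_mu = nn.Linear(64 * 16 * 16, latent_dim)
        self.to_logvar = nn.Linear(64 * 16 * 16, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # reconstruction term + KL divergence to the unit Gaussian prior
    bce = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld

def feedback_round(model, mean, std, artist_keeps, n=16):
    # One human-in-the-loop round: decode candidates, let the artist keep
    # some, then re-center and narrow the latent search around the kept ones.
    z = mean + std * torch.randn(n, model.latent_dim)
    with torch.no_grad():
        candidates = model.dec(z)
    kept = artist_keeps(candidates)  # indices chosen by the artist (hypothetical)
    if kept:
        mean, std = z[kept].mean(dim=0), 0.9 * std
    return mean, std

if __name__ == "__main__":
    model = SignatureVAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    signatures = torch.rand(8, 1, 64, 64)  # stand-in for scanned community signatures
    for _ in range(3):  # tiny illustrative training loop
        recon, mu, logvar = model(signatures)
        loss = vae_loss(recon, signatures, mu, logvar)
        opt.zero_grad(); loss.backward(); opt.step()
    mean, std = torch.zeros(model.latent_dim), torch.ones(model.latent_dim)
    # stand-in feedback policy: "keep" the first half of each candidate batch
    mean, std = feedback_round(model, mean, std, lambda c: list(range(len(c) // 2)))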
Related papers
- Compose Your Aesthetics: Empowering Text-to-Image Models with the Principles of Art [61.28133495240179]
We propose a novel task of aesthetics alignment, which seeks to align user-specified aesthetics with the text-to-image (T2I) generation output.
Inspired by how artworks provide an invaluable perspective from which to approach aesthetics, we codify visual aesthetics using the compositional framework artists employ.
We demonstrate that T2I diffusion models can effectively offer 10 compositional controls through user-specified Principles of Art (PoA) conditions.
arXiv Detail & Related papers (2025-03-15T06:58:09Z)
- Evaluation of Architectural Synthesis Using Generative AI [49.1574468325115]
This paper presents a comparative evaluation of two systems: GPT-4o and Claude 3.5, in the task of architectural 3D synthesis.
We conduct a case study on two buildings from Palladio's Four Books of Architecture (1965): Villa Rotonda and Palazzo Porto.
We assess the systems' abilities in (1) interpreting 2D and 3D representations of buildings from drawings, (2) encoding the buildings into a CAD software script, and (3) self-improving based on outputs.
arXiv Detail & Related papers (2025-03-04T18:39:28Z)
- CRAFT@Large: Building Community Through Co-Making [2.5569675122244475]
CRAFT@Large is an initiative launched by the MakerLAB at Cornell Tech to create an inclusive environment for the exchange of ideas through making.
We challenge the traditional definition of community outreach performed by academic makerspaces.
Existing academic makerspaces often perform community engagement by only offering hourly, one-time workshops or by having community members provide a problem that is then used by students as a project assignment.
arXiv Detail & Related papers (2024-10-30T17:26:32Z)
- Exploring the Potential of Large Language Models in Artistic Creation: Collaboration and Reflection on Creative Programming [10.57792673254363]
We compare two common collaboration approaches: invoking the entire program at once versus dividing it into multiple subtasks.
Our findings show that the two methods stimulate different reflections from artists.
Our work reveals the artistic potential of LLMs in creative coding.
arXiv Detail & Related papers (2024-02-15T07:00:06Z)
- CreativeSynth: Creative Blending and Synthesis of Visual Arts based on Multimodal Diffusion [74.44273919041912]
Large-scale text-to-image generative models have made impressive strides, showcasing their ability to synthesize a vast array of high-quality images.
However, adapting these models for artistic image editing presents two significant challenges.
We build CreativeSynth, an innovative unified framework based on a diffusion model with the ability to coordinate multimodal inputs.
arXiv Detail & Related papers (2024-01-25T10:42:09Z)
- Rediscovering Ranganathan: A Prismatic View of His Life through the Knowledge Graph Spectrum [0.0]
The present study puts forward a novel biographical knowledge graph (KG) on Prof. S. R. Ranganathan.
The KG was developed using a "facet-based methodology" at two levels: in the identification of the vital biographical aspects and the development of the ontological model.
arXiv Detail & Related papers (2023-11-23T04:29:18Z)
- BioJam Camp: toward justice through bioengineering and biodesign co-learning with youth [0.0]
BioJam is a political, artistic, and educational project in which Bay Area artists, scientists, and educators collaborate with youth and communities of color to address historical exclusion of their communities in STEM fields and reframe what science can be.
arXiv Detail & Related papers (2022-11-01T21:10:56Z)
- Pathway to Future Symbiotic Creativity [76.20798455931603]
We propose a classification of the creative system with a hierarchy of 5 classes, showing the pathway of creativity evolving from a mimic-human artist to a Machine artist in its own right.
In art creation, it is necessary for machines to understand humans' mental states, including desires, appreciation, and emotions; humans also need to understand machines' creative capabilities and limitations.
We propose a novel framework for building future Machine artists, which comes with the philosophy that a human-compatible AI system should be based on the "human-in-the-loop" principle.
arXiv Detail & Related papers (2022-08-18T15:12:02Z)
- Art Creation with Multi-Conditional StyleGANs [81.72047414190482]
A human artist needs a combination of unique skills, understanding, and genuine intention to create artworks that evoke deep feelings and emotions.
We introduce a multi-conditional Generative Adversarial Network (GAN) approach trained on large amounts of human paintings to synthesize realistic-looking paintings that emulate human art.
arXiv Detail & Related papers (2022-02-23T20:45:41Z)
- Emergent Graphical Conventions in a Visual Communication Game [80.79297387339614]
Humans communicate with graphical sketches in addition to symbolic languages.
We take the very first step to model and simulate such an evolution process via two neural agents playing a visual communication game.
We devise a novel reinforcement learning method such that agents are evolved jointly towards successful communication and abstract graphical conventions.
arXiv Detail & Related papers (2021-11-28T18:59:57Z)
- Creative Sketch Generation [48.16835161875747]
We introduce two datasets of creative sketches -- Creative Birds and Creative Creatures -- containing 10k sketches each along with part annotations.
We propose DoodlerGAN -- a part-based Generative Adversarial Network (GAN) -- to generate unseen compositions of novel part appearances.
Quantitative evaluations as well as human studies demonstrate that sketches generated by our approach are more creative and of higher quality than existing approaches.
arXiv Detail & Related papers (2020-11-19T18:57:00Z)
- Seeing the World in a Bag of Chips [73.561388215585]
We address the dual problems of novel view synthesis and environment reconstruction from hand-held RGBD sensors.
Our contributions include 1) modeling highly specular objects, 2) modeling inter-reflections and Fresnel effects, and 3) enabling surface light field reconstruction with the same input needed to reconstruct shape alone.
arXiv Detail & Related papers (2020-01-14T06:44:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.