Imagining a Future of Designing with AI: Dynamic Grounding, Constructive
Negotiation, and Sustainable Motivation
- URL: http://arxiv.org/abs/2402.07342v1
- Date: Mon, 12 Feb 2024 00:20:43 GMT
- Title: Imagining a Future of Designing with AI: Dynamic Grounding, Constructive
Negotiation, and Sustainable Motivation
- Authors: Priyan Vaithilingam, Ian Arawjo, Elena L. Glassman
- Abstract summary: We aim to isolate the new value large AI models can provide design compared to past technologies.
We arrive at three affordances that summarize latent qualities of natural language-enabled foundation models.
Our design process, terminology, and diagrams aim to contribute to future discussions about the relative affordances of AI technology.
- Score: 13.850610205757633
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We ideate a future design workflow that involves AI technology. Drawing from
activity and communication theory, we attempt to isolate the new value large AI
models can provide design compared to past technologies. We arrive at three
affordances -- dynamic grounding, constructive negotiation, and sustainable
motivation -- that summarize latent qualities of natural language-enabled
foundation models that, if explicitly designed for, can support the process of
design. Through design fiction, we then imagine a future interface as a
diegetic prototype, the story of Squirrel Game, that demonstrates each of our
three affordances in a realistic usage scenario. Our design process,
terminology, and diagrams aim to contribute to future discussions about the
relative affordances of AI technology with regard to collaborating with human
designers.
Related papers
- Empowering Clients: Transformation of Design Processes Due to Generative AI [1.4003044924094596]
The study reveals that AI can disrupt the ideation phase by enabling clients to engage in the design process through rapid visualization of their own ideas.
Our study shows that while AI can provide valuable feedback on designs, it may fail to generate such designs itself, inviting interesting connections to the foundations of computer science.
Our study also reveals uncertainty among architects about the interpretative sovereignty of architecture and about a loss of meaning and identity as AI increasingly takes over authorship in the design process.
arXiv Detail & Related papers (2024-11-22T16:48:15Z)
- Diffusion-Based Visual Art Creation: A Survey and New Perspectives [51.522935314070416]
This survey explores the emerging realm of diffusion-based visual art creation, examining its development from both artistic and technical perspectives.
Our findings reveal how artistic requirements are transformed into technical challenges and highlight the design and application of diffusion-based methods within visual art creation.
We aim to shed light on the mechanisms through which AI systems emulate and possibly, enhance human capacities in artistic perception and creativity.
arXiv Detail & Related papers (2024-08-22T04:49:50Z)
- Inspired by AI? A Novel Generative AI System To Assist Conceptual Automotive Design [6.001793288867721]
Design inspiration is crucial for establishing the direction of a design as well as evoking feelings and conveying meanings during the conceptual design process.
Many practicing designers use text-based searches on platforms like Pinterest to gather image ideas, followed by sketching on paper or using digital tools to develop concepts.
Emerging generative AI techniques, such as diffusion models, offer a promising avenue for streamlining these processes by swiftly generating design concepts from text and image inspiration inputs (a hedged sketch of such generation follows this entry).
arXiv Detail & Related papers (2024-06-06T17:04:14Z)
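The system described in the entry above is not reproduced here, but the class of technique it names is easy to illustrate. Below is a minimal sketch of text-and-image-conditioned concept generation using the diffusers library's Stable Diffusion img2img pipeline; the model ID, file names, prompt, and parameter values are illustrative assumptions, not details from the paper.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Load a pretrained text-to-image diffusion model (illustrative model ID; GPU assumed).
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# An inspiration image, e.g. a mood-board photo or rough sketch (hypothetical file).
inspiration = Image.open("inspiration_sketch.png").convert("RGB").resize((512, 512))

# Generate several design concepts conditioned on both the text brief and the image.
concepts = pipe(
    prompt="futuristic electric roadster, concept sketch, studio lighting",
    image=inspiration,
    strength=0.6,        # how far the output may depart from the inspiration image
    guidance_scale=7.5,  # how strongly the text prompt steers generation
    num_images_per_prompt=4,
).images

for i, img in enumerate(concepts):
    img.save(f"concept_{i}.png")
```

Raising `strength` trades fidelity to the inspiration image for novelty, mirroring the explore-then-refine loop designers already practice with mood boards.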
- iCONTRA: Toward Thematic Collection Design Via Interactive Concept Transfer [16.35842298296878]
We introduce iCONTRA, an interactive CONcept TRAnsfer system.
iCONTRA enables both experienced designers and novices to effortlessly explore creative design concepts.
We also propose a zero-shot image editing algorithm that eliminates the need to fine-tune models (a generic zero-shot editing sketch follows this entry).
arXiv Detail & Related papers (2024-03-13T17:48:39Z)
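iCONTRA's own zero-shot editing algorithm is not shown here. As a hedged illustration of what editing without fine-tuning can look like in practice, the sketch below uses the pretrained InstructPix2Pix pipeline from diffusers; the checkpoint, file names, and prompt are assumptions for illustration only.

```python
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from PIL import Image

# A pretrained instruction-following editing model; no task-specific fine-tuning.
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

source = Image.open("collection_item.png").convert("RGB")  # hypothetical input

edited = pipe(
    prompt="restyle this pattern with an autumn-leaves theme",
    image=source,
    num_inference_steps=20,
    image_guidance_scale=1.5,  # higher values stay closer to the source image
    guidance_scale=7.0,        # higher values follow the text instruction more
).images[0]

edited.save("collection_item_autumn.png")
```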
- Grasping AI: experiential exercises for designers [8.95562850825636]
We investigate techniques for exploring and reflecting on the interactional affordances, the unique relational possibilities, and the wider social implications of AI systems.
We find that exercises around metaphors and enactments make questions of training and learning, privacy and consent, autonomy and agency more tangible.
arXiv Detail & Related papers (2023-10-02T15:34:08Z)
- Next Steps for Human-Centered Generative AI: A Technical Perspective [107.74614586614224]
We propose next steps for Human-centered Generative AI (HGAI).
By identifying these next steps, we intend to draw interdisciplinary research teams to pursue a coherent set of emergent ideas in HGAI.
arXiv Detail & Related papers (2023-06-27T19:54:30Z)
- Pathway to Future Symbiotic Creativity [76.20798455931603]
We propose a classification of the creative system with a hierarchy of 5 classes, showing the pathway of creativity evolving from a mimic-human artist to a Machine artist in its own right.
In art creation, it is necessary for machines to understand humans' mental states, including desires, appreciation, and emotions; humans, in turn, need to understand machines' creative capabilities and limitations.
We propose a novel framework for building future Machine artists, which comes with the philosophy that a human-compatible AI system should be based on the "human-in-the-loop" principle.
arXiv Detail & Related papers (2022-08-18T15:12:02Z)
- TEMOS: Generating diverse human motions from textual descriptions [53.85978336198444]
We address the problem of generating diverse 3D human motions from textual descriptions.
We propose TEMOS, a text-conditioned generative model leveraging variational autoencoder (VAE) training with human motion data.
We show that the TEMOS framework can produce both skeleton-based animations, as in prior work, and more expressive SMPL body motions (a minimal text-conditioned VAE sketch follows this entry).
arXiv Detail & Related papers (2022-04-25T14:53:06Z)
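TEMOS itself is transformer-based; the sketch below is deliberately simpler and shows only the general shape of a text-conditioned motion VAE: encode a motion sequence into a Gaussian latent, then decode conditioned on a text embedding. All dimensions, the GRU backbone, and the fusion-by-addition choice are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextConditionedMotionVAE(nn.Module):
    """Sketch of a text-conditioned motion VAE (hypothetical dimensions)."""

    def __init__(self, motion_dim=135, text_dim=768, latent_dim=256):
        super().__init__()
        self.motion_enc = nn.GRU(motion_dim, latent_dim, batch_first=True)
        self.to_mu = nn.Linear(latent_dim, latent_dim)
        self.to_logvar = nn.Linear(latent_dim, latent_dim)
        self.text_proj = nn.Linear(text_dim, latent_dim)
        self.decoder = nn.GRU(latent_dim, latent_dim, batch_first=True)
        self.to_motion = nn.Linear(latent_dim, motion_dim)

    def forward(self, motion, text_emb):
        # motion: (batch, frames, motion_dim); text_emb: (batch, text_dim)
        _, h = self.motion_enc(motion)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        cond = z + self.text_proj(text_emb)   # fuse latent and text (one simple choice)
        steps = cond.unsqueeze(1).repeat(1, motion.size(1), 1)
        out, _ = self.decoder(steps)
        return self.to_motion(out), mu, logvar

def vae_loss(recon, motion, mu, logvar, beta=1e-4):
    rec = F.mse_loss(recon, motion)  # reconstruction term
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # KL term
    return rec + beta * kld
```

At generation time one samples z from the prior and decodes with only the text embedding, which is how a single description can yield diverse motions.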
- WenLan 2.0: Make AI Imagine via a Multimodal Foundation Model [74.4875156387271]
We develop a novel foundation model pre-trained on huge multimodal (visual and textual) data; a sketch of the cross-modal contrastive objective such models typically build on follows this entry.
We show that state-of-the-art results can be obtained on a wide range of downstream tasks.
arXiv Detail & Related papers (2021-10-27T12:25:21Z)
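WenLan 2.0's exact architecture and objective are not restated here; the snippet below is the generic symmetric image-text contrastive (InfoNCE) loss that large cross-modal foundation models of this kind are typically pre-trained with, offered only as an assumed stand-in.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired image/text embeddings."""
    img = F.normalize(image_emb, dim=-1)
    txt = F.normalize(text_emb, dim=-1)
    logits = img @ txt.t() / temperature  # (B, B) similarity matrix
    targets = torch.arange(len(logits), device=logits.device)  # diagonal = true pairs
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```

Minimizing this loss pulls paired image and text embeddings together in a shared space, which is what lets such models transfer to the wide range of downstream tasks the entry mentions.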
- Human in the Loop for Machine Creativity [0.0]
We conceptualize existing and future human-in-the-loop (HITL) approaches for creative applications.
We examine and speculate on long term implications for models, interfaces, and machine creativity.
We envision multimodal HITL processes, where texts, visuals, sounds, and other information are coupled together, with automated analysis of humans and environments.
arXiv Detail & Related papers (2021-10-07T15:42:18Z)
- Future Urban Scenes Generation Through Vehicles Synthesis [90.1731992199415]
We propose a deep learning pipeline to predict the visual future appearance of an urban scene.
We follow a two-stage approach, where interpretable information is included in the loop and each actor is modelled independently (a skeleton of such a pipeline follows this entry).
We show the superiority of this approach over traditional end-to-end scene-generation methods on CityFlow.
arXiv Detail & Related papers (2020-07-01T08:40:16Z)
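The two-stage, per-actor structure described in the entry above can be sketched as a pipeline skeleton. Everything below (keypoint counts, layer sizes, module names) is hypothetical; it only shows the control flow of first predicting interpretable keypoints per vehicle and then rendering appearance.

```python
import torch
import torch.nn as nn

class MotionStage(nn.Module):
    """Stage 1 (hypothetical shapes): past keypoints -> future keypoints,
    an interpretable intermediate the pipeline can expose and inspect."""

    def __init__(self, keypoints=8, past=4, future=4):
        super().__init__()
        self.future, self.k = future, keypoints
        self.net = nn.Sequential(
            nn.Linear(past * keypoints * 2, 128),
            nn.ReLU(),
            nn.Linear(128, future * keypoints * 2),
        )

    def forward(self, past_kp):  # (batch, past, keypoints, 2)
        out = self.net(past_kp.flatten(1))
        return out.view(-1, self.future, self.k, 2)

class AppearanceStage(nn.Module):
    """Stage 2 (hypothetical): render an actor crop at predicted keypoints;
    a tiny conv net stands in for the real novel-view synthesis model."""

    def __init__(self, keypoints=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + keypoints, 32, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, crop, kp_heatmaps):  # (B, 3, H, W), (B, keypoints, H, W)
        return self.net(torch.cat([crop, kp_heatmaps], dim=1))

# Toy usage with random tensors; each actor is modelled independently,
# then composited back onto the background scene.
motion, appearance = MotionStage(), AppearanceStage()
past_kp = torch.randn(1, 4, 8, 2)         # one vehicle's past keypoints
crop = torch.randn(1, 3, 64, 64)          # its image crop
heatmaps = torch.randn(1, 8, 64, 64)      # rasterized predicted keypoints
future_kp = motion(past_kp)               # (1, 4, 8, 2)
future_crop = appearance(crop, heatmaps)  # (1, 3, 64, 64)
```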
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.