Imagining a Future of Designing with AI: Dynamic Grounding, Constructive
Negotiation, and Sustainable Motivation
- URL: http://arxiv.org/abs/2402.07342v1
- Date: Mon, 12 Feb 2024 00:20:43 GMT
- Title: Imagining a Future of Designing with AI: Dynamic Grounding, Constructive
Negotiation, and Sustainable Motivation
- Authors: Priyan Vaithilingam, Ian Arawjo, Elena L. Glassman
- Abstract summary: We aim to isolate the new value large AI models can provide design compared to past technologies.
We arrive at three affordances that summarize latent qualities of natural language-enabled foundation models.
Our design process, terminology, and diagrams aim to contribute to future discussions about the relative affordances of AI technology.
- Score: 13.850610205757633
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We ideate a future design workflow that involves AI technology. Drawing from
activity and communication theory, we attempt to isolate the new value large AI
models can provide design compared to past technologies. We arrive at three
affordances -- dynamic grounding, constructive negotiation, and sustainable
motivation -- that summarize latent qualities of natural language-enabled
foundation models that, if explicitly designed for, can support the process of
design. Through design fiction, we then imagine a future interface as a
diegetic prototype, the story of Squirrel Game, that demonstrates each of our
three affordances in a realistic usage scenario. Our design process,
terminology, and diagrams aim to contribute to future discussions about the
relative affordances of AI technology with regard to collaborating with human
designers.
Related papers
- Inspired by AI? A Novel Generative AI System To Assist Conceptual Automotive Design [6.001793288867721]
Design inspiration is crucial for establishing the direction of a design as well as evoking feelings and conveying meanings during the conceptual design process.
Many practicing designers use text-based searches on platforms like Pinterest to gather image ideas, then sketch on paper or use digital tools to develop concepts.
Emerging generative AI techniques, such as diffusion models, offer a promising avenue to streamline these processes by swiftly generating design concepts based on text and image inspiration inputs.
arXiv Detail & Related papers (2024-06-06T17:04:14Z)
- Creation of Novel Soft Robot Designs using Generative AI [0.3584072049481527]
We explore the use of generative AI to create 3D models of soft actuators.
In this paper, we create a dataset of over 70 text-shape pairings of soft pneumatic robot actuator designs.
By employing transfer learning and data augmentation techniques, we significantly improve the performance of the diffusion model.
arXiv Detail & Related papers (2024-05-03T02:55:27Z)
- iCONTRA: Toward Thematic Collection Design Via Interactive Concept Transfer [16.35842298296878]
We introduce iCONTRA, an interactive CONcept TRAnsfer system.
iCONTRA enables both experienced designers and novices to effortlessly explore creative design concepts.
We also propose a zero-shot image editing algorithm, eliminating the need for fine-tuning models.
arXiv Detail & Related papers (2024-03-13T17:48:39Z)
- Grasping AI: experiential exercises for designers [8.95562850825636]
We investigate techniques for exploring and reflecting on the interactional affordances, the unique relational possibilities, and the wider social implications of AI systems.
We find that exercises around metaphors and enactments make questions of training and learning, privacy and consent, autonomy and agency more tangible.
arXiv Detail & Related papers (2023-10-02T15:34:08Z)
- Beyond Reality: The Pivotal Role of Generative AI in the Metaverse [98.1561456565877]
This paper offers a comprehensive exploration of how generative AI technologies are shaping the Metaverse.
We delve into the applications of text generation models like ChatGPT and GPT-3, which are enhancing conversational interfaces with AI-generated characters.
We also examine the potential of 3D model generation technologies like Point-E and Lumirithmic in creating realistic virtual objects.
arXiv Detail & Related papers (2023-07-28T05:44:20Z)
- Next Steps for Human-Centered Generative AI: A Technical Perspective [107.74614586614224]
We propose next steps for Human-centered Generative AI (HGAI).
By identifying these next-steps, we intend to draw interdisciplinary research teams to pursue a coherent set of emergent ideas in HGAI.
arXiv Detail & Related papers (2023-06-27T19:54:30Z)
- Pathway to Future Symbiotic Creativity [76.20798455931603]
We propose a classification of the creative system with a hierarchy of 5 classes, showing the pathway of creativity evolving from a mimic-human artist to a Machine artist in its own right.
In art creation, machines need to understand humans' mental states, including desires, appreciation, and emotions; humans, in turn, need to understand machines' creative capabilities and limitations.
We propose a novel framework for building future Machine artists, which comes with the philosophy that a human-compatible AI system should be based on the "human-in-the-loop" principle.
arXiv Detail & Related papers (2022-08-18T15:12:02Z)
- TEMOS: Generating diverse human motions from textual descriptions [53.85978336198444]
We address the problem of generating diverse 3D human motions from textual descriptions.
We propose TEMOS, a text-conditioned generative model leveraging variational autoencoder (VAE) training with human motion data.
We show that the TEMOS framework can produce skeleton-based animations, as in prior work, as well as more expressive SMPL body motions.
arXiv Detail & Related papers (2022-04-25T14:53:06Z)
- WenLan 2.0: Make AI Imagine via a Multimodal Foundation Model [74.4875156387271]
We develop a novel foundation model pre-trained with huge multimodal (visual and textual) data.
We show that state-of-the-art results can be obtained on a wide range of downstream tasks.
arXiv Detail & Related papers (2021-10-27T12:25:21Z)
- Human in the Loop for Machine Creativity [0.0]
We conceptualize existing and future human-in-the-loop (HITL) approaches for creative applications.
We examine and speculate on long term implications for models, interfaces, and machine creativity.
We envision multimodal HITL processes, where texts, visuals, sounds, and other information are coupled together, with automated analysis of humans and environments.
arXiv Detail & Related papers (2021-10-07T15:42:18Z)
- Future Urban Scenes Generation Through Vehicles Synthesis [90.1731992199415]
We propose a deep learning pipeline to predict the visual future appearance of an urban scene.
We follow a two-stage approach, in which interpretable information is included in the loop and each actor is modelled independently.
We show the superiority of this approach over traditional end-to-end scene-generation methods on CityFlow.
arXiv Detail & Related papers (2020-07-01T08:40:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.