Exploring Crowd Co-creation Scenarios for Sketches
- URL: http://arxiv.org/abs/2005.07328v2
- Date: Fri, 22 May 2020 02:12:21 GMT
- Title: Exploring Crowd Co-creation Scenarios for Sketches
- Authors: Devi Parikh and C. Lawrence Zitnick
- Abstract summary: We study several human-only collaborative co-creation scenarios.
The goal in each scenario is to create a digital sketch using a simple web interface.
We find that settings in which multiple humans iteratively add strokes and vote on the best additions result in the sketches with highest perceived creativity.
- Score: 49.578304437046384
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a first step towards studying the ability of human crowds and machines to
effectively co-create, we explore several human-only collaborative co-creation
scenarios. The goal in each scenario is to create a digital sketch using a
simple web interface. We find that settings in which multiple humans
iteratively add strokes and vote on the best additions result in the sketches
with highest perceived creativity (value + novelty). Lack of collaboration
leads to a higher variance in quality and lower novelty or surprise.
Collaboration without voting leads to high novelty but low quality.
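To make the winning protocol concrete, here is a minimal Python simulation of the iterate-and-vote loop described above: in each round several contributors propose a candidate stroke, the group votes, and only the top-voted stroke is appended to the shared sketch. The stroke format, proposal, and voting functions are invented stand-ins for the paper's web interface, not its actual implementation.

```python
import random

def propose_stroke(rng):
    """A candidate stroke, here just a short polyline of (x, y) points."""
    return [(rng.random(), rng.random()) for _ in range(4)]

def vote(candidates, rng):
    """Return the index of the winning candidate; votes are random stand-ins."""
    tallies = [rng.random() for _ in candidates]
    return max(range(len(candidates)), key=tallies.__getitem__)

def co_create(n_rounds=10, n_contributors=5, seed=0):
    """Simulate the iterate-and-vote scenario: one accepted stroke per round."""
    rng = random.Random(seed)
    sketch = []  # the shared canvas: a growing list of accepted strokes
    for _ in range(n_rounds):
        candidates = [propose_stroke(rng) for _ in range(n_contributors)]
        sketch.append(candidates[vote(candidates, rng)])
    return sketch

if __name__ == "__main__":
    print(f"final sketch has {len(co_create())} strokes")
```

The voting step is what distinguishes this scenario from free-for-all collaboration: each round keeps exactly one stroke, which per the abstract trades some novelty for substantially higher quality.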
Related papers
- CoMix: A Comprehensive Benchmark for Multi-Task Comic Understanding [14.22900011952181]
We introduce a novel benchmark, CoMix, designed to evaluate the multi-task capabilities of models in comic analysis.
Our benchmark comprises three existing datasets with expanded annotations to support multi-task evaluation.
To mitigate the over-representation of manga-style data, we have incorporated a new dataset of carefully selected American comic-style books.
arXiv Detail & Related papers (2024-07-04T00:07:50Z)
- UniHuman: A Unified Model for Editing Human Images in the Wild [49.896715833075106]
We propose UniHuman, a unified model that addresses multiple facets of human image editing in real-world settings.
To enhance the model's generation quality and generalization capacity, we leverage guidance from human visual encoders.
In user studies, UniHuman is preferred by users in an average of 77% of cases.
arXiv Detail & Related papers (2023-12-22T05:00:30Z)
- Beyond Domain Gap: Exploiting Subjectivity in Sketch-Based Person Retrieval [40.257842079152255]
Person re-identification (re-ID) typically relies on densely distributed cameras; when the person of interest is not captured on camera, retrieval must instead rely on a sketch drawn from witness descriptions.
Previous research defines this case as sketch re-identification (Sketch re-ID).
We model and investigate it by posing a new dataset with multi-witness descriptions.
It contains 4,763 sketches and 32,668 photos, making it the largest Sketch re-ID dataset.
arXiv Detail & Related papers (2023-09-15T12:59:01Z)
- SketchDreamer: Interactive Text-Augmented Creative Sketch Ideation [111.2195741547517]
We present a method to generate controlled sketches using a text-conditioned diffusion model trained on pixel representations of images.
Our objective is to empower non-professional users to create sketches and, through a series of optimisation processes, transform a narrative into a storyboard.
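As a hedged illustration of the general recipe behind text-guided vector sketch generation, the toy below optimises stroke control points against a frozen image objective. `prior_loss` is an explicit placeholder: a real system would rasterise the strokes differentiably and score them with a pretrained text-conditioned diffusion model, which is beyond a self-contained example.

```python
import torch

def rasterize(strokes, size=32):
    """Crude differentiable 'rasteriser': soft dots at each control point."""
    ys, xs = torch.meshgrid(
        torch.linspace(0, 1, size), torch.linspace(0, 1, size), indexing="ij"
    )
    grid = torch.stack([xs, ys], dim=-1)            # (H, W, 2)
    pts = strokes.reshape(-1, 2)                    # all control points
    d2 = ((grid[None] - pts[:, None, None]) ** 2).sum(-1)
    return torch.exp(-d2 / 1e-3).sum(0).clamp(max=1.0)

def prior_loss(image):
    # Placeholder objective that pulls ink toward the canvas centre; a real
    # system would use a pretrained text-conditioned diffusion prior here.
    h = image.shape[0]
    target = torch.zeros_like(image)
    target[h // 4 : 3 * h // 4, h // 4 : 3 * h // 4] = 1.0
    return ((image - target) ** 2).mean()

strokes = torch.rand(8, 4, 2, requires_grad=True)   # 8 strokes, 4 points each
opt = torch.optim.Adam([strokes], lr=0.05)
for _ in range(100):                                # gradient descent on geometry
    opt.zero_grad()
    loss = prior_loss(rasterize(strokes))
    loss.backward()
    opt.step()
print(f"final loss: {loss.item():.4f}")
```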
arXiv Detail & Related papers (2023-08-27T19:44:44Z)
- Compositional 3D Human-Object Neural Animation [93.38239238988719]
Human-object interactions (HOIs) are crucial for human-centric scene understanding applications such as human-centric visual generation, AR/VR, and robotics.
In this paper, we address this challenge in HOI animation from a compositional perspective.
We adopt neural human-object deformation to model and render HOI dynamics based on implicit neural representations.
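As a hypothetical sketch of composing implicit neural representations for an HOI scene, the example below queries two small occupancy MLPs (one for the human, one for the object) at shared 3D points and merges them with a per-point max; the dimensions, latent codes, and composition rule are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ImplicitField(nn.Module):
    """MLP mapping a 3D point plus a pose/interaction code to occupancy."""
    def __init__(self, code_dim=32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + code_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid(),  # occupancy in [0, 1]
        )

    def forward(self, points, code):
        code = code.expand(points.shape[0], -1)  # broadcast code to each point
        return self.mlp(torch.cat([points, code], dim=-1)).squeeze(-1)

human, obj = ImplicitField(), ImplicitField()
pts = torch.rand(1024, 3)                        # query points in scene space
pose, grasp = torch.randn(1, 32), torch.randn(1, 32)
occ = torch.maximum(human(pts, pose), obj(pts, grasp))  # union of the two fields
print(occ.shape)  # torch.Size([1024])
```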
arXiv Detail & Related papers (2023-04-27T10:04:56Z)
- Novel View Synthesis of Humans using Differentiable Rendering [50.57718384229912]
We present a new approach for synthesizing novel views of people in new poses.
Our synthesis makes use of diffuse Gaussian primitives that represent the underlying skeletal structure of a human.
Rendering these primitives results in a high-dimensional latent image, which is then transformed into an RGB image by a decoder network.
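The following is a minimal, hypothetical sketch of that pipeline in 2D: splat diffuse Gaussian primitives into a multi-channel latent image, then decode it to RGB with a small convolutional network. The channel count, the flattening to 2D, and the decoder layers are illustrative assumptions.

```python
import torch
import torch.nn as nn

def render_gaussians(centers, sigmas, features, size=64):
    """Splat N Gaussians (2D centers in [0, 1], per-Gaussian feature vectors)
    into a (C, size, size) latent image by summing weighted kernels."""
    ys, xs = torch.meshgrid(
        torch.linspace(0, 1, size), torch.linspace(0, 1, size), indexing="ij"
    )
    grid = torch.stack([xs, ys], dim=-1)                       # (H, W, 2)
    d2 = ((grid[None] - centers[:, None, None]) ** 2).sum(-1)  # (N, H, W)
    weights = torch.exp(-d2 / (2 * sigmas[:, None, None] ** 2))
    return torch.einsum("nhw,nc->chw", weights, features)      # (C, H, W)

decoder = nn.Sequential(                  # latent image -> RGB image
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
)

n = 24  # e.g. one primitive per joint or bone segment of the skeleton
latent = render_gaussians(
    centers=torch.rand(n, 2),
    sigmas=torch.full((n,), 0.05),
    features=torch.randn(n, 16),
)
rgb = decoder(latent.unsqueeze(0))        # (1, 3, 64, 64)
print(rgb.shape)
```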
arXiv Detail & Related papers (2023-03-28T10:48:33Z)
- Creative Sketch Generation [48.16835161875747]
We introduce two datasets of creative sketches -- Creative Birds and Creative Creatures -- containing 10k sketches each along with part annotations.
We propose DoodlerGAN -- a part-based Generative Adversarial Network (GAN) -- to generate unseen compositions of novel part appearances.
Quantitative evaluations as well as human studies demonstrate that sketches generated by our approach are more creative and of higher quality than existing approaches.
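In the spirit of the part-based generation described above, the toy loop below adds one body part per step, conditioning each step on a raster of the partially completed sketch. The single shared generator, fixed part order, and tiny network are placeholders, not DoodlerGAN's actual architecture.

```python
import torch
import torch.nn as nn

PARTS = ["body", "head", "beak", "eye", "legs", "wings", "tail"]

class PartGenerator(nn.Module):
    """Maps a noise vector plus the current canvas to a new part mask."""
    def __init__(self, z_dim=64, size=32):
        super().__init__()
        self.size = size
        self.net = nn.Sequential(
            nn.Linear(z_dim + size * size, 256), nn.ReLU(),
            nn.Linear(256, size * size), nn.Sigmoid(),
        )

    def forward(self, z, canvas):
        x = torch.cat([z, canvas.flatten(1)], dim=1)  # condition on the canvas
        return self.net(x).view(-1, self.size, self.size)

gen = PartGenerator()
canvas = torch.zeros(1, 32, 32)        # empty sketch raster
for part in PARTS:                     # one generation step per part
    z = torch.randn(1, 64)
    canvas = torch.clamp(canvas + gen(z, canvas), 0, 1)
    print(f"added {part}")
```

Conditioning each part on the partial sketch is what lets such a generator place novel parts coherently instead of sampling the whole drawing at once.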
arXiv Detail & Related papers (2020-11-19T18:57:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.