Copyright in Generative Deep Learning
- URL: http://arxiv.org/abs/2105.09266v1
- Date: Wed, 19 May 2021 17:22:47 GMT
- Title: Copyright in Generative Deep Learning
- Authors: Giorgio Franceschelli and Mirco Musolesi
- Abstract summary: We consider a set of key questions in the area of generative deep learning for the arts.
We try to answer these questions in light of the law in force in both the US and the EU.
- Score: 3.689181056530984
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine-generated artworks are now part of the contemporary art scene: they
are attracting significant investments and they are presented in exhibitions
together with those created by human artists. These artworks are mainly based
on generative deep learning techniques. Given their success, several legal
problems arise when working with these techniques.
In this article we consider a set of key questions in the area of generative
deep learning for the arts. Is it possible to use copyrighted works as a training
set for generative models? How can copies of those works be legally stored in
order to perform the training process? And who (if anyone) will own the copyright
on the generated data? We try to answer these questions in light of the law in
force in both the US and the EU, as well as possible future alternatives, with the
aim of defining a set of guidelines for artists and developers working on deep
learning generated art.
Related papers
- ArtistAuditor: Auditing Artist Style Pirate in Text-to-Image Generation Models [61.55816738318699]
We propose a novel method for data-use auditing in text-to-image generation models.
ArtistAuditor employs a style extractor to obtain the multi-granularity style representations and treats artworks as samplings of an artist's style.
The experimental results on six combinations of models and datasets show that ArtistAuditor can achieve high AUC values.
arXiv Detail & Related papers (2025-04-17T16:15:38Z) - IntroStyle: Training-Free Introspective Style Attribution using Diffusion Features [89.95303251220734]
We present a training-free framework to solve the style attribution problem, using the features produced by a diffusion model alone.
This is denoted as introspective style attribution (IntroStyle) and demonstrates superior performance to state-of-the-art models for style retrieval.
We also introduce a synthetic dataset of Style Hacks (SHacks) to isolate artistic style and evaluate fine-grained style attribution performance.
arXiv Detail & Related papers (2024-12-19T01:21:23Z) - Art-Free Generative Models: Art Creation Without Graphic Art Knowledge [50.60063523054282]
We propose a text-to-image generation model trained without access to art-related content.
We then introduce a simple yet effective method to learn an art adapter using only a few examples of selected artistic styles.
arXiv Detail & Related papers (2024-11-29T18:59:01Z) - How Many Van Goghs Does It Take to Van Gogh? Finding the Imitation Threshold [50.33428591760124]
We study the relationship between a concept's frequency in the training dataset and the ability of a model to imitate it.
We propose an efficient approach that estimates the imitation threshold without incurring the colossal cost of training multiple models from scratch.
arXiv Detail & Related papers (2024-10-19T06:28:14Z) - At the edge of a generative cultural precipice [1.688134675717698]
Since NFTs and large generative models became publicly available, artists have seen their jobs threatened and their work appropriated.
Generative models are trained on human-produced content to better guide the styles and themes they can produce.
Inspired by recent work on generative models, we tell a cautionary tale and ask what will happen to the visual arts if generative models continue down the path of being trained solely on generated content.
arXiv Detail & Related papers (2024-04-30T23:26:24Z) - ©Plug-in Authorization for Human Content Copyright Protection in Text-to-Image Model [71.47762442337948]
State-of-the-art models create high-quality content without crediting original creators.
We propose the copyright Plug-in Authorization framework, introducing three operations: addition, extraction, and combination.
Extraction allows creators to reclaim copyright from infringing models, and combination enables users to merge different copyright plug-ins.
arXiv Detail & Related papers (2024-04-18T07:48:00Z) - Rethinking Artistic Copyright Infringements in the Era of Text-to-Image Generative Models [47.19481598385283]
ArtSavant is a tool to determine the unique style of an artist by comparing it to a reference dataset of works from WikiArt.
We then perform a large-scale empirical study to provide quantitative insight on the prevalence of artistic style copying across 3 popular text-to-image generative models.
arXiv Detail & Related papers (2024-04-11T17:59:43Z) - Copyright Protection in Generative AI: A Technical Perspective [58.84343394349887]
Generative AI has witnessed rapid advancement in recent years, expanding its capabilities to create synthesized content such as text, images, audio, and code.
The high fidelity and authenticity of contents generated by these Deep Generative Models (DGMs) have sparked significant copyright concerns.
This work delves into this issue by providing a comprehensive overview of copyright protection from a technical perspective.
arXiv Detail & Related papers (2024-02-04T04:00:33Z) - Foregrounding Artist Opinions: A Survey Study on Transparency, Ownership, and Fairness in AI Generative Art [0.0]
Generative AI tools are used to create art-like outputs and sometimes aid in the creative process.
We surveyed 459 artists to investigate the tension between artists' views on the potential utility and harm of Generative AI art.
arXiv Detail & Related papers (2024-01-27T20:22:46Z) - A Dataset and Benchmark for Copyright Infringement Unlearning from Text-to-Image Diffusion Models [52.49582606341111]
Copyright law confers upon creators the exclusive rights to reproduce, distribute, and monetize their creative works.
Recent progress in text-to-image generation has introduced formidable challenges to copyright enforcement.
We introduce a novel pipeline that harmonizes CLIP, ChatGPT, and diffusion models to curate a dataset.
arXiv Detail & Related papers (2024-01-04T11:14:01Z) - Studying Artist Sentiments around AI-generated Artwork [25.02527831382343]
We interviewed 7 artists and analyzed public posts from artists on the social media platforms Reddit, Twitter, and ArtStation.
We report artists' main concerns and hopes around AI-generated artwork, informing a way forward for inclusive development of these tools.
arXiv Detail & Related papers (2023-11-22T22:44:02Z) - Measuring the Success of Diffusion Models at Imitating Human Artists [7.007492782620398]
We show how to measure a model's ability to imitate specific artists.
We use Contrastive Language-Image Pretrained (CLIP) encoders to classify images in a zero-shot fashion.
We also show that a sample of the artist's work can be matched to these imitation images with a high degree of statistical reliability.
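As a minimal illustration of the zero-shot classification step described above, the sketch below scores a single image against a few artist-style text prompts with a CLIP encoder. It is an assumption-laden example of the general technique, not the paper's exact protocol: the checkpoint, prompts, and image path are placeholders.

```python
# Hedged sketch: zero-shot "artist imitation" scoring with CLIP via Hugging Face
# transformers. Checkpoint, prompts, and image path are illustrative assumptions.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Candidate "labels" phrased as text prompts; CLIP compares the image embedding
# against each prompt embedding, so no task-specific training is needed.
prompts = [
    "a painting in the style of Vincent van Gogh",  # hypothetical artist prompt
    "a painting in the style of Claude Monet",
    "a generic digital artwork",
]

image = Image.open("generated_sample.png")  # placeholder path
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarities scaled by CLIP's learned temperature.
probs = outputs.logits_per_image.softmax(dim=-1).squeeze(0)
for prompt, p in zip(prompts, probs.tolist()):
    print(f"{p:.3f}  {prompt}")
```

In this framing, a high probability for a specific artist prompt is treated as evidence of imitation; the paper additionally matches samples of the artist's own work against the generated images to assess statistical reliability.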
arXiv Detail & Related papers (2023-07-08T18:31:25Z) - Artistic Strategies to Guide Neural Networks [0.0]
This paper explores the potential and limits of current AI technology in the context of image, text, form, and the translation of semiotic spaces.
In a relatively short time, the generation of high-resolution images and 3D objects has been achieved.
Yet again, we see how artworks act as catalysts for technology development.
arXiv Detail & Related papers (2023-07-06T22:57:10Z) - Foundation Models and Fair Use [96.04664748698103]
In the U.S. and other countries, copyrighted content may be used to build foundation models without incurring liability due to the fair use doctrine.
In this work, we survey the potential risks of developing and deploying foundation models based on copyrighted content.
We discuss technical mitigations that can help foundation models stay in line with fair use.
arXiv Detail & Related papers (2023-03-28T03:58:40Z) - Can There be Art Without an Artist? [1.2691047660244335]
Generative AI based art has proliferated in the past year.
In this paper, we explore how Generative Models have impacted artistry.
We posit that if deployed responsibly, AI generative models have the possibility of being a positive, new modality in art.
arXiv Detail & Related papers (2022-09-16T01:23:19Z) - Biases in Generative Art -- A Causal Look from the Lens of Art History [3.198144010381572]
We investigate biases in the generative art AI pipeline from those that can originate due to improper problem formulation to those related to algorithm design.
We highlight how current methods fall short in modeling the process of art creation and thus contribute to various types of biases.
arXiv Detail & Related papers (2020-10-26T00:49:09Z)