FedStyle: Style-Based Federated Learning Crowdsourcing Framework for Art Commissions
- URL: http://arxiv.org/abs/2404.16336v1
- Date: Thu, 25 Apr 2024 04:53:43 GMT
- Title: FedStyle: Style-Based Federated Learning Crowdsourcing Framework for Art Commissions
- Authors: Changjuan Ran, Yeting Guo, Fang Liu, Shenglan Cui, Yunfan Ye
- Abstract summary: FedStyle is a style-based federated learning crowdsourcing framework.
It allows artists to train local style models and share model parameters rather than artworks for collaboration.
It addresses extreme data heterogeneity by having artists learn abstract representations of their styles and align them with the server.
- Score: 3.1676484382068315
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The unique artistic style is crucial to artists' occupational competitiveness, yet prevailing Art Commission Platforms rarely support style-based retrieval. Meanwhile, the fast-growing generative AI techniques aggravate artists' concerns about releasing personal artworks to public platforms. To achieve artistic style-based retrieval without exposing personal artworks, we propose FedStyle, a style-based federated learning crowdsourcing framework. It allows artists to train local style models and share model parameters rather than artworks for collaboration. However, most artists possess a unique artistic style, resulting in severe model drift among them. FedStyle addresses such extreme data heterogeneity by having artists learn their abstract style representations and align with the server, rather than merely aggregating model parameters lacking semantics. Besides, we introduce contrastive learning to meticulously construct the style representation space, pulling artworks with similar styles closer and keeping different ones apart in the embedding space. Extensive experiments on the proposed datasets demonstrate the superiority of FedStyle.
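To make the contrastive component of the abstract concrete, below is a minimal sketch (an assumption for illustration, not FedStyle's released code) of a supervised contrastive loss over style embeddings that pulls same-style artworks closer and pushes different styles apart; the function name, temperature value, and toy usage are hypothetical.
```python
# Sketch only: supervised contrastive loss on style embeddings, in the spirit
# of the abstract. All names and defaults here are illustrative assumptions.
import torch
import torch.nn.functional as F

def style_contrastive_loss(embeddings: torch.Tensor,
                           style_labels: torch.Tensor,
                           temperature: float = 0.1) -> torch.Tensor:
    """embeddings: (N, D) outputs of a local style encoder.
    style_labels: (N,) integer style ids; same id = same artistic style."""
    z = F.normalize(embeddings, dim=1)          # unit-norm style embeddings
    logits = z @ z.t() / temperature            # pairwise cosine similarities
    n = z.size(0)
    not_self = 1.0 - torch.eye(n, device=z.device)

    # Log-softmax over all other samples (self excluded from the denominator).
    exp_logits = logits.exp() * not_self
    log_prob = logits - exp_logits.sum(dim=1, keepdim=True).log()

    # Positives: other artworks in the batch sharing the anchor's style.
    same_style = (style_labels.unsqueeze(0) == style_labels.unsqueeze(1)).float() * not_self
    pos_counts = same_style.sum(dim=1).clamp(min=1)
    loss_per_anchor = -(log_prob * same_style).sum(dim=1) / pos_counts

    # Average only over anchors that have at least one positive.
    has_pos = same_style.sum(dim=1) > 0
    return loss_per_anchor[has_pos].mean()

# Toy usage: 8 fake embeddings from two styles. In the federated setting the
# abstract describes, each artist would compute such a loss locally and share
# only model parameters (e.g. model.state_dict()), never the artworks.
if __name__ == "__main__":
    emb = torch.randn(8, 128)
    labels = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
    print(style_contrastive_loss(emb, labels).item())
```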
Related papers
- IntroStyle: Training-Free Introspective Style Attribution using Diffusion Features [89.95303251220734]
We present a training-free framework to solve the style attribution problem, using the features produced by a diffusion model alone.
This is denoted as introspective style attribution (IntroStyle) and demonstrates superior performance to state-of-the-art models for style retrieval.
We also introduce a synthetic dataset of Style Hacks (SHacks) to isolate artistic style and evaluate fine-grained style attribution performance.
arXiv Detail & Related papers (2024-12-19T01:21:23Z)
- Learning Artistic Signatures: Symmetry Discovery and Style Transfer [8.288443063900825]
There is no undisputed definition of artistic style.
We argue that style should be thought of as a set of global symmetries that dictate the arrangement of local textures.
We show that by considering both local and global features, using both Lie generators and traditional measures of texture, we can quantitatively capture the stylistic similarity between artists better than with either set of features alone.
arXiv Detail & Related papers (2024-12-05T18:56:23Z)
- Art-Free Generative Models: Art Creation Without Graphic Art Knowledge [50.60063523054282]
We propose a text-to-image generation model trained without access to art-related content.
We then introduce a simple yet effective method to learn an art adapter using only a few examples of selected artistic styles.
arXiv Detail & Related papers (2024-11-29T18:59:01Z)
- Rethinking Artistic Copyright Infringements in the Era of Text-to-Image Generative Models [47.19481598385283]
ArtSavant is a tool to determine the unique style of an artist by comparing it to a reference dataset of works from WikiArt.
We then perform a large-scale empirical study to provide quantitative insight on the prevalence of artistic style copying across 3 popular text-to-image generative models.
arXiv Detail & Related papers (2024-04-11T17:59:43Z)
- CreativeSynth: Creative Blending and Synthesis of Visual Arts based on Multimodal Diffusion [74.44273919041912]
Large-scale text-to-image generative models have made impressive strides, showcasing their ability to synthesize a vast array of high-quality images.
However, adapting these models for artistic image editing presents two significant challenges.
We build CreativeSynth, a unified framework based on a diffusion model that can coordinate multimodal inputs.
arXiv Detail & Related papers (2024-01-25T10:42:09Z)
- ArtFusion: Controllable Arbitrary Style Transfer using Dual Conditional Latent Diffusion Models [0.0]
Arbitrary Style Transfer (AST) aims to transform images by adopting the style from any selected artwork.
We propose a new approach, ArtFusion, which provides a flexible balance between content and style.
arXiv Detail & Related papers (2023-06-15T17:58:36Z)
- Inversion-Based Style Transfer with Diffusion Models [78.93863016223858]
Previous arbitrary example-guided artistic image generation methods often fail to control shape changes or convey elements.
We propose an inversion-based style transfer method (InST), which can efficiently and accurately learn the key information of an image.
arXiv Detail & Related papers (2022-11-23T18:44:25Z)
- Art Creation with Multi-Conditional StyleGANs [81.72047414190482]
A human artist needs a combination of unique skills, understanding, and genuine intention to create artworks that evoke deep feelings and emotions.
We introduce a multi-conditional Generative Adversarial Network (GAN) approach trained on large amounts of human paintings to synthesize realistic-looking paintings that emulate human art.
arXiv Detail & Related papers (2022-02-23T20:45:41Z)
- Demographic Influences on Contemporary Art with Unsupervised Style Embeddings [25.107166631583212]
contempArt is a collection of paintings and drawings, together with a detailed graph network based on social connections on Instagram and additional socio-demographic information.
We evaluate three methods suited for generating unsupervised style embeddings of images and correlate them with the remaining data.
arXiv Detail & Related papers (2020-09-30T10:13:18Z)