Positive Style Accumulation: A Style Screening and Continuous Utilization Framework for Federated DG-ReID
- URL: http://arxiv.org/abs/2507.16238v1
- Date: Tue, 22 Jul 2025 05:21:00 GMT
- Title: Positive Style Accumulation: A Style Screening and Continuous Utilization Framework for Federated DG-ReID
- Authors: Xin Xu, Chaoyue Ren, Wei Liu, Wenke Huang, Bin Yang, Zhixi Yu, Kui Jiang
- Abstract summary: FedDG-ReID aims to learn a global server model that can be effectively generalized to source and target domains through distributed source domain data. Existing methods mainly improve the diversity of samples through style transformation. We define styles that are beneficial or harmful to the model's generalization performance as positive or negative styles.
- Score: 25.60095835928136
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Domain Generalization for Person re-identification (FedDG-ReID) aims to learn a global server model that generalizes effectively to source and target domains from distributed source-domain data. Existing methods mainly improve sample diversity through style transformation, which enhances the model's generalization performance to some extent. However, we discover that not all styles contribute to generalization performance. We therefore define styles that are beneficial or harmful to the model's generalization as positive or negative styles. This raises two new questions: how to effectively screen positive styles, and how to continuously utilize them. To address these questions, we propose a Style Screening and Continuous Utilization (SSCU) framework. First, we design a Generalization Gain-guided Dynamic Style Memory (GGDSM) for each client model to screen and accumulate generated positive styles. Meanwhile, we propose a style memory recognition loss to fully leverage the positive styles memorized by GGDSM. Furthermore, we propose a Collaborative Style Training (CST) strategy to make full use of positive styles. Unlike traditional learning strategies, our approach trains client models on two distinct branches, using both newly generated styles and the accumulated positive styles stored in memory. This strategy promotes the rapid acquisition of new styles by the client models and guarantees the continuous, thorough utilization of positive styles, both of which are highly beneficial to generalization performance. Extensive experimental results demonstrate that our method outperforms existing methods on both the source domain and the target domain.
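The abstract describes the mechanism but the page carries no code, so the following is only a minimal PyTorch-style sketch of how gain-based screening (GGDSM) and the two-branch CST update could fit together. Every identifier here (StyleMemory, cst_step, apply_style, the gain-threshold rule) is our assumption rather than the authors' API, and the style memory recognition loss is folded into a generic criterion for brevity.

```python
import random

class StyleMemory:
    """Hypothetical dynamic style memory (GGDSM-like sketch): it keeps only
    styles whose use improved a generalization proxy ("positive" styles)."""
    def __init__(self, capacity=64):
        self.capacity = capacity
        self.styles = []                      # e.g., (mean, std) feature stats

    def screen_and_add(self, style, gain):
        # Screening rule (assumption): a style is positive if training with it
        # raised the proxy metric, e.g., accuracy on a small held-out split.
        if gain > 0:
            self.styles.append(style)
            self.styles = self.styles[-self.capacity:]   # bounded memory

def cst_step(model, x, y, new_style, memory, apply_style, criterion, opt):
    """Two-branch CST-like update: one branch learns the newly generated
    style, the other replays an accumulated positive style from memory."""
    opt.zero_grad()
    loss = criterion(model(apply_style(x, new_style)), y)        # new-style branch
    if memory.styles:
        mem_style = random.choice(memory.styles)                 # memory branch
        loss = loss + criterion(model(apply_style(x, mem_style)), y)
    loss.backward()
    opt.step()
    return float(loss.detach())
```

Under this sketch, each client would measure the generalization gain of a candidate style before committing it to memory, so negative styles never enter the replay branch.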
Related papers
- Style Evolving along Chain-of-Thought for Unknown-Domain Object Detection [35.35239718038119]
The task of Single-Domain Generalized Object Detection (Single-DGOD) aims to generalize a detector to multiple unknown domains never seen during training.
We propose a new method, Style Evolving along Chain-of-Thought, which progressively integrates and expands style information along the chain of thought.
arXiv Detail & Related papers (2025-03-13T02:14:10Z) - Style-Pro: Style-Guided Prompt Learning for Generalizable Vision-Language Models [5.492174268132387]
Style-Pro is a novel prompt learning framework that mitigates overfitting and preserves the zero-shot generalization capabilities of CLIP.
Style-Pro consistently surpasses state-of-the-art methods in various settings, including base-to-new generalization, cross-dataset transfer, and domain generalization.
arXiv Detail & Related papers (2024-11-25T00:20:53Z) - HiCAST: Highly Customized Arbitrary Style Transfer with Adapter Enhanced Diffusion Models [84.12784265734238]
The goal of Arbitrary Style Transfer (AST) is to inject the artistic features of a style reference into a given image/video.
We propose HiCAST, which is capable of explicitly customizing the stylization results according to various sources of semantic clues.
A novel learning objective is leveraged for video diffusion model training, which significantly improves cross-frame temporal consistency.
arXiv Detail & Related papers (2024-01-11T12:26:23Z) - A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature.
Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.
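One way to read the "input-dependent temperature" is as an InfoNCE loss whose temperature is predicted per sample rather than fixed. The sketch below illustrates that reading only; the temperature head and its placement are illustrative assumptions, not UCAST's actual architecture.

```python
import torch
import torch.nn.functional as F

def adaptive_contrastive_loss(z_anchor, z_pos, temp_head):
    """InfoNCE-style loss with a per-sample temperature predicted from the
    anchor embedding (one reading of an input-dependent temperature)."""
    z_anchor = F.normalize(z_anchor, dim=1)                  # (B, D)
    z_pos = F.normalize(z_pos, dim=1)                        # (B, D)
    # softplus keeps each predicted temperature strictly positive.
    tau = F.softplus(temp_head(z_anchor)).squeeze(1) + 1e-4  # (B,)
    logits = (z_anchor @ z_pos.t()) / tau.unsqueeze(1)       # (B, B) similarities
    targets = torch.arange(z_anchor.size(0), device=z_anchor.device)
    return F.cross_entropy(logits, targets)                  # positives on diagonal

# Usage sketch: temp_head = torch.nn.Linear(embed_dim, 1); the loss then
# trains both the encoder and the temperature predictor end to end.
```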
arXiv Detail & Related papers (2023-03-09T04:35:00Z) - StyleAdv: Meta Style Adversarial Training for Cross-Domain Few-Shot Learning [89.86971464234533]
Cross-Domain Few-Shot Learning (CD-FSL) is a recently emerging task that tackles few-shot learning across different domains.
We propose a novel model-agnostic meta Style Adversarial training (StyleAdv) method together with a novel style adversarial attack method.
Our method becomes progressively robust to visual styles, thus boosting generalization to novel target datasets.
arXiv Detail & Related papers (2023-02-18T11:54:37Z) - Federated Domain Generalization for Image Recognition via Cross-Client Style Transfer [60.70102634957392]
Domain generalization (DG) has been a hot topic in image recognition, with the goal of training a general model that performs well on unseen domains.
In this paper, we propose a novel domain generalization method for image recognition through cross-client style transfer (CCST) without exchanging data samples.
Our method outperforms recent SOTA DG methods on two DG benchmarks (PACS, OfficeHome) and a large-scale medical image dataset (Camelyon17) in the FL setting.
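Since CCST shares style rather than samples, one can picture clients exchanging channel-wise image statistics and re-stylizing local data with them. The AdaIN-like sketch below is an illustration under that assumption, not the paper's exact transfer operator.

```python
import torch

def image_style(x):
    # Channel-wise mean/std of a batch (B, C, H, W): a compact "style"
    # a client could share with the server instead of raw images.
    mu = x.mean(dim=(2, 3), keepdim=True)
    sigma = x.std(dim=(2, 3), keepdim=True) + 1e-6
    return mu, sigma

def cross_client_stylize(x, mu_t, sigma_t):
    # AdaIN-like re-stylization: strip the local style, then re-scale with
    # statistics borrowed from another client's shared style bank.
    mu_s, sigma_s = image_style(x)
    return (x - mu_s) / sigma_s * sigma_t + mu_t
```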
arXiv Detail & Related papers (2022-10-03T13:15:55Z) - Adversarial Style Augmentation for Domain Generalized Urban-Scene
Segmentation [120.96012935286913]
We propose a novel adversarial style augmentation approach, which can generate hard stylized images during training.
Experiments on two synthetic-to-real semantic segmentation benchmarks demonstrate that AdvStyle can significantly improve the model performance on unseen real domains.
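"Hard stylized images" can be pictured as style statistics pushed in the loss-increasing direction. The sketch below shows one such adversarial style step; the single signed-gradient update and the image-level statistics are our assumptions, not necessarily the paper's recipe.

```python
import torch

def adv_style_augment(model, x, y, criterion, step_size=1.0):
    """One adversarial style step: perturb per-image channel statistics in
    the direction that increases the task loss, yielding a 'hard' batch."""
    mu = x.mean(dim=(2, 3), keepdim=True).clone().requires_grad_(True)
    sigma = x.std(dim=(2, 3), keepdim=True).clone().requires_grad_(True)
    x_norm = (x - mu.detach()) / (sigma.detach() + 1e-6)     # style-free content
    loss = criterion(model(x_norm * sigma + mu), y)          # restyled forward
    g_mu, g_sigma = torch.autograd.grad(loss, [mu, sigma])
    # Signed-gradient ascent on the style parameters only.
    mu_adv = mu.detach() + step_size * g_mu.sign()
    sigma_adv = sigma.detach() + step_size * g_sigma.sign()
    return x_norm * sigma_adv + mu_adv                       # train on this batch
```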
arXiv Detail & Related papers (2022-07-11T14:01:25Z) - Style Interleaved Learning for Generalizable Person Re-identification [69.03539634477637]
We propose a novel style interleaved learning (IL) framework for DG ReID training.
Unlike conventional learning strategies, IL incorporates two forward propagations and one backward propagation for each iteration.
We show that our model consistently outperforms state-of-the-art methods on large-scale benchmarks for DG ReID.
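The two-forward/one-backward schedule can be sketched as follows: one pass drives the gradient update while a second, gradient-free pass refreshes a class memory with style-altered features. The role assignment of the two passes and all helper names are assumptions for illustration.

```python
import torch

def il_iteration(model, x, y, stylize, loss_fn, update_memory, opt):
    """Two forward propagations, one backward propagation per iteration
    (the role assignment of the two passes is an assumption)."""
    # Forward pass 1: original-style features drive the weight update.
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()                    # the only backward pass this iteration
    opt.step()
    # Forward pass 2: style-interleaved features refresh the class memory
    # without contributing any gradients.
    with torch.no_grad():
        update_memory(model(stylize(x)), y)
```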
arXiv Detail & Related papers (2022-07-07T07:41:32Z) - Feature-based Style Randomization for Domain Generalization [27.15070576861912]
Domain generalization (DG) aims to first learn a generic model on multiple source domains and then generalize directly to an arbitrary unseen target domain without any additional adaptation.
This paper develops a simple yet effective feature-based style randomization module to achieve feature-level augmentation.
Compared with existing image-level augmentation, our feature-level augmentation is more goal-oriented and yields more diverse samples.
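Feature-level style augmentation typically means perturbing channel-wise statistics of intermediate feature maps. The sketch below shows such an operation (MixStyle/FSR-like); the noise model is chosen for illustration rather than taken from the paper.

```python
import torch

def feature_style_randomization(f, alpha=0.5):
    """Perturb channel-wise statistics of intermediate features (B, C, H, W);
    the blending rule here is illustrative, not the paper's exact module."""
    mu = f.mean(dim=(2, 3), keepdim=True)
    sigma = f.std(dim=(2, 3), keepdim=True) + 1e-6
    f_norm = (f - mu) / sigma                      # content with style removed
    # Random target statistics: blend each sample's own stats with noise;
    # alpha controls augmentation strength.
    mu_new = (1 - alpha) * mu + alpha * torch.randn_like(mu)
    sigma_new = (1 - alpha) * sigma + alpha * torch.rand_like(sigma)
    return f_norm * sigma_new + mu_new             # restyled features
```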
arXiv Detail & Related papers (2021-06-06T16:34:44Z)