Edge-Assisted Collaborative Fine-Tuning for Multi-User Personalized Artificial Intelligence Generated Content (AIGC)
- URL: http://arxiv.org/abs/2508.04745v1
- Date: Wed, 06 Aug 2025 06:07:24 GMT
- Title: Edge-Assisted Collaborative Fine-Tuning for Multi-User Personalized Artificial Intelligence Generated Content (AIGC)
- Authors: Nan Li, Wanting Yang, Marie Siew, Zehui Xiong, Binbin Chen, Shiwen Mao, Kwok-Yan Lam
- Abstract summary: Cloud-based solutions aid in computation but often fall short in addressing privacy risks, personalization efficiency, and communication costs. We propose a novel cluster-aware hierarchical federated aggregation framework. We show that the framework achieves accelerated convergence while maintaining practical viability for scalable multi-user personalized AIGC services.
- Score: 38.59865959433328
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diffusion models (DMs) have emerged as powerful tools for high-quality content generation, yet their intensive computational requirements for inference pose challenges for resource-constrained edge devices. Cloud-based solutions aid in computation but often fall short in addressing privacy risks, personalization efficiency, and communication costs in multi-user edge-AIGC scenarios. To bridge this gap, we first analyze existing edge-AIGC applications in personalized content synthesis, revealing their limitations in efficiency and scalability. We then propose a novel cluster-aware hierarchical federated aggregation framework. Based on parameter-efficient local fine-tuning via Low-Rank Adaptation (LoRA), the framework first clusters clients according to the similarity of their uploaded task requirements, followed by intra-cluster aggregation on the server side for enhanced personalization. Subsequently, an inter-cluster knowledge interaction paradigm enables hybrid-style content generation across diverse clusters. Building upon federated learning (FL) collaboration, our framework simultaneously trains personalized models for individual users on their devices and a shared global model enhanced with multiple LoRA adapters on the server, enabling efficient edge inference; meanwhile, all prompts used for clustering and inference are encoded prior to transmission, further mitigating the risk of plaintext leakage. Our evaluations demonstrate that the framework achieves accelerated convergence while maintaining practical viability for scalable multi-user personalized AIGC services under edge constraints.
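As a rough illustration of the cluster-aware aggregation step described in the abstract, the sketch below groups clients by the similarity of their encoded task-requirement embeddings and then averages the uploaded LoRA adapters within each cluster. Everything here (the k-means choice, the embedding shape, the adapter dictionaries) is an assumption for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_clients(requirement_embeddings, n_clusters=3):
    """Group clients by similarity of their encoded task-requirement embeddings.

    requirement_embeddings: (num_clients, dim) array of encoded prompts/requirements.
    Returns one cluster label per client.
    """
    return KMeans(n_clusters=n_clusters, random_state=0).fit_predict(requirement_embeddings)

def aggregate_lora_per_cluster(lora_updates, labels):
    """Intra-cluster FedAvg over uploaded LoRA adapters.

    lora_updates: list of dicts, one per client, mapping adapter parameter
                  name -> np.ndarray (e.g. the low-rank A and B matrices).
    labels:       cluster label per client (same order as lora_updates).
    Returns {cluster_id: averaged adapter dict}.
    """
    cluster_adapters = {}
    for c in np.unique(labels):
        members = [lora_updates[i] for i in np.where(labels == c)[0]]
        cluster_adapters[c] = {
            name: np.mean([m[name] for m in members], axis=0)
            for name in members[0]
        }
    return cluster_adapters
```

Under the paper's description, the resulting per-cluster adapters would be attached to the shared global diffusion model on the server, while each client keeps its personalized adapter locally.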
Related papers
- Self-Enhanced Image Clustering with Cross-Modal Semantic Consistency [57.961869351897384]
We propose a framework based on cross-modal semantic consistency for efficient image clustering. Our framework first builds a strong foundation via Cross-Modal Semantic Consistency. In the first stage, we train lightweight clustering heads to align with the rich semantics of the pre-trained model. In the second stage, we introduce a Self-Enhanced fine-tuning strategy.
arXiv Detail & Related papers (2025-08-02T08:12:57Z)
- Byzantine Resilient Federated Multi-Task Representation Learning [1.6114012813668932]
We propose BR-MTRL, a Byzantine-resilient multi-task representation learning framework that handles faulty or malicious agents. Our approach leverages representation learning through a shared neural network model, where all clients share fixed layers, except for a client-specific final layer.
arXiv Detail & Related papers (2025-03-24T23:26:28Z)
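A minimal sketch of the shared-representation layout mentioned in the paper above (all clients share fixed layers plus a client-specific final layer); the layer sizes and module names are illustrative assumptions, not the BR-MTRL code.

```python
import torch.nn as nn

class SharedBackbone(nn.Module):
    """Representation layers whose weights are shared and aggregated across clients."""
    def __init__(self, in_dim=32, hidden=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )

    def forward(self, x):
        return self.body(x)

class ClientModel(nn.Module):
    """Shared backbone plus a client-specific final layer that stays local."""
    def __init__(self, backbone, hidden=64, n_classes=10):
        super().__init__()
        self.backbone = backbone                   # federated / aggregated on the server
        self.head = nn.Linear(hidden, n_classes)   # personal head, never uploaded

    def forward(self, x):
        return self.head(self.backbone(x))
```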
- Fundamental Limits of Hierarchical Secure Aggregation with Cyclic User Association [93.46811590752814]
Hierarchical secure aggregation (HSA) is motivated by federated learning (FL). In this paper, we consider HSA with a cyclic association pattern where each user is connected to $B$ consecutive relays. We propose an efficient aggregation scheme which includes a message design for the inputs inspired by gradient coding.
arXiv Detail & Related papers (2025-03-06T15:53:37Z)
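To make the cyclic association pattern in the paper above concrete, here is a small sketch (with assumed user and relay counts) that connects user $u$ to the $B$ consecutive relays $u, u+1, \ldots, u+B-1$ modulo the number of relays.

```python
def cyclic_association(num_users, num_relays, B):
    """Connect user u to the B consecutive relays u, u+1, ..., u+B-1 (mod num_relays)."""
    return {u: [(u + b) % num_relays for b in range(B)] for u in range(num_users)}

# Example: 6 users, 6 relays, each user associated with B = 2 consecutive relays.
print(cyclic_association(6, 6, 2))  # {0: [0, 1], 1: [1, 2], ..., 5: [5, 0]}
```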
- Privacy-Aware Joint DNN Model Deployment and Partitioning Optimization for Collaborative Edge Inference Services [14.408050197587654]
Edge inference (EI) has emerged as a promising paradigm to address the growing limitations of cloud-based Deep Neural Network (DNN) inference services. Deploying DNN models on resource-constrained edge devices introduces additional challenges, including limited computing/storage resources, dynamic service demands, and heightened privacy risks. This paper presents a novel privacy-aware optimization framework that jointly addresses DNN model deployment, user-server association, and model partitioning.
arXiv Detail & Related papers (2025-02-22T05:27:24Z)
- Client-Centric Federated Adaptive Optimization [78.30827455292827]
Federated Learning (FL) is a distributed learning paradigm where clients collaboratively train a model while keeping their own data private. We propose Client-Centric Federated Adaptive Optimization, a class of novel federated optimization approaches.
arXiv Detail & Related papers (2025-01-17T04:00:50Z)
- Anchor Learning with Potential Cluster Constraints for Multi-view Clustering [11.536710289572552]
Anchor-based multi-view clustering (MVC) has received extensive attention due to its efficient performance. We propose a novel method termed Anchor Learning with Potential Cluster Constraints for Multi-view Clustering (ALPC).
arXiv Detail & Related papers (2024-12-21T07:43:05Z)
- Design Optimization of NOMA Aided Multi-STAR-RIS for Indoor Environments: A Convex Approximation Imitated Reinforcement Learning Approach [51.63921041249406]
Non-orthogonal multiple access (NOMA) enables multiple users to share the same frequency band and can be combined with simultaneously transmitting and reflecting reconfigurable intelligent surfaces (STAR-RISs); however, deploying STAR-RISs indoors presents challenges in interference mitigation, power consumption, and real-time configuration. A novel network architecture utilizing multiple access points (APs), STAR-RISs, and NOMA is proposed for indoor communication.
arXiv Detail & Related papers (2024-06-19T07:17:04Z)
- Faster Convergence on Heterogeneous Federated Edge Learning: An Adaptive Clustered Data Sharing Approach [27.86468387141422]
Federated Edge Learning (FEEL) emerges as a pioneering distributed machine learning paradigm for 6G hyper-connectivity. Current FEEL algorithms struggle with non-independent and non-identically distributed (non-IID) data, leading to elevated communication costs and compromised model accuracy. We introduce a clustered data sharing framework, mitigating data heterogeneity by selectively sharing partial data from cluster heads to trusted associates. Experiments show that the proposed framework facilitates FEEL on non-IID datasets with a faster convergence rate and higher model accuracy in a limited communication environment.
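A toy sketch of the cluster-head data-sharing idea above: each cluster head forwards a small random fraction of its local samples to the other clients in its cluster. The sharing ratio and the data structures are assumptions for illustration only.

```python
import numpy as np

def share_from_cluster_heads(client_data, labels, heads, share_ratio=0.1, seed=0):
    """Append a small random subset of each cluster head's samples to the other
    clients in the same cluster.

    client_data: {client_id: np.ndarray of local samples}
    labels:      {client_id: cluster_id}
    heads:       {cluster_id: client_id acting as the cluster head}
    """
    rng = np.random.default_rng(seed)
    augmented = {c: data.copy() for c, data in client_data.items()}
    for cluster_id, head in heads.items():
        head_data = client_data[head]
        n_share = max(1, int(share_ratio * len(head_data)))
        shared = head_data[rng.choice(len(head_data), size=n_share, replace=False)]
        for client, cluster in labels.items():
            if cluster == cluster_id and client != head:
                augmented[client] = np.concatenate([augmented[client], shared])
    return augmented
```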
arXiv Detail & Related papers (2024-06-14T07:22:39Z) - Client Orchestration and Cost-Efficient Joint Optimization for
NOMA-Enabled Hierarchical Federated Learning [55.49099125128281]
We propose a non-orthogonal multiple access (NOMA) enabled HFL system under semi-synchronous cloud model aggregation.
We show that the proposed scheme outperforms the considered benchmarks in terms of HFL performance improvement and total cost reduction.
arXiv Detail & Related papers (2023-11-03T13:34:44Z)
- Federated cINN Clustering for Accurate Clustered Federated Learning [33.72494731516968]
Federated Learning (FL) presents an innovative approach to privacy-preserving distributed machine learning.
We propose the Federated cINN Clustering Algorithm (FCCA) to robustly cluster clients into different groups.
arXiv Detail & Related papers (2023-09-04T10:47:52Z)
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
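For intuition on the analog over-the-air computation mentioned in the paper above, the following simplified sketch treats the wireless channel as a noisy adder: all clients transmit simultaneously, so the server receives approximately the sum of their model updates in a single channel use. The additive-noise channel model and scaling are crude assumptions for illustration.

```python
import numpy as np

def over_the_air_aggregate(client_updates, noise_std=0.01, seed=0):
    """Simulate analog superposition: the server observes the noisy sum of all
    simultaneously transmitted updates, then normalizes to estimate the average."""
    rng = np.random.default_rng(seed)
    stacked = np.stack(client_updates)                        # shape (K, d)
    noise = rng.normal(0.0, noise_std, size=stacked.shape[1])
    received = stacked.sum(axis=0) + noise                    # one channel use for all K clients
    return received / len(client_updates)                     # estimate of the averaged update
```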
- Asynchronous Semi-Decentralized Federated Edge Learning for Heterogeneous Clients [3.983055670167878]
Federated edge learning (FEEL) has drawn much attention as a privacy-preserving distributed learning framework for mobile edge networks.
In this work, we investigate a novel semi-decentralized FEEL (SD-FEEL) architecture where multiple edge servers collaborate to incorporate more data from edge devices in training.
arXiv Detail & Related papers (2021-12-09T07:39:31Z)