Learn to Preserve Personality: Federated Foundation Models in Recommendations
- URL: http://arxiv.org/abs/2506.11563v1
- Date: Fri, 13 Jun 2025 08:17:07 GMT
- Title: Learn to Preserve Personality: Federated Foundation Models in Recommendations
- Authors: Zhiwei Li, Guodong Long, Chunxu Zhang, Honglei Zhang, Jing Jiang, Chengqi Zhang
- Abstract summary: Federated foundation models (FFM) provide a structural means to decouple shared knowledge from individual-specific adaptations. We envision future personal agents, powered by personalized adaptive FMs, guiding user decisions on content.
- Score: 36.85043146335166
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A core learning challenge for existing Foundation Models (FMs) is navigating the tradeoff between generalization and personalization, a dilemma that has been highlighted by various parameter-efficient adaptation techniques. Federated foundation models (FFM) provide a structural means to decouple shared knowledge from individual-specific adaptations via decentralized processes. Recommendation systems offer a perfect testbed for FFMs, given their reliance on rich implicit feedback reflecting unique user characteristics. This position paper discusses a novel learning paradigm where FFMs not only harness their generalization capabilities but are specifically designed to preserve the integrity of user personality, illustrated thoroughly within the recommendation context. We envision future personal agents, powered by personalized adaptive FMs, guiding user decisions on content. Such an architecture promises a user-centric, decentralized system where individuals maintain control over their personalized agents.
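The decoupling the abstract describes, shared knowledge learned federatively plus an individual-specific adaptation kept on-device, can be made concrete with a small sketch. The snippet below is a minimal illustration under assumed details (a LoRA-style low-rank adapter and plain FedAvg aggregation), not the paper's actual architecture:

```python
# Hypothetical sketch of the FFM idea: a shared backbone whose weights are
# federated across clients, plus a small personal adapter that never leaves
# the device. Names and shapes are illustrative assumptions.
import torch
import torch.nn as nn

class PersonalizedFM(nn.Module):
    def __init__(self, d_model: int = 64, rank: int = 4):
        super().__init__()
        # Shared knowledge: trained collaboratively, sent to the server.
        self.backbone = nn.Linear(d_model, d_model)
        # Individual-specific adaptation: a low-rank (LoRA-style) residual
        # that stays on the client, preserving the user's "personality".
        self.lora_a = nn.Linear(d_model, rank, bias=False)
        self.lora_b = nn.Linear(rank, d_model, bias=False)
        nn.init.zeros_(self.lora_b.weight)  # adapter starts as a no-op

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.backbone(x) + self.lora_b(self.lora_a(x))

def fedavg(global_model: PersonalizedFM,
           client_models: list[PersonalizedFM]) -> None:
    """Average only the shared backbone; personal adapters are never uploaded."""
    with torch.no_grad():
        for name, param in global_model.backbone.named_parameters():
            stacked = torch.stack([dict(m.backbone.named_parameters())[name]
                                   for m in client_models])
            param.copy_(stacked.mean(dim=0))
```

Only `backbone` parameters ever leave the client in this sketch; the `lora_a`/`lora_b` adapter encodes the user-specific adaptation locally.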
Related papers
- Who Should I Listen To? Adaptive Collaboration in Personalized Federated Learning [6.427792270209119]
We propose an approach based on adaptive collaboration, where clients decide adaptively not only how much to rely on others, but also whom to trust. We instantiate this principle in FEDMOSAIC, a federated co-training method in which clients exchange predictions over a shared unlabeled dataset. Our results demonstrate the potential of data-aware collaboration for robust and effective personalization.
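As a rough illustration of the adaptive-collaboration idea, the sketch below weights peers by how well their predictions on a shared unlabeled set agree with the client's own; the KL-based trust score and temperature are assumptions, not FEDMOSAIC's exact rule:

```python
# Hedged sketch of adaptive collaboration: clients share soft predictions on
# a common unlabeled set, and each client weights peers by agreement with
# its own predictions. All names here are illustrative assumptions.
import numpy as np

def trust_weights(my_probs: np.ndarray, peer_probs: list[np.ndarray],
                  temperature: float = 1.0) -> np.ndarray:
    """Weight each peer by (negative) average KL divergence to my predictions."""
    eps = 1e-9
    scores = np.array([
        -np.mean(np.sum(my_probs * (np.log(my_probs + eps) - np.log(p + eps)),
                        axis=1))
        for p in peer_probs
    ])
    w = np.exp(scores / temperature)
    return w / w.sum()

def aggregate_targets(peer_probs: list[np.ndarray], w: np.ndarray) -> np.ndarray:
    """Trust-weighted ensemble used as distillation targets on the shared set."""
    return sum(wi * p for wi, p in zip(w, peer_probs))
```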
arXiv Detail & Related papers (2025-06-30T20:53:01Z)
- Towards Artificial General or Personalized Intelligence? A Survey on Foundation Models for Personalized Federated Intelligence [59.498447610998525]
The rise of large language models (LLMs) has reshaped the artificial intelligence landscape. This paper focuses on adapting these powerful models to meet the specific needs and preferences of users while maintaining privacy and efficiency. We propose personalized federated intelligence (PFI), which integrates the privacy-preserving advantages of federated learning with the zero-shot generalization capabilities of FMs.
arXiv Detail & Related papers (2025-05-11T08:57:53Z)
- Federated Adapter on Foundation Models: An Out-Of-Distribution Approach [42.31209296544899]
We propose a privacy-preserving approach to fine-tune Federated Foundation Models (FedFM). FedOA employs adapter-based parameter-tuning methods for efficacy and introduces distance-based regularization to align feature distributions and guarantee OOD generalization for each client.
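A hedged sketch of what such distance-based regularization could look like: the client's adapted features are penalized for drifting from the frozen global model's features. The loss form and `lam` weight below are illustrative assumptions, not FedOA's exact objective:

```python
# Illustrative distance-based regularizer: pull the client's personalized
# features toward the shared global model's features, a common recipe for
# OOD robustness. Names are hypothetical.
import torch
import torch.nn.functional as F

def adapter_reg_loss(local_feats: torch.Tensor, global_feats: torch.Tensor,
                     logits: torch.Tensor, labels: torch.Tensor,
                     lam: float = 0.1) -> torch.Tensor:
    task = F.cross_entropy(logits, labels)
    # Distance term: keep personalized features close to the shared model's.
    reg = F.mse_loss(local_feats, global_feats.detach())
    return task + lam * reg
```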
arXiv Detail & Related papers (2025-05-02T07:33:00Z)
- LoRe: Personalizing LLMs via Low-Rank Reward Modeling [47.12507639759984]
We introduce a novel framework that leverages low-rank preference modeling to efficiently learn and generalize user-specific reward functions. We validate our method on multiple preference datasets, demonstrating superior generalization to unseen users and improved accuracy in preference prediction tasks.
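The low-rank idea can be sketched as a shared basis of reward features mixed by a per-user weight vector, so a new user only needs to fit a low-dimensional vector rather than a full reward model. Shapes and names below are assumptions:

```python
# Minimal sketch of low-rank reward modeling: every user's reward is a
# learned combination of a small shared basis of reward functions.
import torch
import torch.nn as nn

class LowRankReward(nn.Module):
    def __init__(self, d_feat: int, rank: int, n_users: int):
        super().__init__()
        self.basis = nn.Linear(d_feat, rank)       # shared reward basis
        self.user_w = nn.Embedding(n_users, rank)  # per-user mixing weights

    def forward(self, feats: torch.Tensor, user_id: torch.Tensor) -> torch.Tensor:
        # Reward for a (user, response) pair: <w_user, basis(features)>.
        return (self.user_w(user_id) * self.basis(feats)).sum(dim=-1)
```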
arXiv Detail & Related papers (2025-04-20T01:16:24Z)
- Personalized Recommendation Models in Federated Settings: A Survey [32.46278932694137]
Federated recommender systems (FedRecSys) have emerged as a pivotal solution for privacy-aware recommendations. Current research efforts predominantly concentrate on adapting traditional recommendation architectures to federated environments. User personalization modeling, which is essential for capturing heterogeneous preferences in this decentralized and non-IID data setting, remains underexplored.
arXiv Detail & Related papers (2025-03-10T09:20:20Z)
- Efficient and Robust Regularized Federated Recommendation [52.24782464815489]
Federated recommender systems address both user preference and privacy concerns.
We propose a novel method that incorporates non-uniform gradient descent to improve communication efficiency.
Experiments demonstrate RFRecF's superior robustness compared to diverse baselines.
arXiv Detail & Related papers (2024-11-03T12:10:20Z)
- Personalized Federated Learning with Adaptive Feature Aggregation and Knowledge Transfer [0.0]
Federated Learning (FL) is a popular privacy-preserving machine learning paradigm for generating a single model on decentralized data.
We propose a new method, Personalized Federated Learning with Adaptive Feature Aggregation and Knowledge Transfer (FedAFK).
We conduct extensive experiments on three datasets in two widely-used heterogeneous settings and show the superior performance of our proposed method over thirteen state-of-the-art baselines.
arXiv Detail & Related papers (2024-10-19T11:32:39Z)
- Personalized Federated Collaborative Filtering: A Variational AutoEncoder Approach [49.63614966954833]
Federated Collaborative Filtering (FedCF) is an emerging field focused on developing a new recommendation framework while preserving privacy. Existing FedCF methods typically combine distributed Collaborative Filtering (CF) algorithms with privacy-preserving mechanisms, and then preserve personalized information in a user embedding vector. This paper proposes a novel personalized FedCF method that preserves users' personalized information in a latent variable and a neural model simultaneously.
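A minimal sketch of that split, assuming the encoder is the federated (shared) component while the decoder and the per-user latent stay client-side; the actual FedCF architecture may differ:

```python
# Hedged sketch: personalization lives both in a per-user latent variable
# and in locally kept decoder weights, while the encoder can be shared
# federatively. Layer sizes are assumptions.
import torch
import torch.nn as nn

class FedCFVAE(nn.Module):
    def __init__(self, n_items: int, d_latent: int = 32):
        super().__init__()
        self.encoder = nn.Linear(n_items, 2 * d_latent)  # shared, federated
        self.decoder = nn.Linear(d_latent, n_items)      # can stay local

    def forward(self, interactions: torch.Tensor):
        mu, logvar = self.encoder(interactions).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.decoder(z), mu, logvar
```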
arXiv Detail & Related papers (2024-08-16T05:49:14Z)
- Personalized Language Modeling from Personalized Human Feedback [45.16986573937782]
Personalized large language models (LLMs) are designed to tailor responses to individual user preferences. We propose Personalized-RLHF (P-RLHF), an efficient framework that utilizes a lightweight user model to capture individual user preferences. We show that personalized LLMs trained using P-RLHF generate responses that are more closely aligned with individual user preferences.
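One plausible reading of a "lightweight user model" is a learned per-user soft prompt that conditions the language model; the sketch below is that assumption, not P-RLHF's exact design:

```python
# Illustrative lightweight user model: a learned user embedding is prepended
# to the prompt representation as soft tokens, conditioning generation on
# individual preferences. Hypothetical names and shapes.
import torch
import torch.nn as nn

class UserConditioner(nn.Module):
    def __init__(self, n_users: int, d_model: int, n_soft_tokens: int = 4):
        super().__init__()
        self.user_tokens = nn.Embedding(n_users, n_soft_tokens * d_model)
        self.n_soft_tokens = n_soft_tokens
        self.d_model = d_model

    def forward(self, user_id: torch.Tensor,
                prompt_embeds: torch.Tensor) -> torch.Tensor:
        # (batch, n_soft_tokens, d_model) user prefix + original prompt embeds
        prefix = self.user_tokens(user_id).view(-1, self.n_soft_tokens,
                                                self.d_model)
        return torch.cat([prefix, prompt_embeds], dim=1)
```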
arXiv Detail & Related papers (2024-02-06T04:18:58Z)
- FediOS: Decoupling Orthogonal Subspaces for Personalization in Feature-skew Federated Learning [6.076894295435773]
In personalized federated learning (pFL), clients may have heterogeneous (also known as non-IID) data.
In FediOS, we reformulate the decoupling into two feature extractors (generic and personalized) and one shared prediction head.
The shared prediction head is trained to balance the importance of generic and personalized features during inference.
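A minimal sketch of this decoupling, with assumed layer shapes: the generic extractor is aggregated across clients, the personalized extractor stays local, and the shared head consumes both feature streams:

```python
# Sketch of a FediOS-style split: generic extractor (federated), personal
# extractor (local), shared head weighing the two streams. Sizes assumed.
import torch
import torch.nn as nn

class FediOSNet(nn.Module):
    def __init__(self, d_in: int, d_feat: int, n_classes: int):
        super().__init__()
        self.generic = nn.Linear(d_in, d_feat)        # aggregated across clients
        self.personal = nn.Linear(d_in, d_feat)       # kept on-device
        self.head = nn.Linear(2 * d_feat, n_classes)  # shared; balances both

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([self.generic(x), self.personal(x)], dim=-1)
        return self.head(feats)
```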
arXiv Detail & Related papers (2023-11-30T13:50:38Z)
- Learning to Specialize: Joint Gating-Expert Training for Adaptive MoEs in Decentralized Settings [41.98633628526484]
Mixture-of-Experts (MoEs) achieve scalability by dynamically activating subsets of their components. Motivated by inference costs and data heterogeneity, we study how joint training of gating functions and experts can allocate domain-specific expertise.
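A compact sketch of joint gating-expert training with top-k routing; expert count, routing rule, and sizes are illustrative assumptions rather than the paper's setup:

```python
# Toy MoE: a gate softly routes each input to a few experts, and both gate
# and experts are trained end-to-end so experts specialize.
import torch
import torch.nn as nn

class SmallMoE(nn.Module):
    def __init__(self, d_in: int, d_out: int, n_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.gate = nn.Linear(d_in, n_experts)
        self.experts = nn.ModuleList(nn.Linear(d_in, d_out)
                                     for _ in range(n_experts))
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scores = self.gate(x)                          # (batch, n_experts)
        top_w, top_i = scores.topk(self.top_k, dim=-1)
        top_w = top_w.softmax(dim=-1)                  # renormalize over top-k
        out = torch.zeros(x.size(0), self.experts[0].out_features,
                          device=x.device)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = top_i[:, k] == e
                if mask.any():
                    out[mask] += top_w[mask, k:k + 1] * expert(x[mask])
        return out
```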
arXiv Detail & Related papers (2023-06-14T15:47:52Z)
- Selective Knowledge Sharing for Privacy-Preserving Federated Distillation without A Good Teacher [52.2926020848095]
Federated learning is vulnerable to white-box attacks and struggles to adapt to heterogeneous clients.
This paper proposes a selective knowledge sharing mechanism for federated distillation (FD), termed Selective-FD.
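Selective sharing can be illustrated with a simple confidence filter: a client only contributes soft predictions whose confidence clears a threshold. This is an assumed stand-in for Selective-FD's actual client- and server-side selectors:

```python
# Illustrative selective sharing for federated distillation: filter out
# ambiguous predictions before they are shared, so noisy knowledge does not
# reach aggregation. Threshold is an assumption for illustration.
import numpy as np

def select_confident(probs: np.ndarray, threshold: float = 0.8):
    """Keep only predictions whose max class probability clears the threshold."""
    keep = probs.max(axis=1) >= threshold
    return probs[keep], keep
```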
arXiv Detail & Related papers (2023-04-04T12:04:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.