FedRef: Communication-Efficient Bayesian Fine Tuning with Reference Model
- URL: http://arxiv.org/abs/2506.23210v2
- Date: Wed, 16 Jul 2025 11:06:41 GMT
- Title: FedRef: Communication-Efficient Bayesian Fine Tuning with Reference Model
- Authors: Taehwan Yoon, Bongjun Choi
- Abstract summary: Federated learning (FL) is used in distributed scenarios to train artificial intelligence (AI) models. In a federated learning scenario, the server generally never sees users' data. We propose reference model-based federated learning for optimal fine-tuning.
- Score: 1.5028783379278987
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) is used in distributed scenarios to train artificial intelligence (AI) models while ensuring users' privacy. In a federated learning scenario, the server generally never sees users' data, which makes the training process efficient in terms of data privacy. However, federated AI models may not sufficiently satisfy users' expectations of model performance. Furthermore, AI users have a wide range of needs, and it is not easy to satisfy them all. These issues can be addressed through AI model optimization, fine-tuning, or personalization to achieve optimal model performance. To address these model optimization challenges, we propose reference model-based federated learning for optimal fine-tuning, which overcomes catastrophic forgetting in each round. The method is derived from Bayesian parameter-efficient transfer learning; it includes an optimal proximal term and utilizes a reference model that incorporates previous model parameters. As a result, it achieves both high model performance and low client-side computing cost.
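To make the idea concrete, here is a minimal sketch (not the authors' code) of a client update with a proximal penalty anchored to a reference model built from previous rounds, in the spirit of the abstract. The quadratic FedProx-style form and the weight `mu` are assumptions for illustration, not the paper's exact objective.

```python
# Hypothetical sketch: local fine-tuning with a proximal term toward a
# reference model carrying previous rounds' parameters. The quadratic
# penalty and `mu` are assumptions, not the paper's exact formulation.
import torch
import torch.nn.functional as F

def local_update(model, reference_model, loader, lr=0.01, mu=0.1, epochs=1):
    """Fine-tune `model` on local data while staying close to `reference_model`."""
    ref_params = [p.detach().clone() for p in reference_model.parameters()]
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            task_loss = F.cross_entropy(model(x), y)
            # Proximal term: discourages catastrophic forgetting by penalizing
            # drift from the reference model.
            prox = sum(((p - r) ** 2).sum()
                       for p, r in zip(model.parameters(), ref_params))
            (task_loss + 0.5 * mu * prox).backward()
            opt.step()
    return {k: v.detach().cpu() for k, v in model.state_dict().items()}
```

In a full round, each client would return its updated state dict and the server would aggregate them into the next reference model; those aggregation details are left out here.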
Related papers
- Intention-Conditioned Flow Occupancy Models [69.79049994662591]
Large-scale pre-training has fundamentally changed how machine learning research is done today. Applying this same framework to reinforcement learning is appealing because it offers compelling avenues for addressing core challenges in RL. Recent advances in generative AI have provided new tools for modeling highly complex distributions.
arXiv Detail & Related papers (2025-06-10T15:27:46Z)
- Pre-training for Recommendation Unlearning [14.514770044236375]
UnlearnRec is a model-agnostic pre-training paradigm that prepares systems for efficient unlearning operations. Our method delivers exceptional unlearning effectiveness while providing more than 10x speedup compared to retraining approaches.
arXiv Detail & Related papers (2025-05-28T17:57:11Z)
- Not All Clients Are Equal: Personalized Federated Learning on Heterogeneous Multi-Modal Clients [52.14230635007546]
Foundation models have shown remarkable capabilities across diverse multi-modal tasks, but their centralized training raises privacy concerns and induces high transmission costs. To meet the growing demand for personalizing AI models for different user purposes, personalized federated learning (PFL) has emerged. PFL allows each client to leverage the knowledge of other clients for further adaptation to individual user preferences, again without the need to share data.
arXiv Detail & Related papers (2025-05-20T09:17:07Z)
- Drama: Mamba-Enabled Model-Based Reinforcement Learning Is Sample and Parameter Efficient [9.519619751861333]
We propose a state space model (SSM)-based world model, Drama, that achieves $O(n)$ memory and computational complexity. We also introduce a novel sampling method to mitigate the suboptimality caused by an incorrect world model in the early training stages. Drama is accessible and trainable on off-the-shelf hardware, such as a standard laptop.
arXiv Detail & Related papers (2024-10-11T15:10:40Z)
- Getting More Juice Out of the SFT Data: Reward Learning from Human Demonstration Improves SFT for LLM Alignment [65.15914284008973]
We propose to leverage an Inverse Reinforcement Learning (IRL) technique to simultaneously build a reward model and a policy model.
We show that the proposed algorithms converge to the stationary solutions of the IRL problem.
Our results indicate that it is beneficial to leverage reward learning throughout the entire alignment process.
arXiv Detail & Related papers (2024-05-28T07:11:05Z)
- CoDream: Exchanging dreams instead of models for federated aggregation with heterogeneous models [8.85591781936764]
We present a novel framework called CoDream, where clients collaboratively optimize randomly initialized data.
Our key insight is that jointly optimizing this data can effectively capture the properties of the global data distribution.
We empirically validate CoDream on standard FL tasks, demonstrating competitive performance despite not sharing model parameters.
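For intuition only, the sketch below shows one way clients might refine a shared batch of synthetic data ("dreams") with their local models while the server averages the data updates; the confidence-style objective and the averaging rule are illustrative assumptions, not CoDream's exact procedure.

```python
# Hypothetical sketch: clients refine shared synthetic inputs with their
# local models; the server averages the refined batches. The entropy
# objective below is an illustrative assumption, not CoDream's method.
import torch
import torch.nn.functional as F

def client_dream_step(local_model, dreams, lr=0.1):
    dreams = dreams.clone().requires_grad_(True)
    probs = F.softmax(local_model(dreams), dim=-1)
    # Encourage inputs that the local model classifies confidently.
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()
    entropy.backward()
    return (dreams - lr * dreams.grad).detach()

def server_aggregate(dream_updates):
    # Average the clients' refined synthetic batches.
    return torch.stack(dream_updates).mean(dim=0)
```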
arXiv Detail & Related papers (2024-02-25T03:07:32Z)
- Fairer and More Accurate Tabular Models Through NAS [14.147928131445852]
We propose using multi-objective Neural Architecture Search (NAS) and Hyperparameter Optimization (HPO), in the first application of these techniques to the very challenging domain of tabular data.
We show that models optimized solely for accuracy with NAS often fail to inherently address fairness concerns.
We produce architectures that consistently dominate state-of-the-art bias mitigation methods in fairness, accuracy, or both.
arXiv Detail & Related papers (2023-10-18T17:56:24Z)
- Dataless Knowledge Fusion by Merging Weights of Language Models [47.432215933099016]
Fine-tuning pre-trained language models has become the prevalent paradigm for building downstream NLP models. This creates a barrier to fusing knowledge across individual models to yield a better single model. We propose a dataless knowledge fusion method that merges models in their parameter space.
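The simplest form of merging in parameter space is a weighted average of parameter tensors; the sketch below shows only this baseline for intuition, since the paper's actual dataless fusion method is more refined than plain averaging.

```python
# Illustrative baseline only: merge models by averaging their parameters
# with per-model weights. The paper's method is more sophisticated.
import torch

def merge_state_dicts(state_dicts, weights=None):
    """Average a list of state dicts (all from the same architecture)."""
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(w * sd[key].float() for w, sd in zip(weights, state_dicts))
    return merged
```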
arXiv Detail & Related papers (2022-12-19T20:46:43Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
- Approximate Bayesian Optimisation for Neural Networks [6.921210544516486]
A body of work has sought to automate machine learning algorithms, highlighting the importance of model choice. Analytical tractability and computational feasibility must be addressed in a principled fashion to ensure both efficiency and applicability.
arXiv Detail & Related papers (2021-08-27T19:03:32Z)
- Conservative Objective Models for Effective Offline Model-Based Optimization [78.19085445065845]
Computational design problems arise in a number of settings, from synthetic biology to computer architectures.
We propose a method that learns a model of the objective function that lower bounds the actual value of the ground-truth objective on out-of-distribution inputs.
COMs are simple to implement and outperform a number of existing methods on a wide range of MBO problems.
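As a rough, assumed illustration of "lower-bounding the objective on out-of-distribution inputs", the sketch below fits a regressor to the offline dataset while pushing its predictions down on inputs found by gradient ascent on the model itself; the specific penalty and ascent procedure are simplifications, not the paper's exact training rule.

```python
# Hypothetical sketch of a conservative training step: fit the data and
# penalize high predictions on adversarially found (off-distribution)
# inputs. Details are simplified assumptions, not the paper's exact rule.
import torch
import torch.nn.functional as F

def conservative_step(model, opt, x, y, alpha=0.1, ascent_lr=0.05, ascent_steps=5):
    # Find inputs where the current model predicts optimistically high values.
    x_adv = x.clone().requires_grad_(True)
    for _ in range(ascent_steps):
        score = model(x_adv).sum()
        grad, = torch.autograd.grad(score, x_adv)
        x_adv = (x_adv + ascent_lr * grad).detach().requires_grad_(True)

    opt.zero_grad()
    fit_loss = F.mse_loss(model(x), y)
    # Conservatism: push predictions down on the off-distribution points.
    penalty = model(x_adv.detach()).mean()
    (fit_loss + alpha * penalty).backward()
    opt.step()
```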
arXiv Detail & Related papers (2021-07-14T17:55:28Z)
- SCEI: A Smart-Contract Driven Edge Intelligence Framework for IoT Systems [15.796325306292134]
Federated learning (FL) enables collaborative training of a shared model on edge devices while maintaining data privacy.
Various personalized approaches have been proposed, but such approaches fail to handle underlying shifts in data distribution.
This paper presents a dynamically optimized personal deep learning scheme based on blockchain and federated learning.
arXiv Detail & Related papers (2021-03-12T02:57:05Z)
- Bayesian Optimization for Selecting Efficient Machine Learning Models [53.202224677485525]
We present a unified Bayesian Optimization framework for jointly optimizing models for both prediction effectiveness and training efficiency.
Experiments on model selection for recommendation tasks indicate that models selected this way significantly improve model training efficiency.
arXiv Detail & Related papers (2020-08-02T02:56:30Z)
- Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
arXiv Detail & Related papers (2019-10-12T22:07:15Z)
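A minimal sketch of the layer-wise fusion idea, under simplifying assumptions: align the neurons of one layer of model B to model A with a hard assignment (Hungarian matching on weight rows) before averaging. The paper itself uses optimal transport; the permutation-only matching here is an assumed simplification, and a full implementation would also permute the next layer's input weights accordingly.

```python
# Simplified illustration: permutation-align one fully connected layer of
# model B to model A before averaging. The paper uses optimal transport;
# hard Hungarian matching is a simplifying assumption here.
import numpy as np
from scipy.optimize import linear_sum_assignment

def fuse_layer(weight_a, weight_b):
    """weight_a, weight_b: (out_dim, in_dim) arrays for the same layer."""
    # Cost of matching neuron i of A with neuron j of B: distance between rows.
    cost = np.linalg.norm(weight_a[:, None, :] - weight_b[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    aligned_b = weight_b[cols]           # reorder B's neurons to match A's
    return 0.5 * (weight_a + aligned_b)  # layer-wise average after alignment
```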
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.