CoPEFT: Fast Adaptation Framework for Multi-Agent Collaborative Perception with Parameter-Efficient Fine-Tuning
- URL: http://arxiv.org/abs/2502.10705v1
- Date: Sat, 15 Feb 2025 07:33:33 GMT
- Title: CoPEFT: Fast Adaptation Framework for Multi-Agent Collaborative Perception with Parameter-Efficient Fine-Tuning
- Authors: Quanmin Wei, Penglin Dai, Wei Li, Bingyi Liu, Xiao Wu
- Abstract summary: Training a robust collaborative perception model requires collecting sufficient training data that covers all possible collaboration scenarios. Existing methods, such as domain adaptation, mitigate this issue by exposing the deployment data during the training stage, but incur a high training cost. We propose a lightweight framework, CoPEFT, for adapting a trained collaborative perception model to new deployment environments under low-cost conditions.
- Score: 9.161215048625172
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-agent collaborative perception is expected to significantly improve perception performance by overcoming the limitations of single-agent perception through the exchange of complementary information. However, training a robust collaborative perception model requires collecting sufficient training data to cover all possible collaboration scenarios, which is impractical due to intolerable deployment costs. Hence, the trained model is not robust to new traffic scenarios with inconsistent data distributions, which fundamentally restricts its real-world applicability. Existing methods, such as domain adaptation, mitigate this issue by exposing the deployment data during the training stage, but they incur a high training cost, which is infeasible for resource-constrained agents. In this paper, we propose a Parameter-Efficient Fine-Tuning-based lightweight framework, CoPEFT, for quickly adapting a trained collaborative perception model to new deployment environments under low-cost conditions. CoPEFT develops a Collaboration Adapter and an Agent Prompt to perform macro-level and micro-level adaptations, respectively. Specifically, the Collaboration Adapter utilizes the inherent knowledge from the training data and limited deployment data to adapt the feature map to the new data distribution. The Agent Prompt further enhances the Collaboration Adapter by inserting fine-grained contextual information about the environment. Extensive experiments demonstrate that our CoPEFT surpasses existing methods with less than 1% trainable parameters, proving the effectiveness and efficiency of our proposed method.
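The abstract names the two PEFT components but not their concrete form. As one way to picture them, here is a minimal PyTorch sketch, assuming a bottleneck convolutional adapter on the fused BEV feature map (macro-level) and an additive learnable prompt per agent (micro-level). All module names, shapes, and the zero-initialized residual are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class CollaborationAdapter(nn.Module):
    """Bottleneck adapter for macro-level adaptation of the fused feature map."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        hidden = max(channels // reduction, 4)
        self.down = nn.Conv2d(channels, hidden, kernel_size=1)
        self.act = nn.ReLU(inplace=True)
        self.up = nn.Conv2d(hidden, channels, kernel_size=1)
        nn.init.zeros_(self.up.weight)  # start as an identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))  # residual bottleneck


class AgentPrompt(nn.Module):
    """Learnable prompt added to each agent's feature map (micro-level)."""

    def __init__(self, channels: int, height: int, width: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.zeros(1, channels, height, width))

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        return feat + self.prompt


# Freeze the trained perception model; train only the adapter and prompt.
backbone = nn.Conv2d(64, 64, 3, padding=1)  # stand-in for the trained model
for p in backbone.parameters():
    p.requires_grad_(False)

adapter, prompt = CollaborationAdapter(64), AgentPrompt(64, 32, 32)
feat = torch.randn(2, 64, 32, 32)           # per-agent BEV features
out = adapter(backbone(prompt(feat)))       # prompt -> fuse -> adapt

peft_params = sum(p.numel() for m in (adapter, prompt) for p in m.parameters())
```

Because only the adapter and prompt carry gradients, the trainable fraction stays tiny relative to the frozen backbone, consistent with the sub-1% figure reported above.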
Related papers
- Optimal Transport-Guided Source-Free Adaptation for Face Anti-Spoofing [58.56017169759816]
We introduce a novel method in which the face anti-spoofing model can be adapted by the client itself to a target domain at test time.
Specifically, we develop a prototype-based base model and an optimal transport-guided adaptor.
In cross-domain and cross-attack settings, compared with recent methods, our method achieves average relative improvements of 19.17% in HTER and 8.58% in AUC.
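The optimal transport-guided adaptor is not specified in this summary; the sketch below shows what OT-guided matching between target-domain features and class prototypes could look like, using entropic (Sinkhorn) OT. The prototype count, cost function, and pseudo-labeling step are assumptions for illustration.

```python
import torch
import torch.nn.functional as F


def sinkhorn(cost: torch.Tensor, eps: float = 0.05, iters: int = 100) -> torch.Tensor:
    """Entropic-regularized optimal transport plan between uniform marginals."""
    n, m = cost.shape
    K = torch.exp(-cost / eps)                 # Gibbs kernel
    a, b = torch.full((n,), 1.0 / n), torch.full((m,), 1.0 / m)
    v = torch.full((m,), 1.0 / m)
    for _ in range(iters):
        u = a / (K @ v)                        # scale rows to marginal a
        v = b / (K.T @ u)                      # scale columns to marginal b
    return u[:, None] * K * v[None, :]         # transport plan, sums to 1


feats = torch.randn(32, 128)                   # target-domain embeddings
protos = torch.randn(2, 128)                   # e.g. live / spoof prototypes
cost = 1 - F.cosine_similarity(feats[:, None, :], protos[None, :, :], dim=-1)
plan = sinkhorn(cost)                          # (32, 2) soft matching
pseudo_labels = plan.argmax(dim=1)             # targets for self-adaptation
```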
arXiv Detail & Related papers (2025-03-29T06:10:34Z)
- CoSDH: Communication-Efficient Collaborative Perception via Supply-Demand Awareness and Intermediate-Late Hybridization [23.958663737034318]
We propose a novel communication-efficient collaborative perception framework based on supply-demand awareness and intermediate-late hybridization.
Experiments on multiple datasets, including both simulated and real-world scenarios, demonstrate that CoSDH achieves state-of-the-art detection accuracy and optimal bandwidth trade-offs.
arXiv Detail & Related papers (2025-03-05T12:02:04Z)
- Probabilistic Federated Prompt-Tuning with Non-IID and Imbalanced Data [35.47385526394076]
Fine-tuning pre-trained models is a popular approach in machine learning for solving complex tasks with moderate data.
Fine-tuning the entire pre-trained model is ineffective in federated data scenarios where local data distributions are diversely skewed.
Our approach transforms federated learning into a distributed set modeling task, aggregating diverse sets of prompts to globally fine-tune the pre-trained model.
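The probabilistic set-modeling aggregation itself is not detailed in this summary. As a crude stand-in, the sketch below pools every client's prompt vectors and compresses them into k global prompts with a few k-means steps; the set sizes and the k-means reduction are assumptions.

```python
import torch


def aggregate_prompt_sets(client_prompts: list[torch.Tensor], k: int) -> torch.Tensor:
    """Pool all client prompt vectors, then reduce them to k global prompts
    via a few k-means steps (a stand-in for probabilistic set modeling)."""
    pool = torch.cat(client_prompts, dim=0)            # (N, d)
    centers = pool[torch.randperm(pool.size(0))[:k]]   # init from the pool
    for _ in range(10):
        assign = torch.cdist(pool, centers).argmin(dim=1)
        for j in range(k):
            members = pool[assign == j]
            if len(members) > 0:
                centers[j] = members.mean(dim=0)
    return centers


clients = [torch.randn(5, 16) for _ in range(8)]       # per-client prompt sets
global_prompts = aggregate_prompt_sets(clients, k=5)   # (5, 16)
```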
arXiv Detail & Related papers (2025-02-27T04:31:34Z)
- PYRA: Parallel Yielding Re-Activation for Training-Inference Efficient Task Adaptation [61.57833648734164]
We propose a novel Parallel Yielding Re-Activation (PYRA) method for training-inference efficient task adaptation.
PYRA outperforms all competing methods under both low compression rate and high compression rate.
arXiv Detail & Related papers (2024-03-14T09:06:49Z)
- Federated Meta-Learning for Few-Shot Fault Diagnosis with Representation Encoding [21.76802204235636]
We propose representation encoding-based federated meta-learning (REFML) for few-shot fault diagnosis.
REFML harnesses the inherent heterogeneity among training clients, effectively transforming it into an advantage for out-of-distribution generalization.
It achieves an increase in accuracy by 2.17%-6.50% when tested on unseen working conditions of the same equipment type and 13.44%-18.33% when tested on totally unseen equipment types.
arXiv Detail & Related papers (2023-10-13T10:48:28Z)
- SemiSFL: Split Federated Learning on Unlabeled and Non-IID Data [34.49090830845118]
Federated Learning (FL) has emerged to allow multiple clients to collaboratively train machine learning models on their private data at the network edge.
We propose a novel Semi-supervised SFL system, termed SemiSFL, which incorporates clustering regularization to perform SFL with unlabeled and non-IID client data.
Our system provides a 3.8x speed-up in training time, reduces the communication cost by about 70.3% while reaching the target accuracy, and achieves up to 5.8% improvement in accuracy under non-IID scenarios.
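The clustering regularization is not described beyond its name here. Below is one plausible form, assumed purely for illustration: confident pseudo-labeled samples are pulled toward their pseudo-class centroid in feature space.

```python
import torch
import torch.nn.functional as F


def clustering_regularizer(feats: torch.Tensor, logits: torch.Tensor,
                           threshold: float = 0.9) -> torch.Tensor:
    """Pull confident unlabeled samples toward the centroid of their
    pseudo-class (one plausible clustering regularization)."""
    probs = logits.softmax(dim=1)
    conf, pseudo = probs.max(dim=1)
    mask = conf > threshold                       # keep confident samples only
    loss = feats.new_zeros(())
    for c in pseudo[mask].unique():
        members = feats[mask & (pseudo == c)]
        centroid = members.mean(dim=0).detach()   # stop-grad on the target
        loss = loss + F.mse_loss(members, centroid.expand_as(members))
    return loss


feats, logits = torch.randn(64, 32, requires_grad=True), torch.randn(64, 10)
reg = clustering_regularizer(feats, logits)       # added to the task loss
```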
arXiv Detail & Related papers (2023-07-29T02:35:37Z)
- Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM has the additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
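As a sketch of the GMM idea, the snippet below fits a mixture to one client's inputs with scikit-learn and mixes hypothetical per-component heads by the responsibilities; the number of components and the mixing rule are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X_client = rng.normal(size=(200, 8))                 # one client's inputs

# Fit component responsibilities to the client's input distribution.
gmm = GaussianMixture(n_components=3, random_state=0).fit(X_client)
resp = gmm.predict_proba(X_client)                   # (200, 3)

# Hypothetical per-component linear heads, mixed by responsibility.
heads = [rng.normal(size=(8, 5)) for _ in range(3)]
logits = sum(resp[:, k:k + 1] * (X_client @ heads[k]) for k in range(3))
```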
arXiv Detail & Related papers (2023-05-01T20:04:46Z)
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
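Analog over-the-air computation exploits the superposition property of the wireless channel: simultaneous transmissions arrive as their sum, so aggregation happens in a single channel use. A toy simulation, with an assumed additive-noise channel:

```python
import torch


def over_the_air_aggregate(updates: list[torch.Tensor],
                           noise_std: float = 0.01) -> torch.Tensor:
    """Simulate analog over-the-air aggregation: all clients transmit at
    once, the channel superimposes (sums) their signals, and the server
    receives the noisy sum in one channel use, then averages."""
    superposed = torch.stack(updates).sum(dim=0)     # channel adds signals
    received = superposed + noise_std * torch.randn_like(superposed)
    return received / len(updates)                   # averaged global update


grads = [torch.randn(1000) for _ in range(10)]       # per-client updates
global_update = over_the_air_aggregate(grads)
```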
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
- Exploring Parameter-Efficient Fine-Tuning to Enable Foundation Models in Federated Learning [12.839398408791778]
Federated learning (FL) has emerged as a promising paradigm for enabling the collaborative training of models without centralized access to the raw data on local devices. Recent state-of-the-art pre-trained models, known as "Foundation Models", are becoming more capable but also have more parameters. Can we enable these strong, readily available pre-trained models in FL to achieve excellent performance while simultaneously reducing the communication burden? Specifically, we systematically evaluate the performance of FedPEFT across a variety of client stability, data distribution, and differential privacy settings.
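The FedPEFT idea can be pictured as standard FedAvg restricted to the small PEFT parameter set. The sketch below, with hypothetical adapter tensor names, averages only those tensors while the frozen foundation-model weights never leave the devices.

```python
import torch


def fed_avg_peft(client_states: list[dict[str, torch.Tensor]]) -> dict[str, torch.Tensor]:
    """Average only the (small) PEFT parameter dicts across clients."""
    keys = client_states[0].keys()
    return {k: torch.stack([s[k] for s in client_states]).mean(dim=0)
            for k in keys}


# Each client uploads only its adapter tensors (a few KB, not the full model).
clients = [{"adapter.down": torch.randn(16, 768),
            "adapter.up": torch.randn(768, 16)} for _ in range(4)]
global_peft = fed_avg_peft(clients)
```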
arXiv Detail & Related papers (2022-10-04T16:08:54Z)
- Contextual Squeeze-and-Excitation for Efficient Few-Shot Image Classification [57.36281142038042]
We present a new adaptive block called Contextual Squeeze-and-Excitation (CaSE) that adjusts a pretrained neural network on a new task to significantly improve performance.
We also present a new training protocol based on Coordinate-Descent called UpperCaSE that exploits meta-trained CaSE blocks and fine-tuning routines for efficient adaptation.
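The exact CaSE block is defined in the paper; the sketch below captures the stated idea under assumptions: a squeeze-and-excitation gate whose squeeze statistics come from a whole context (support) set rather than the single input being classified.

```python
import torch
import torch.nn as nn


class ContextualSE(nn.Module):
    """SE-style gate computed from a context set (an assumed CaSE-like block)."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, query: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        squeezed = context.mean(dim=(0, 2, 3))   # squeeze over set, H, and W
        gate = self.fc(squeezed)                 # (C,) channel gates
        return query * gate.view(1, -1, 1, 1)


case = ContextualSE(channels=64)
support = torch.randn(25, 64, 8, 8)              # few-shot support features
queries = torch.randn(10, 64, 8, 8)
out = case(queries, support)                     # task-conditioned features
```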
arXiv Detail & Related papers (2022-06-20T15:25:08Z)
- AdaptCL: Efficient Collaborative Learning with Dynamic and Adaptive Pruning [16.785573286753742]
We propose a novel and efficient collaborative learning framework named AdaptCL.
By equipping all workers (data holders) with capability-adapted pruned models, each achieves approximately the same update time as the fastest worker.
AdaptCL achieves time savings of more than 41% on average and improves accuracy in low-heterogeneity environments.
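One way to realize capability-adapted pruning, assumed here for illustration: choose each worker's pruning ratio so its per-step cost matches the fastest worker's, then apply magnitude pruning at that ratio.

```python
import torch


def capability_pruning_ratio(worker_speed: float, fastest_speed: float) -> float:
    """Prune enough weights that a slow worker's per-step cost matches the
    fastest worker's (assuming cost scales with remaining parameters)."""
    return max(0.0, 1.0 - worker_speed / fastest_speed)


def magnitude_prune(weights: torch.Tensor, ratio: float) -> torch.Tensor:
    """Zero out the smallest-magnitude fraction `ratio` of the weights."""
    if ratio <= 0:
        return weights.clone()
    k = max(1, int(weights.numel() * ratio))
    threshold = weights.abs().flatten().kthvalue(k).values
    return torch.where(weights.abs() > threshold, weights,
                       torch.zeros_like(weights))


w = torch.randn(256, 256)
ratio = capability_pruning_ratio(worker_speed=0.6, fastest_speed=1.0)  # 40%
w_pruned = magnitude_prune(w, ratio)
```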
arXiv Detail & Related papers (2021-06-27T02:41:19Z)
- Learning Diverse Representations for Fast Adaptation to Distribution Shift [78.83747601814669]
We present a method for learning multiple models, incorporating an objective that pressures each to learn a distinct way to solve the task.
We demonstrate our framework's ability to facilitate rapid adaptation to distribution shift.
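A minimal sketch of the diversity objective, with the penalty form assumed: several heads share a task loss, plus a term penalizing pairwise weight similarity so each head learns a distinct solution.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

heads = nn.ModuleList([nn.Linear(32, 1) for _ in range(3)])
x, y = torch.randn(64, 32), torch.randn(64, 1)

# Every head solves the same task...
task_loss = sum(F.mse_loss(h(x), y) for h in heads)

# ...but pairwise similar heads are penalized, pressuring distinct solutions.
div = 0.0
for i in range(len(heads)):
    for j in range(i + 1, len(heads)):
        wi, wj = heads[i].weight.flatten(), heads[j].weight.flatten()
        div = div + torch.abs(F.cosine_similarity(wi, wj, dim=0))

loss = task_loss + 0.1 * div
loss.backward()
```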
arXiv Detail & Related papers (2020-06-12T12:23:50Z)