Efficient Multi-Task Inferencing: Model Merging with Gromov-Wasserstein Feature Alignment
- URL: http://arxiv.org/abs/2503.09774v1
- Date: Wed, 12 Mar 2025 19:20:33 GMT
- Title: Efficient Multi-Task Inferencing: Model Merging with Gromov-Wasserstein Feature Alignment
- Authors: Luyang Fang, Ehsan Latif, Haoran Lu, Yifan Zhou, Ping Ma, Xiaoming Zhai
- Abstract summary: This paper introduces the Gromov-Wasserstein Scoring Model Merging (GW-SMM) method. It merges models based on feature distribution similarities measured via the Gromov-Wasserstein distance. We validated our approach against human expert knowledge and a GPT-o1-based merging method.
- Score: 7.436562917907035
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic scoring of student responses enhances efficiency in education, but deploying a separate neural network for each task increases storage demands, maintenance efforts, and redundant computations. To address these challenges, this paper introduces the Gromov-Wasserstein Scoring Model Merging (GW-SMM) method, which merges models based on feature distribution similarities measured via the Gromov-Wasserstein distance. Our approach begins by extracting features from student responses using individual models, capturing both item-specific context and unique learned representations. The Gromov-Wasserstein distance then quantifies the similarity between these feature distributions, identifying the most compatible models for merging. Models exhibiting the smallest pairwise distances, typically in pairs or trios, are merged by combining only the shared layers preceding the classification head. This strategy results in a unified feature extractor while preserving separate classification heads for item-specific scoring. We validated our approach against human expert knowledge and a GPT-o1-based merging method. GW-SMM consistently outperformed both, achieving a higher micro F1 score, macro F1 score, exact match accuracy, and per-label accuracy. The improvements in micro F1 and per-label accuracy were statistically significant compared to GPT-o1-based merging (p=0.04, p=0.01). Additionally, GW-SMM reduced storage requirements by half without compromising much accuracy, demonstrating its computational efficiency alongside reliable scoring performance.
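As a rough illustration of the pairwise step the abstract describes, the sketch below scores hypothetical per-item feature matrices with the Gromov-Wasserstein distance via the POT library and picks the most compatible pair. The feature data, item names, and uniform sample weights are placeholder assumptions, not the paper's actual setup.

```python
# Hedged sketch: GW-based compatibility scoring for model merging.
import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)

rng = np.random.default_rng(0)

# Hypothetical feature matrices (n_responses x feature_dim), one per
# item-specific model; item_C is given a different scale so its
# intra-cloud geometry differs from the others.
features = {
    "item_A": rng.normal(size=(200, 64)),
    "item_B": rng.normal(size=(200, 64)),
    "item_C": 3.0 * rng.normal(size=(200, 64)),
}

def gw_distance(X, Y):
    """Gromov-Wasserstein discrepancy between two feature clouds."""
    C1 = ot.dist(X, X)            # intra-domain distance matrix of X
    C2 = ot.dist(Y, Y)            # intra-domain distance matrix of Y
    p, q = ot.unif(len(X)), ot.unif(len(Y))   # uniform sample weights
    return ot.gromov.gromov_wasserstein2(C1, C2, p, q, loss_fun="square_loss")

# Score every model pair; the smallest distance marks the best merge candidates.
names = sorted(features)
scores = {
    (a, b): gw_distance(features[a], features[b])
    for i, a in enumerate(names) for b in names[i + 1:]
}
best = min(scores, key=scores.get)
print({k: round(v, 4) for k, v in scores.items()})
print("most compatible pair:", best)
```

Because Gromov-Wasserstein compares intra-domain geometry rather than raw coordinates, the per-item feature extractors do not need to share an embedding space, which is what makes it a natural compatibility score for merging.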
Related papers
- Robust Markov stability for community detection at a scale learned based on the structure [0.0]
We propose a principled method to select a single robust partition at a suitable scale from the multiple partitions that PyGenStability produces.
Our proposed method combines the Markov stability framework with a pre-trained machine learning model for scale selection.
We show that PyGenStabilityOne (PO) outperforms 25 other algorithms by statistically meaningful margins.
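For context on the quality function being scanned, here is a minimal NumPy/SciPy sketch of continuous-time Markov stability for a fixed partition of a toy graph; the graph, partition, and chosen Markov times are illustrative stand-ins, not PyGenStabilityOne itself.

```python
# Hedged sketch: continuous-time Markov stability of a partition.
import numpy as np
from scipy.linalg import expm

def markov_stability(A, labels, t):
    """r(t, H) = trace(H^T (Pi expm(t(P - I)) - pi pi^T) H)."""
    deg = A.sum(axis=1)
    pi = deg / deg.sum()                  # stationary distribution
    P = A / deg[:, None]                  # random-walk transition matrix
    n, k = len(labels), labels.max() + 1
    H = np.eye(k)[labels]                 # n x k partition indicator matrix
    R = np.diag(pi) @ expm(t * (P - np.eye(n))) - np.outer(pi, pi)
    return float(np.trace(H.T @ R @ H))

# Two triangles joined by one edge: a clear two-community structure.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
labels = np.array([0, 0, 0, 1, 1, 1])

for t in (0.1, 1.0, 10.0):   # longer Markov times favour coarser partitions
    print(f"t={t:5.1f}  stability={markov_stability(A, labels, t):.3f}")
```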
arXiv Detail & Related papers (2025-04-15T21:16:14Z)
- Reinforced Model Merging [53.84354455400038]
We present an innovative framework termed Reinforced Model Merging (RMM), which encompasses an environment and agent tailored for merging tasks.
By utilizing data subsets during the evaluation process, we addressed the bottleneck in the reward feedback phase, thereby accelerating RMM by up to 100 times.
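A loose sketch of that reward-feedback shortcut, with a toy linear "model", a random per-coordinate merge action, and a small evaluation subset standing in for RMM's actual environment, agent, and networks:

```python
# Hedged sketch: cheap subset-based reward for a candidate merge.
import numpy as np

rng = np.random.default_rng(0)

def evaluate(weights, X, y):
    """Toy 'model': a linear scorer; accuracy serves as the reward."""
    return float(((X @ weights > 0).astype(int) == y).mean())

X = rng.normal(size=(10_000, 8))
true_w = rng.normal(size=8)
y = (X @ true_w > 0).astype(int)

# Two imperfect parent models to be merged.
w_a = true_w + 0.5 * rng.normal(size=8)
w_b = true_w + 0.5 * rng.normal(size=8)

# "Action": per-coordinate choice of which parent contributes.
action = rng.integers(0, 2, size=8)
merged = np.where(action == 0, w_a, w_b)

subset = rng.choice(len(X), size=256, replace=False)  # cheap reward signal
print(f"subset reward: {evaluate(merged, X[subset], y[subset]):.3f}")
print(f"full-set acc:  {evaluate(merged, X, y):.3f}")
```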
arXiv Detail & Related papers (2025-03-27T08:52:41Z)
- Harmony in Diversity: Merging Neural Networks with Canonical Correlation Analysis [17.989809995141044]
We propose CCA Merge, which is based on Canonical Correlation Analysis.
We show that CCA Merge performs significantly better than past methods when more than two models are merged.
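A minimal sketch of the alignment idea, assuming we have activations from corresponding layers of two models on the same inputs; only the CCA step is shown, not the full weight-merging procedure the paper builds on top of it:

```python
# Hedged sketch: align two models' features with CCA before merging.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)

# Activations of one layer from two models on the same 500 inputs;
# model B is simulated as a rotated, noisy view of model A's features.
acts_a = rng.normal(size=(500, 32))
rotation = np.linalg.qr(rng.normal(size=(32, 32)))[0]
acts_b = acts_a @ rotation + 0.05 * rng.normal(size=(500, 32))

cca = CCA(n_components=8, max_iter=1000)
za, zb = cca.fit_transform(acts_a, acts_b)

# In the shared CCA space the two models' features line up, so averaging
# aligned parameters no longer mixes unrelated neurons.
corrs = [float(np.corrcoef(za[:, k], zb[:, k])[0, 1]) for k in range(8)]
print("canonical correlations:", np.round(corrs, 3))
```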
arXiv Detail & Related papers (2024-07-07T14:21:04Z)
- ML-based identification of the interface regions for coupling local and nonlocal models [0.0]
Local-nonlocal coupling approaches combine the computational efficiency of local models and the accuracy of nonlocal models.
This study introduces a machine learning-based approach to automatically detect the regions in which the local and nonlocal models should be used.
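Schematically, the detection step might look like the toy sketch below, where a classifier flags the cells of a synthetic 1-D field whose sharp gradients call for the nonlocal model; the field, features, and labels are invented for illustration, not the paper's coupling setup.

```python
# Hedged sketch: classify which grid cells need the nonlocal model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

x = np.linspace(0, 1, 400)
u = np.tanh(40 * (x - 0.5)) + 0.02 * rng.normal(size=x.size)  # sharp front

grad = np.abs(np.gradient(u, x))                   # local feature 1
curv = np.abs(np.gradient(np.gradient(u, x), x))   # local feature 2
X = np.column_stack([grad, curv])

# Ground-truth "needs nonlocal model" labels: cells near the steep front.
y = (np.abs(x - 0.5) < 0.05).astype(int)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
region = clf.predict(X)
print(f"nonlocal cells flagged: {region.sum()} / {region.size}")
```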
arXiv Detail & Related papers (2024-04-23T14:19:36Z)
- DiTMoS: Delving into Diverse Tiny-Model Selection on Microcontrollers [34.282971510732736]
We introduce DiTMoS, a novel DNN training and inference framework with a selector-classifiers architecture.
A composition of weak models can exhibit high diversity, and their union can significantly boost the accuracy upper bound.
We deploy DiTMoS on the Nucleo STM32F767ZI board and evaluate it on three time-series datasets for human activity recognition, keyword spotting, and emotion recognition.
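The diversity claim in this entry is easy to check numerically: under an oracle selector, the union of several weak classifiers is correct whenever any single member is, so partly disjoint error sets lift the attainable accuracy well above any individual model. A toy simulation with random error sets (not DiTMoS itself):

```python
# Hedged sketch: union accuracy upper bound of diverse weak models.
import numpy as np

rng = np.random.default_rng(0)
n_models, n_samples = 3, 5000

# Each weak model is right on ~60% of samples, with independent errors.
correct = rng.random((n_models, n_samples)) < 0.60

individual = correct.mean(axis=1)
union_upper_bound = correct.any(axis=0).mean()   # perfect selector

print("per-model accuracy:", np.round(individual, 3))
print(f"union upper bound:  {union_upper_bound:.3f}")
```

A trained selector network, as DiTMoS proposes, then tries to approach this bound by routing each input to a classifier likely to handle it correctly.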
arXiv Detail & Related papers (2024-03-14T02:11:38Z)
- AdaMerging: Adaptive Model Merging for Multi-Task Learning [68.75885518081357]
This paper introduces an innovative technique called Adaptive Model Merging (AdaMerging).
It aims to autonomously learn the coefficients for model merging, either in a task-wise or layer-wise manner, without relying on the original training data.
Compared to the current state-of-the-art task arithmetic merging scheme, AdaMerging showcases a remarkable 11% improvement in performance.
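A condensed PyTorch sketch of the coefficient-learning idea, assuming an entropy-minimization objective on unlabeled samples as the training signal; the linear model, random task vectors, and task-wise (rather than layer-wise) coefficients are toy simplifications, not AdaMerging's implementation:

```python
# Hedged sketch: learn merging coefficients without original training data.
import torch

torch.manual_seed(0)
dim, n_cls = 16, 4

base = torch.randn(dim, n_cls)                      # pretrained weights
task_vectors = [torch.randn(dim, n_cls) * 0.1 for _ in range(3)]
lams = torch.full((3,), 0.3, requires_grad=True)    # merging coefficients

x_unlabeled = torch.randn(256, dim)                 # unlabeled test samples
opt = torch.optim.Adam([lams], lr=1e-2)

for step in range(100):
    # Merge: base model plus coefficient-weighted task vectors.
    W = base + sum(l * tv for l, tv in zip(lams, task_vectors))
    probs = torch.softmax(x_unlabeled @ W, dim=-1)
    # Entropy of predictions is the unsupervised training signal.
    entropy = -(probs * probs.clamp_min(1e-9).log()).sum(-1).mean()
    opt.zero_grad()
    entropy.backward()
    opt.step()

print("learned coefficients:", [round(float(l), 3) for l in lams])
```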
arXiv Detail & Related papers (2023-10-04T04:26:33Z)
- Cramer Type Distances for Learning Gaussian Mixture Models by Gradient Descent [0.0]
As of today, few known algorithms can fit or learn Gaussian mixture models.
We propose a distance function called Sliced Cramér 2-distance for learning general multivariate GMMs.
These features are especially useful for distributional reinforcement learning and Deep Q Networks.
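A minimal NumPy reading of a sliced Cramér 2-distance, assuming the estimator reduces to averaging, over random 1-D projections, the squared L2 distance between empirical CDFs; details may differ from the paper's exact construction:

```python
# Hedged sketch: sliced Cramer 2-distance between two sample sets.
import numpy as np

rng = np.random.default_rng(0)

def cramer2_1d(x, y):
    """Squared L2 distance between empirical CDFs of 1-D samples."""
    grid = np.sort(np.concatenate([x, y]))
    Fx = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    Fy = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    # CDFs are piecewise constant between grid points.
    return float(np.sum((Fx[:-1] - Fy[:-1]) ** 2 * np.diff(grid)))

def sliced_cramer2(X, Y, n_proj=100):
    total = 0.0
    for _ in range(n_proj):
        theta = rng.normal(size=X.shape[1])
        theta /= np.linalg.norm(theta)       # random unit direction
        total += cramer2_1d(X @ theta, Y @ theta)
    return total / n_proj

# Samples from two 2-D Gaussian mixtures; the distance shrinks as the
# model mixture approaches the data mixture.
data = np.concatenate([rng.normal(-2, 1, (500, 2)), rng.normal(2, 1, (500, 2))])
model = np.concatenate([rng.normal(-2, 1, (500, 2)), rng.normal(1, 1, (500, 2))])
print(f"sliced Cramer^2: {sliced_cramer2(data, model):.4f}")
print(f"self-distance:   {sliced_cramer2(data, data.copy()):.4f}")
```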
arXiv Detail & Related papers (2023-07-13T13:43:02Z)
- Overlap-guided Gaussian Mixture Models for Point Cloud Registration [61.250516170418784]
Probabilistic 3D point cloud registration methods have shown competitive performance in overcoming noise, outliers, and density variations.
This paper proposes a novel overlap-guided probabilistic registration approach that computes the optimal transformation from matched Gaussian Mixture Model (GMM) parameters.
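The closed-form transform from matched GMM parameters is, in essence, a weighted Procrustes/Kabsch solve over component means. A sketch, assuming the overlap-guided matching has already produced paired means and weights:

```python
# Hedged sketch: rigid transform from matched GMM means and weights.
import numpy as np

rng = np.random.default_rng(0)

def weighted_kabsch(src, dst, w):
    """R, t minimizing sum_j w_j ||R src_j + t - dst_j||^2."""
    w = w / w.sum()
    mu_s, mu_d = w @ src, w @ dst                # weighted centroids
    H = (src - mu_s).T @ np.diag(w) @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, mu_d - R @ mu_s

# Matched component means under a known ground-truth transform.
means_src = rng.normal(size=(5, 3))
angle = 0.4
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
t_true = np.array([1.0, -2.0, 0.5])
means_dst = means_src @ R_true.T + t_true
weights = rng.random(5)                          # GMM mixing weights

R, t = weighted_kabsch(means_src, means_dst, weights)
print("rotation error:   ", np.abs(R - R_true).max())
print("translation error:", np.abs(t - t_true).max())
```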
arXiv Detail & Related papers (2022-10-17T08:02:33Z)
- Coded Stochastic ADMM for Decentralized Consensus Optimization with Edge Computing [113.52575069030192]
Big data, including data from applications with high security requirements, is often collected and stored across multiple heterogeneous devices, such as mobile devices, drones, and vehicles.
Due to the limitations of communication costs and security requirements, it is of paramount importance to extract information in a decentralized manner instead of aggregating data to a fusion center.
We consider the problem of learning model parameters in a multi-agent system with data locally processed via distributed edge nodes.
A class of mini-batch alternating direction method of multipliers (ADMM) algorithms is explored to develop the distributed learning model.
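For concreteness, here is a compact sketch of plain consensus ADMM for a distributed least-squares fit, where nodes share only the consensus variable and never their raw data; the paper's coding and mini-batch stochastic machinery is omitted, and the data split is synthetic:

```python
# Hedged sketch: consensus ADMM across edge nodes with local data.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_local, dim, rho = 5, 100, 8, 1.0

w_true = rng.normal(size=dim)
A = [rng.normal(size=(n_local, dim)) for _ in range(n_nodes)]
b = [Ai @ w_true + 0.1 * rng.normal(size=n_local) for Ai in A]

x = [np.zeros(dim) for _ in range(n_nodes)]   # local models
u = [np.zeros(dim) for _ in range(n_nodes)]   # scaled dual variables
z = np.zeros(dim)                             # consensus model

for it in range(50):
    for i in range(n_nodes):                  # local least-squares updates
        lhs = A[i].T @ A[i] + rho * np.eye(dim)
        rhs = A[i].T @ b[i] + rho * (z - u[i])
        x[i] = np.linalg.solve(lhs, rhs)
    z = np.mean([x[i] + u[i] for i in range(n_nodes)], axis=0)  # consensus
    for i in range(n_nodes):                  # dual ascent
        u[i] += x[i] - z

print("consensus error vs. ground truth:", np.linalg.norm(z - w_true))
```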
arXiv Detail & Related papers (2020-10-02T10:41:59Z)
- DeepGMR: Learning Latent Gaussian Mixture Models for Registration [113.74060941036664]
Point cloud registration is a fundamental problem in 3D computer vision, graphics and robotics.
In this paper, we introduce Deep Gaussian Mixture Registration (DeepGMR), the first learning-based registration method that explicitly leverages a probabilistic registration paradigm.
Our proposed method shows favorable performance when compared with state-of-the-art geometry-based and learning-based registration methods.
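A small sketch of the latent-GMM step such a method relies on: given soft point-to-component assignments (the correspondences a network would predict), the GMM mixing weights and means follow in closed form. The assignments below are random stand-ins for learned ones:

```python
# Hedged sketch: GMM parameters from soft point-to-component assignments.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_components = 1000, 16

points = rng.normal(size=(n_points, 3))
logits = rng.normal(size=(n_points, n_components))
gamma = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # softmax

pi = gamma.mean(axis=0)                                # mixing weights
mu = (gamma.T @ points) / gamma.sum(axis=0)[:, None]   # component means

print("weights sum to:", round(float(pi.sum()), 6))
print("means shape:", mu.shape)
```

With components matched across the two point clouds, the rigid transform then follows from a weighted Procrustes solve over the (weight, mean) pairs, as sketched for the previous entry.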
arXiv Detail & Related papers (2020-08-20T17:25:16Z)
- AvgOut: A Simple Output-Probability Measure to Eliminate Dull Responses [97.50616524350123]
We build dialogue models that are dynamically aware of what utterances or tokens are dull without any feature-engineering.
The first model, MinAvgOut, directly maximizes the diversity score through the output distributions of each batch.
The second model, Label Fine-Tuning (LFT), prepends to the source sequence a label continuously scaled by the diversity score to control the diversity level.
The third model, RL, adopts Reinforcement Learning and treats the diversity score as a reward signal.
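As a rough sketch of the shared AvgOut measure, the snippet below averages simulated decoder output distributions over a batch and scores responses by how strongly they avoid high-average ("dull") tokens; this is one illustrative reading, not the paper's exact objectives:

```python
# Hedged sketch: AvgOut-style diversity score from output distributions.
import numpy as np

rng = np.random.default_rng(0)
batch, steps, vocab = 32, 10, 100

# Simulated per-step output distributions from a decoder.
logits = rng.normal(size=(batch, steps, vocab))
logits[..., 0] += 3.0                          # token 0 is dull: always likely
probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)

avg_out = probs.mean(axis=(0, 1))              # average output distribution

tokens = probs.argmax(-1)                      # greedy-decoded responses
# Diversity score: low average-output probability of the chosen tokens.
diversity = 1.0 - avg_out[tokens].mean(axis=1)
print("dullest token's avg prob:", round(float(avg_out.max()), 3))
print("mean diversity score:", round(float(diversity.mean()), 3))
```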
arXiv Detail & Related papers (2020-01-15T18:32:06Z)