WeiPS: a symmetric fusion model framework for large-scale online learning
- URL: http://arxiv.org/abs/2011.11983v1
- Date: Tue, 24 Nov 2020 09:25:39 GMT
- Title: WeiPS: a symmetric fusion model framework for large-scale online learning
- Authors: Xiang Yu, Fuping Chu, Junqi Wu, Bo Huang
- Abstract summary: We propose WeiPS, a symmetric fusion online learning framework that integrates model training and model inference.
Specifically, WeiPS achieves second-level (within seconds) model deployment through a streaming update mechanism to satisfy the consistency requirement.
It uses multi-level fault tolerance and real-time domino degradation to meet the high-availability requirement.
- Score: 6.88870384575896
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recommender systems are an important commercial application of machine learning, serving billions of feed views in information streams every day. In practice, interactions between users and items cause user interests to change over time, so many companies (e.g., ByteDance, Baidu, Alibaba, and Weibo) employ online learning to capture user interests quickly. However, models with hundreds of billions of parameters make real-time model deployment challenging, and model stability is another key concern for online learning. To this end, we design and implement WeiPS, a symmetric fusion online learning system framework that integrates model training and model inference. Specifically, WeiPS carries out second-level (within seconds) model deployment through a streaming update mechanism to satisfy the consistency requirement. Moreover, it uses multi-level fault tolerance and real-time domino degradation to meet the high-availability requirement.
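The abstract does not include code, but the streaming update idea it describes can be illustrated. Below is a minimal sketch, assuming sparse, versioned deltas pushed from the trainer to a serving replica; all class and method names here are illustrative assumptions, not WeiPS internals.

```python
import threading
from typing import Dict

# Minimal sketch of a streaming model update, loosely inspired by the
# abstract's "second-level deployment via streaming updates". Names and
# structure are illustrative assumptions, not WeiPS internals.

class InferenceReplica:
    """Serving-side model replica that consumes incremental updates."""

    def __init__(self) -> None:
        self.params: Dict[str, float] = {}
        self.version = 0
        self._lock = threading.Lock()

    def apply_update(self, version: int, delta: Dict[str, float]) -> None:
        # Deltas are sparse (keyed by parameter id), so only the touched
        # entries of a very large model need to move over the wire.
        with self._lock:
            if version <= self.version:
                return  # stale or duplicate update: skip for consistency
            for key, value in delta.items():
                self.params[key] = self.params.get(key, 0.0) + value
            self.version = version

    def lookup(self, key: str) -> float:
        with self._lock:
            return self.params.get(key, 0.0)

class Trainer:
    """Training side: streams each mini-batch's deltas to the replica."""

    def __init__(self, replica: InferenceReplica) -> None:
        self.replica = replica
        self.version = 0

    def step(self, gradients: Dict[str, float], lr: float = 0.01) -> None:
        delta = {k: -lr * g for k, g in gradients.items()}
        self.version += 1
        # A real system would ship this over the network within seconds;
        # here the push is just a method call.
        self.replica.apply_update(self.version, delta)

if __name__ == "__main__":
    replica = InferenceReplica()
    trainer = Trainer(replica)
    trainer.step({"user_42:item_7": 0.5})
    print(replica.version, replica.lookup("user_42:item_7"))
```

Keeping updates sparse and versioned is what makes seconds-level deployment plausible for models with hundreds of billions of parameters: only touched parameters move, and stale deltas are skipped rather than applied out of order.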
Related papers
- MetaTrading: An Immersion-Aware Model Trading Framework for Vehicular Metaverse Services [94.61039892220037]
We present a novel immersion-aware model trading framework that incentivizes metaverse users (MUs) to contribute learning models for augmented reality (AR) services in the vehicular metaverse.
Considering dynamic network conditions and privacy concerns, we formulate the reward decisions of metaverse service providers (MSPs) as a multi-agent Markov decision process.
Experimental results demonstrate that the proposed framework can effectively provide higher-value models for object detection and classification in AR services on real AR-related vehicle datasets.
arXiv Detail & Related papers (2024-10-25T16:20:46Z)
- Do Not Wait: Learning Re-Ranking Model Without User Feedback At Serving Time in E-Commerce [16.316227411757797]
We propose a novel extension of online learning methods for re-ranking modeling, which we term LAST.
It circumvents the requirement of user feedback by using a surrogate model to provide the instructional signal needed to steer model improvement.
LAST can be seamlessly integrated into existing online learning systems to create a more adaptive and responsive recommendation experience.
arXiv Detail & Related papers (2024-06-20T05:15:48Z)
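The LAST entry above describes replacing user feedback with a surrogate model's signal at serving time. A minimal sketch of that general idea follows, assuming a toy linear re-ranker and a hand-written surrogate utility; it illustrates surrogate-guided online updates, not the paper's actual algorithm.

```python
import numpy as np

# Hedged sketch of learning from a surrogate signal instead of user
# feedback, in the spirit of the LAST summary above. The surrogate and
# the update rule are illustrative assumptions, not the paper's method.

rng = np.random.default_rng(0)
w = np.zeros(4)  # toy re-ranker: score = features . w

def surrogate_utility(ranking, feats):
    """Stand-in for a pretrained surrogate that rates ranking quality."""
    # Assumption: earlier positions should hold items with larger feats[:, 0].
    discounts = 1.0 / np.log2(np.arange(2, len(ranking) + 2))
    return float(np.sum(discounts * feats[ranking, 0]))

for _ in range(200):  # each iteration stands in for one serving request
    feats = rng.normal(size=(5, 4))          # 5 candidate items, 4 features
    base = np.argsort(-(feats @ w))          # ranking under current weights
    probe_w = w + rng.normal(scale=0.1, size=4)
    probe = np.argsort(-(feats @ probe_w))   # ranking under perturbed weights
    # Accept the perturbation only if the surrogate prefers its ranking,
    # so the model improves without waiting for real user feedback.
    if surrogate_utility(probe, feats) > surrogate_utility(base, feats):
        w = probe_w

print("learned weights:", np.round(w, 2))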
- Update Selective Parameters: Federated Machine Unlearning Based on Model Explanation [46.86767774669831]
We propose a more effective and efficient federated unlearning scheme based on the concept of model explanation.
We select the most influential channels within an already-trained model for the data that need to be unlearned.
arXiv Detail & Related papers (2024-06-18T11:43:20Z)
- DiffMM: Multi-Modal Diffusion Model for Recommendation [19.43775593283657]
We propose a novel multi-modal graph diffusion model for recommendation called DiffMM.
Our framework integrates a modality-aware graph diffusion model with a cross-modal contrastive learning paradigm to improve modality-aware user representation learning.
arXiv Detail & Related papers (2024-06-17T17:35:54Z)
- Tunable Soft Prompts are Messengers in Federated Learning [55.924749085481544]
Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources.
The lack of model privacy protection in FL has become a challenge that cannot be ignored.
We propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts.
arXiv Detail & Related papers (2023-11-12T11:01:10Z)
- OneNet: Enhancing Time Series Forecasting Models under Concept Drift by Online Ensembling [65.93805881841119]
We propose Online ensembling Network (OneNet) to address the concept drift problem.
OneNet reduces online forecasting error by more than 50% compared to the State-Of-The-Art (SOTA) method.
arXiv Detail & Related papers (2023-09-22T06:59:14Z)
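The OneNet entry above turns on combining forecasters whose weights adapt under concept drift. A minimal sketch of online ensembling follows, assuming two moving-average branches and an exponentiated-gradient weight update; OneNet's actual architecture and update rule differ.

```python
import numpy as np

# Hedged sketch of online ensembling under concept drift, in the spirit
# of the OneNet summary above. The two branches and the exponentiated-
# gradient update are generic choices, not OneNet's actual mechanism.

rng = np.random.default_rng(1)
weights = np.array([0.5, 0.5])  # ensemble weights over the two branches
eta = 2.0                       # learning rate for the weight update

def branch_a(history):          # fast branch: short-window mean
    return np.mean(history[-3:])

def branch_b(history):          # slow branch: long-window mean
    return np.mean(history[-20:])

series = list(rng.normal(loc=0.0, scale=0.1, size=20))
for t in range(300):
    drifted_mean = 0.0 if t < 150 else 3.0  # abrupt concept drift at t=150
    y = drifted_mean + rng.normal(scale=0.1)
    preds = np.array([branch_a(series), branch_b(series)])
    forecast = float(weights @ preds)
    # Exponentiated-gradient update: branches with lower squared error
    # gain weight, so the ensemble tracks the drifted distribution.
    losses = (preds - y) ** 2
    weights = weights * np.exp(-eta * losses)
    weights = weights / weights.sum()
    series.append(y)

print("final weights:", np.round(weights, 3))
```

Right after the drift, the short-window branch adapts faster and absorbs most of the weight, which is the behavior an online ensemble needs under concept drift.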
- Model-Based Reinforcement Learning with Multi-Task Offline Pretraining [59.82457030180094]
We present a model-based RL method that learns to transfer potentially useful dynamics and action demonstrations from offline data to a novel task.
The main idea is to use the world models not only as simulators for behavior learning but also as tools to measure task relevance.
We demonstrate the advantages of our approach compared with the state-of-the-art methods in Meta-World and DeepMind Control Suite.
arXiv Detail & Related papers (2023-06-06T02:24:41Z)
- Dataless Knowledge Fusion by Merging Weights of Language Models [51.8162883997512]
Fine-tuning pre-trained language models has become the prevalent paradigm for building downstream NLP models.
This creates a barrier to fusing knowledge across individual models to yield a better single model.
We propose a dataless knowledge fusion method that merges models in their parameter space.
arXiv Detail & Related papers (2022-12-19T20:46:43Z)
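The dataless fusion entry above merges models in parameter space. A minimal sketch follows, assuming uniform parameter averaging across same-architecture models as the simplest baseline; the paper's actual merging method weights parameters more carefully than a plain mean.

```python
import torch
import torch.nn as nn

# Hedged sketch of merging models in parameter space, in the spirit of
# the dataless knowledge fusion summary above. Uniform averaging is the
# simplest baseline, not the paper's actual merging rule.

def merge_state_dicts(state_dicts):
    """Uniformly average the parameters of same-architecture models."""
    merged = {}
    for name in state_dicts[0]:
        merged[name] = torch.stack(
            [sd[name].float() for sd in state_dicts]
        ).mean(dim=0)
    return merged

if __name__ == "__main__":
    # Two toy "fine-tuned" models with identical architecture.
    torch.manual_seed(0)
    model_a = nn.Linear(8, 2)
    model_b = nn.Linear(8, 2)
    fused = nn.Linear(8, 2)
    fused.load_state_dict(
        merge_state_dicts([model_a.state_dict(), model_b.state_dict()])
    )
    x = torch.randn(1, 8)
    print(fused(x))  # single merged model, built without any training data
```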
- Incremental Learning for Personalized Recommender Systems [8.020546404087922]
We present an incremental learning solution that provides both training efficiency and model quality.
The solution is deployed in LinkedIn and directly applicable to industrial scale recommender systems.
arXiv Detail & Related papers (2021-08-13T04:21:21Z)
- Lambda Learner: Fast Incremental Learning on Data Streams [5.543723668681475]
We propose a new framework for training models by incremental updates in response to mini-batches from data streams.
We show that the resulting model of our framework closely estimates a periodically updated model trained on offline data and outperforms it when model updates are time-sensitive.
We present a large-scale deployment on the sponsored content platform for a large social network.
arXiv Detail & Related papers (2020-10-11T04:00:34Z)
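The Lambda Learner entry above describes incremental updates in response to streaming mini-batches. A minimal sketch follows, assuming an online logistic regression nudged by one SGD step per mini-batch; the framework's actual models and update rules are more involved.

```python
import numpy as np

# Hedged sketch of incremental training on mini-batches from a data
# stream, in the spirit of the Lambda Learner summary above. The model
# and update rule are illustrative assumptions, not the paper's method.

rng = np.random.default_rng(2)
true_w = np.array([1.5, -2.0, 0.5])  # unknown data-generating weights
w = np.zeros(3)                      # model being updated incrementally
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for batch in range(500):  # each iteration is one mini-batch off the stream
    X = rng.normal(size=(32, 3))
    y = (sigmoid(X @ true_w) > rng.random(32)).astype(float)
    # One SGD step per mini-batch keeps the model fresh without a full
    # offline retrain, trading a little accuracy for low update latency.
    grad = X.T @ (sigmoid(X @ w) - y) / len(y)
    w -= lr * grad

print("estimated weights:", np.round(w, 2))  # should approach true_w
```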