DynInt: Dynamic Interaction Modeling for Large-scale Click-Through Rate
Prediction
- URL: http://arxiv.org/abs/2301.08139v1
- Date: Tue, 3 Jan 2023 13:01:30 GMT
- Title: DynInt: Dynamic Interaction Modeling for Large-scale Click-Through Rate
Prediction
- Authors: YaChen Yan, Liubo Li
- Abstract summary: Learning feature interactions is key to success for large-scale CTR prediction in Ads ranking and recommender systems.
Deep neural network-based models are widely adopted for modeling such problems.
We propose a new model, DynInt, which learns dynamic, data-dependent higher-order interactions.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning feature interactions is key to success for large-scale CTR
prediction in Ads ranking and recommender systems. In industry, deep neural
network-based models are widely adopted for modeling such problems. Researchers
proposed various neural network architectures for searching and modeling the
feature interactions in an end-to-end fashion. However, most methods only learn
static feature interactions and have not fully leveraged deep CTR models'
representation capacity. In this paper, we propose a new model: DynInt. By
extending the Polynomial-Interaction-Network (PIN), which learns higher-order
interactions recursively, to be dynamic and data-dependent, DynInt derives
two modes for modeling dynamic higher-order interactions: dynamic
activation and dynamic parameter. In dynamic activation mode, we adaptively
adjust the strength of learned interactions by instance-aware activation gating
networks. In dynamic parameter mode, we re-parameterize the parameters by
different formulations and dynamically generate the parameters by
instance-aware parameter generation networks. Through instance-aware gating
mechanism and dynamic parameter generation, we enable the PIN to model dynamic
interaction for potential industry applications. We implement the proposed
model and evaluate the model performance on real-world datasets. Extensive
experiment results demonstrate the efficiency and effectiveness of DynInt over
state-of-the-art models.
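The abstract names two mechanisms: an instance-aware gating network that rescales learned interactions (dynamic activation), and a hypernetwork that generates layer parameters per instance (dynamic parameter). The NumPy sketch below illustrates both ideas on a simplified PIN-style recursion, x_l = x_{l-1} ⊙ (x_0 W_l); this recursion, the single-matrix gating network `G`, and the low-rank generator `(U, V)` are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d, L, batch = 8, 2, 4          # embedding dim, interaction layers, batch size (all hypothetical)
x0 = rng.normal(size=(batch, d))  # instance embeddings

# Static PIN-style recursion (simplified assumption): each layer multiplies
# the previous state elementwise with a linear map of x0, raising the
# interaction order by one.
W = [rng.normal(scale=0.1, size=(d, d)) for _ in range(L)]

# Dynamic activation mode: an instance-aware gating network (here a single
# linear map + sigmoid per layer) adaptively scales interaction strength.
G = [rng.normal(scale=0.1, size=(d, d)) for _ in range(L)]

def dynint_dynamic_activation(x0):
    x = x0
    for l in range(L):
        inter = x * (x0 @ W[l])    # candidate higher-order interaction
        gate = sigmoid(x0 @ G[l])  # per-instance, per-dimension gate in (0, 1)
        x = gate * inter           # rescale learned interactions
    return x

# Dynamic parameter mode: re-parameterize W_l and generate it per instance
# from a small hypernetwork; a low-rank factorization keeps this cheap.
r = 3  # hypothetical rank
U = [rng.normal(scale=0.1, size=(d, r)) for _ in range(L)]
V = [rng.normal(scale=0.1, size=(r, d * d)) for _ in range(L)]

def dynint_dynamic_parameter(x0):
    b = x0.shape[0]
    x = x0
    for l in range(L):
        # per-instance weight matrix W_l(x0), shape (b, d, d)
        Wl = ((x0 @ U[l]) @ V[l]).reshape(b, d, d)
        # batched x0 @ W_l(x0), then the elementwise PIN update
        x = x * np.einsum("bd,bde->be", x0, Wl)
    return x

out_a = dynint_dynamic_activation(x0)
out_p = dynint_dynamic_parameter(x0)
```

Both modes keep the recursion's output shape `(batch, d)`; the difference is whether the instance conditions a multiplicative gate on fixed parameters or the parameters themselves.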
Related papers
- ConCerNet: A Contrastive Learning Based Framework for Automated
Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of DNN-based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z) - The Underlying Correlated Dynamics in Neural Training [6.385006149689549]
Training of neural networks is a computationally intensive task.
We propose a model based on the correlation of the parameters' dynamics, which dramatically reduces the dimensionality.
This representation enhances the understanding of the underlying training dynamics and can pave the way for designing better acceleration techniques.
arXiv Detail & Related papers (2022-12-18T08:34:11Z) - Learning Interacting Dynamical Systems with Latent Gaussian Process ODEs [13.436770170612295]
We study for the first time uncertainty-aware modeling of continuous-time dynamics of interacting objects.
Our model infers both independent dynamics and their interactions with reliable uncertainty estimates.
arXiv Detail & Related papers (2022-05-24T08:36:25Z) - Gradient-Based Trajectory Optimization With Learned Dynamics [80.41791191022139]
We use machine learning techniques to learn a differentiable dynamics model of the system from data.
We show that a neural network can model highly nonlinear behaviors accurately for large time horizons.
In our hardware experiments, we demonstrate that our learned model can represent complex dynamics for both the Spot robot and a radio-controlled (RC) car.
arXiv Detail & Related papers (2022-04-09T22:07:34Z) - Dynamically-Scaled Deep Canonical Correlation Analysis [77.34726150561087]
Canonical Correlation Analysis (CCA) is a method for feature extraction of two views by finding maximally correlated linear projections of them.
We introduce a novel dynamic scaling method for training an input-dependent canonical correlation model.
arXiv Detail & Related papers (2022-03-23T12:52:49Z) - Dynamic Parameterized Network for CTR Prediction [6.749659219776502]
We proposed a novel plug-in operation, the Dynamic Parameterized Operation (DPO), to learn both explicit and implicit interactions on a per-instance basis.
We showed that introducing DPO into DNN modules and Attention modules can respectively benefit two main tasks in click-through rate (CTR) prediction.
Our Dynamic Parameterized Networks significantly outperform state-of-the-art methods in offline experiments on a public dataset and a real-world production dataset.
arXiv Detail & Related papers (2021-11-09T08:15:03Z) - Sparse Flows: Pruning Continuous-depth Models [107.98191032466544]
We show that pruning improves generalization for neural ODEs in generative modeling.
We also show that pruning finds minimal and efficient neural ODE representations with up to 98% less parameters compared to the original network, without loss of accuracy.
arXiv Detail & Related papers (2021-06-24T01:40:17Z) - TCL: Transformer-based Dynamic Graph Modelling via Contrastive Learning [87.38675639186405]
We propose a novel graph neural network approach, called TCL, which deals with the dynamically-evolving graph in a continuous-time fashion.
To the best of our knowledge, this is the first attempt to apply contrastive learning to representation learning on dynamic graphs.
arXiv Detail & Related papers (2021-05-17T15:33:25Z) - HyperDynamics: Meta-Learning Object and Agent Dynamics with
Hypernetworks [18.892883695539002]
HyperDynamics is a dynamics meta-learning framework that generates parameters of neural dynamics models.
It outperforms existing models that adapt to environment variations by learning dynamics over high dimensional visual observations.
We show our method matches the performance of an ensemble of separately trained experts, while also being able to generalize well to unseen environment variations at test time.
arXiv Detail & Related papers (2021-03-17T04:48:43Z) - Action-Conditional Recurrent Kalman Networks For Forward and Inverse
Dynamics Learning [17.80270555749689]
Estimating accurate forward and inverse dynamics models is a crucial component of model-based control for robots.
We present two architectures for forward model learning and one for inverse model learning.
Both architectures significantly outperform existing model learning frameworks as well as analytical models in terms of prediction performance.
arXiv Detail & Related papers (2020-10-20T11:28:25Z) - Automated and Formal Synthesis of Neural Barrier Certificates for
Dynamical Models [70.70479436076238]
We introduce an automated, formal, counterexample-based approach to synthesise Barrier Certificates (BC).
The approach is underpinned by an inductive framework, which manipulates a candidate BC structured as a neural network, and a sound verifier, which either certifies the candidate's validity or generates counter-examples.
The outcomes show that we can synthesise sound BCs up to two orders of magnitude faster, with a particularly stark speedup on the verification engine.
arXiv Detail & Related papers (2020-07-07T07:39:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.