Improving Multi-Interest Network with Stable Learning
- URL: http://arxiv.org/abs/2207.07910v1
- Date: Thu, 14 Jul 2022 07:49:28 GMT
- Title: Improving Multi-Interest Network with Stable Learning
- Authors: Zhaocheng Liu, Yingtao Luo, Di Zeng, Qiang Liu, Daqing Chang, Dongying
Kong, Zhi Chen
- Abstract summary: We propose a novel multi-interest network, named DEep Stable Multi-Interest Learning (DESMIL).
DESMIL seeks to eliminate the influence of subtle dependencies among captured interests by learning weights for training samples.
We conduct extensive experiments on public recommendation datasets, a large-scale industrial dataset, and synthetic datasets.
- Score: 13.514488368734776
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modeling users' dynamic preferences from historical behaviors lies at the
core of modern recommender systems. Because user interests are diverse, recent
advances propose multi-interest networks that encode historical behaviors into
multiple interest vectors. In real scenarios, the items corresponding to captured
interests are usually retrieved together for exposure and collected into training
data, which produces dependencies among interests. Unfortunately, multi-interest
networks may incorrectly concentrate on these subtle dependencies: misled by
them, a model captures spurious correlations between irrelevant interests and
targets, so predictions become unstable when the training and test distributions
do not match. In this paper, we introduce the widely used Hilbert-Schmidt
Independence Criterion (HSIC) to measure the degree of dependence among captured
interests and empirically show that a continual increase of HSIC during training
may harm model performance. Based on this observation, we propose a novel
multi-interest network, named DEep Stable Multi-Interest Learning (DESMIL),
which seeks to eliminate the influence of subtle dependencies among captured
interests by learning weights for training samples, making the model concentrate
more on the underlying true causation. We conduct extensive experiments on public
recommendation datasets, a large-scale industrial dataset, and synthetic datasets
that simulate out-of-distribution data. Experimental results demonstrate that
DESMIL outperforms state-of-the-art models by a significant margin. We also
conduct a comprehensive model analysis to shed light on why DESMIL works.
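For intuition, the HSIC measure the abstract refers to can be estimated empirically from a mini-batch of captured interest vectors. Below is a minimal, generic sketch of the standard biased HSIC estimator; the Gaussian kernel, the bandwidth `sigma`, and the function names are illustrative choices, not the paper's exact implementation.

```python
import torch


def gaussian_kernel(x: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Pairwise Gaussian (RBF) kernel matrix for a batch of vectors."""
    sq_dists = torch.cdist(x, x, p=2).pow(2)  # (n, n) squared distances
    return torch.exp(-sq_dists / (2.0 * sigma ** 2))


def hsic(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Biased empirical HSIC estimator between two batches of vectors.

    x: (n, d1) one captured interest vector per sample.
    y: (n, d2) another captured interest vector per sample.
    Returns a non-negative scalar; larger values indicate stronger
    dependence, while HSIC near 0 suggests approximate independence.
    """
    n = x.shape[0]
    k = gaussian_kernel(x, sigma)
    l = gaussian_kernel(y, sigma)
    # Centering matrix H = I - (1/n) * 1 1^T
    h = torch.eye(n, device=x.device) - torch.ones(n, n, device=x.device) / n
    return torch.trace(k @ h @ l @ h) / (n - 1) ** 2
```

For a model with K interest heads producing a tensor of shape (batch, K, d), one could evaluate `hsic(interests[:, i, :], interests[:, j, :])` for each pair i < j to monitor dependence among the captured interests during training.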
Related papers
- LLM-assisted Explicit and Implicit Multi-interest Learning Framework for Sequential Recommendation [50.98046887582194]
We propose an explicit and implicit multi-interest learning framework to model user interests on two levels: behavior and semantics.
The proposed EIMF framework effectively and efficiently combines small models with LLMs to improve the accuracy of multi-interest modeling.
arXiv Detail & Related papers (2024-11-14T13:00:23Z) - Most Influential Subset Selection: Challenges, Promises, and Beyond [9.479235005673683]
We study the Most Influential Subset Selection (MISS) problem, which aims to identify a subset of training samples with the greatest collective influence.
We conduct a comprehensive analysis of the prevailing approaches in MISS, elucidating their strengths and weaknesses.
We demonstrate that an adaptive version of these approaches, which applies them iteratively, can effectively capture the interactions among samples.
arXiv Detail & Related papers (2024-09-25T20:00:23Z) - Bayesian Joint Additive Factor Models for Multiview Learning [7.254731344123118]
A motivating application arises in the context of precision medicine where multi-omics data are collected to correlate with clinical outcomes.
We propose a joint additive factor regression model (JAFAR) with a structured additive design, accounting for shared and view-specific components.
Prediction of time-to-labor onset from immunome, metabolome, and proteome data illustrates performance gains against state-of-the-art competitors.
arXiv Detail & Related papers (2024-06-02T15:35:45Z) - Leveraging Diffusion Disentangled Representations to Mitigate Shortcuts
in Underspecified Visual Tasks [92.32670915472099]
We propose an ensemble diversification framework exploiting the generation of synthetic counterfactuals using Diffusion Probabilistic Models (DPMs).
We show that diffusion-guided diversification can lead models to avert attention from shortcut cues, achieving ensemble diversity performance comparable to previous methods requiring additional data collection.
arXiv Detail & Related papers (2023-10-03T17:37:52Z) - Tackling Diverse Minorities in Imbalanced Classification [80.78227787608714]
Imbalanced datasets are commonly observed in various real-world applications, presenting significant challenges in training classifiers.
We propose generating synthetic samples iteratively by mixing data samples from both minority and majority classes.
We demonstrate the effectiveness of our proposed framework through extensive experiments conducted on seven publicly available benchmark datasets.
arXiv Detail & Related papers (2023-08-28T18:48:34Z) - Deep Stable Multi-Interest Learning for Out-of-distribution Sequential
Recommendation [21.35873758251157]
We propose a novel multi-interest network, named DEep Stable Multi-Interest Learning (DESMIL), which attempts to de-correlate the extracted interests in the model.
DESMIL incorporates a weighted correlation estimation loss based on the Hilbert-Schmidt Independence Criterion (HSIC), under which training samples are weighted so as to minimize the correlations among extracted interests (a hedged sketch of such a term appears after this list).
arXiv Detail & Related papers (2023-04-12T05:13:54Z) - Coarse-to-Fine Knowledge-Enhanced Multi-Interest Learning Framework for
Multi-Behavior Recommendation [52.89816309759537]
Multiple types of behaviors (e.g., clicking, adding to cart, purchasing) widely exist in most real-world recommendation scenarios.
The state-of-the-art multi-behavior models learn behavior dependencies indistinguishably with all historical interactions as input.
We propose a novel Coarse-to-fine Knowledge-enhanced Multi-interest Learning framework to learn shared and behavior-specific interests for different behaviors.
arXiv Detail & Related papers (2022-08-03T05:28:14Z) - CHALLENGER: Training with Attribution Maps [63.736435657236505]
We show that utilizing attribution maps for training neural networks can improve regularization of models and thus increase performance.
In particular, we show that our generic, domain-independent approach yields state-of-the-art results on vision, natural language processing, and time-series tasks.
arXiv Detail & Related papers (2022-05-30T13:34:46Z) - Multiple Interest and Fine Granularity Network for User Modeling [3.508126539399186]
User modeling plays a fundamental role in industrial recommender systems, in both the matching stage and the ranking stage, in terms of both customer experience and business revenue.
Most existing deep learning-based approaches exploit item-ids and category-ids but neglect fine-grained features like color and material, which hinders modeling the fine granularity of users' interests.
We present the Multiple interest and Fine granularity Network (MFN), which tackles users' multiple and fine-grained interests and constructs the model from both the similarity relationship and the combination relationship among the users' multiple interests.
arXiv Detail & Related papers (2021-12-05T15:12:08Z) - On the Efficacy of Adversarial Data Collection for Question Answering:
Results from a Large-Scale Randomized Study [65.17429512679695]
In adversarial data collection (ADC), a human workforce interacts with a model in real time, attempting to produce examples that elicit incorrect predictions.
Despite ADC's intuitive appeal, it remains unclear when training on adversarial datasets produces more robust models.
arXiv Detail & Related papers (2021-06-02T00:48:33Z)