Deep Stable Multi-Interest Learning for Out-of-distribution Sequential
Recommendation
- URL: http://arxiv.org/abs/2304.05615v1
- Date: Wed, 12 Apr 2023 05:13:54 GMT
- Title: Deep Stable Multi-Interest Learning for Out-of-distribution Sequential
Recommendation
- Authors: Qiang Liu, Zhaocheng Liu, Zhenxi Zhu, Shu Wu, Liang Wang
- Abstract summary: We propose a novel multi-interest network, named DEep Stable Multi-Interest Learning (DESMIL), which attempts to de-correlate the extracted interests in the model.
DESMIL incorporates a weighted correlation estimation loss based on Hilbert-Schmidt Independence Criterion (HSIC), with which training samples are weighted, to minimize the correlations among extracted interests.
- Score: 21.35873758251157
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, multi-interest models, which extract a user's interests as
multiple representation vectors, have shown promising performance for
sequential recommendation. However, none of the existing multi-interest
recommendation models considers the Out-Of-Distribution (OOD) generalization
problem, in which the interest distribution may change. Because the multiple
interests of a user are usually highly correlated, the model has a chance to
learn spurious correlations between noisy interests and target items. Once the
data distribution changes, the correlations among interests may also change,
and the spurious correlations will mislead the model into making wrong predictions.
To tackle the above OOD generalization problem, we propose a novel
multi-interest network, named DEep Stable Multi-Interest Learning (DESMIL),
which attempts to de-correlate the extracted interests in the model so that
spurious correlations can be eliminated. DESMIL applies an attentive module to
extract multiple interests and then selects the most important one for making
final predictions. Meanwhile, DESMIL incorporates a weighted correlation
estimation loss based on the Hilbert-Schmidt Independence Criterion (HSIC), with
which training samples are weighted, to minimize the correlations among the
extracted interests. Extensive experiments have been conducted under both OOD
and random settings, and up to 36.8% and 21.7% relative improvements are
achieved, respectively.
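To make the HSIC-based de-correlation idea concrete, the sketch below computes a pairwise HSIC penalty over extracted interest vectors in PyTorch. This is a minimal illustration, not the authors' implementation: the RBF kernel and its bandwidth, the function names, and the omission of the paper's sample re-weighting scheme are all assumptions.

```python
# Minimal sketch of an HSIC-style interest de-correlation penalty (assumed details,
# not DESMIL's exact weighted formulation).
import torch


def rbf_kernel(x: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Pairwise RBF kernel matrix for a batch of vectors x: (n, d) -> (n, n)."""
    sq_dists = torch.cdist(x, x, p=2) ** 2
    return torch.exp(-sq_dists / (2 * sigma ** 2))


def hsic(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Biased empirical HSIC estimator between two batches of vectors (n, d)."""
    n = x.size(0)
    k = rbf_kernel(x, sigma)
    l = rbf_kernel(y, sigma)
    # Centering matrix H = I - (1/n) * 11^T
    h = torch.eye(n, device=x.device) - torch.full((n, n), 1.0 / n, device=x.device)
    return torch.trace(k @ h @ l @ h) / (n - 1) ** 2


def interest_decorrelation_loss(interests: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Sum of pairwise HSIC values over K extracted interest vectors.

    interests: (batch, K, d) tensor of multi-interest representations.
    """
    _, num_interests, _ = interests.shape
    loss = interests.new_zeros(())
    for i in range(num_interests):
        for j in range(i + 1, num_interests):
            loss = loss + hsic(interests[:, i, :], interests[:, j, :], sigma)
    return loss


# Usage: penalize dependence among 4 interest heads for a batch of 32 users.
interests = torch.randn(32, 4, 64, requires_grad=True)
loss = interest_decorrelation_loss(interests)
loss.backward()
```

In practice such a penalty would be added to the recommendation objective with a trade-off weight; how DESMIL derives per-sample weights from this estimator is described in the paper itself and is not reproduced here.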
Related papers
- Bayesian Joint Additive Factor Models for Multiview Learning [7.254731344123118]
A motivating application arises in the context of precision medicine where multi-omics data are collected to correlate with clinical outcomes.
We propose a joint additive factor regression model (JAFAR) with a structured additive design, accounting for shared and view-specific components.
Prediction of time-to-labor onset from immunome, metabolome, and proteome data illustrates performance gains against state-of-the-art competitors.
arXiv Detail & Related papers (2024-06-02T15:35:45Z)
- Mitigating Shortcut Learning with Diffusion Counterfactuals and Diverse Ensembles [95.49699178874683]
We propose DiffDiv, an ensemble diversification framework exploiting Diffusion Probabilistic Models (DPMs)
We show that DPMs can generate images with novel feature combinations, even when trained on samples displaying correlated input features.
We show that DPM-guided diversification is sufficient to remove dependence on shortcut cues, without a need for additional supervised signals.
arXiv Detail & Related papers (2023-11-23T15:47:33Z)
- Causal Feature Selection via Transfer Entropy [59.999594949050596]
Causal discovery aims to identify causal relationships between features with observational data.
We introduce a new causal feature selection approach that relies on the forward and backward feature selection procedures.
We provide theoretical guarantees on the regression and classification errors for both the exact and the finite-sample cases.
arXiv Detail & Related papers (2023-10-17T08:04:45Z)
- Mitigating Spurious Correlations in Multi-modal Models during Fine-tuning [18.45898471459533]
Spurious correlations that degrade model generalization or lead the model to be right for the wrong reasons are among the main robustness concerns for real-world deployments.
This paper proposes a novel approach to address spurious correlations during fine-tuning for a given domain of interest.
arXiv Detail & Related papers (2023-04-08T05:20:33Z)
- Rethinking Missing Data: Aleatoric Uncertainty-Aware Recommendation [59.500347564280204]
We propose a new Aleatoric Uncertainty-aware Recommendation (AUR) framework.
AUR consists of a new uncertainty estimator along with a normal recommender model.
As the chance of mislabeling reflects the potential of a pair, AUR makes recommendations according to the uncertainty.
arXiv Detail & Related papers (2022-09-22T04:32:51Z)
- Coarse-to-Fine Knowledge-Enhanced Multi-Interest Learning Framework for Multi-Behavior Recommendation [52.89816309759537]
Multiple types of behaviors (e.g., clicking, adding to cart, purchasing) widely exist in most real-world recommendation scenarios.
The state-of-the-art multi-behavior models learn behavior dependencies indistinguishably with all historical interactions as input.
We propose a novel Coarse-to-fine Knowledge-enhanced Multi-interest Learning framework to learn shared and behavior-specific interests for different behaviors.
arXiv Detail & Related papers (2022-08-03T05:28:14Z)
- Improving Multi-Interest Network with Stable Learning [13.514488368734776]
We propose a novel multi-interest network, named DEep Stable Multi-Interest Learning (DESMIL)
DESMIL tries to eliminate the influence of subtle dependencies among captured interests via learning weights for training samples.
We conduct extensive experiments on public recommendation datasets, a large-scale industrial dataset and the synthetic datasets.
arXiv Detail & Related papers (2022-07-14T07:49:28Z)
- Cross Pairwise Ranking for Unbiased Item Recommendation [57.71258289870123]
We develop a new learning paradigm named Cross Pairwise Ranking (CPR)
CPR achieves unbiased recommendation without knowing the exposure mechanism.
We prove in theory that this way offsets the influence of user/item propensity on the learning.
arXiv Detail & Related papers (2022-04-26T09:20:27Z)
- Multiple Interest and Fine Granularity Network for User Modeling [3.508126539399186]
User modeling plays a fundamental role in industrial recommender systems, in both the matching stage and the ranking stage, in terms of both the customer experience and business revenue.
Most existing deep-learning based approaches exploit item-ids and category-ids but neglect fine-grained features like color and material, which hinders modeling the fine granularity of users' interests.
We present Multiple interest and Fine granularity Network (MFN), which tackles users' multiple and fine-grained interests and constructs the model from both the similarity relationship and the combination relationship among the users' multiple interests.
arXiv Detail & Related papers (2021-12-05T15:12:08Z)
- MRIF: Multi-resolution Interest Fusion for Recommendation [0.0]
This paper presents a multi-resolution interest fusion model (MRIF) that takes both properties of users' interests into consideration.
The proposed model is capable of capturing the dynamic changes in users' interests at different temporal ranges, and provides an effective way to combine a group of multi-resolution user interests to make predictions.
arXiv Detail & Related papers (2020-07-08T02:32:15Z)
- Decorrelated Clustering with Data Selection Bias [55.91842043124102]
We propose a novel Decorrelation regularized K-Means algorithm (DCKM) for clustering with data selection bias.
Our DCKM algorithm achieves significant performance gains, indicating the necessity of removing unexpected feature correlations induced by selection bias.
arXiv Detail & Related papers (2020-06-29T08:55:50Z)