RMBRec: Robust Multi-Behavior Recommendation towards Target Behaviors
- URL: http://arxiv.org/abs/2601.08705v3
- Date: Wed, 21 Jan 2026 17:02:16 GMT
- Title: RMBRec: Robust Multi-Behavior Recommendation towards Target Behaviors
- Authors: Miaomiao Cai, Zhijie Zhang, Junfeng Fang, Zhiyong Cheng, Xiang Wang, Meng Wang,
- Abstract summary: We propose Robust Multi-Behavior Recommendation towards Target Behaviors (RMBRec), a robust multi-behavior recommendation framework grounded in an information-theoretic robustness principle. We show that RMBRec outperforms state-of-the-art methods in accuracy and maintains remarkable stability under various noise perturbations.
- Score: 26.88506691092044
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-behavior recommendation faces a critical challenge in practice: auxiliary behaviors (e.g., clicks, carts) are often noisy, weakly correlated, or semantically misaligned with the target behavior (e.g., purchase), which leads to biased preference learning and suboptimal performance. While existing methods attempt to fuse these heterogeneous signals, they inherently lack a principled mechanism to ensure robustness against such behavioral inconsistency. In this work, we propose Robust Multi-Behavior Recommendation towards Target Behaviors (RMBRec), a robust multi-behavior recommendation framework grounded in an information-theoretic robustness principle. We interpret robustness as a joint process of maximizing predictive information while minimizing its variance across heterogeneous behavioral environments. Under this perspective, the Representation Robustness Module (RRM) enhances local semantic consistency by maximizing the mutual information between users' auxiliary and target representations, whereas the Optimization Robustness Module (ORM) enforces global stability by minimizing the variance of predictive risks across behaviors, which is an efficient approximation to invariant risk minimization. This local-global collaboration bridges representation purification and optimization invariance in a theoretically coherent way. Extensive experiments on three real-world datasets demonstrate that RMBRec not only outperforms state-of-the-art methods in accuracy but also maintains remarkable stability under various noise perturbations. For reproducibility, our code is available at https://github.com/miaomiao-cai2/RMBRec/.
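The abstract's optimization-robustness idea (minimize the variance of predictive risks across behavioral environments as an efficient surrogate for invariant risk minimization) can be illustrated with a minimal sketch. This is a toy, V-REx-style objective with illustrative function names and data, not RMBRec's actual code; the 1-parameter scorer and the click/purchase samples are assumptions for demonstration only.

```python
# Toy sketch of a variance-of-risks objective across behavior
# "environments" (e.g. clicks vs. purchases). Illustrative only.
import statistics

def risk(w, interactions):
    """Per-behavior predictive risk: mean squared error of a
    1-parameter linear scorer against observed labels."""
    return sum((w * x - y) ** 2 for x, y in interactions) / len(interactions)

def robust_objective(w, behavior_envs, lam=1.0):
    """Mean risk over environments plus a penalty on the variance
    of those risks: the ORM-style invariance surrogate."""
    risks = [risk(w, env) for env in behavior_envs]
    mean_risk = sum(risks) / len(risks)
    return mean_risk + lam * statistics.pvariance(risks)

# Two hypothetical behavior environments: clicks and purchases.
clicks = [(1.0, 1.2), (2.0, 1.8), (3.0, 3.3)]
purchases = [(1.0, 0.9), (2.0, 2.1), (3.0, 2.9)]
loss = robust_objective(0.8, [clicks, purchases], lam=1.0)
```

Larger `lam` pushes the model toward parameters whose risk is uniform across behaviors, which is the sense in which the penalty approximates invariance.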
Related papers
- From Agnostic to Specific: Latent Preference Diffusion for Multi-Behavior Sequential Recommendation [28.437926520491445]
Multi-behavior sequential recommendation (MBSR) aims to learn the dynamic and heterogeneous interactions of users' multi-behavior sequences. Recent concerns are shifting from behavior-fixed to behavior-specific recommendation. We propose FatsMB, a diffusion-based framework that guides preference generation.
arXiv Detail & Related papers (2026-02-26T15:48:09Z) - HiFIRec: Towards High-Frequency yet Low-Intention Behaviors for Multi-Behavior Recommendation [10.558247582357783]
HiFIRec is a novel multi-behavior recommendation method. It corrects the effect of high-frequency yet low-intention behaviors by differential behavior modeling. Experiments on two benchmarks show that HiFIRec relatively improves HR@10 by 4.21%-6.81% over several state-of-the-art methods.
arXiv Detail & Related papers (2025-09-30T04:20:45Z) - MGSC: A Multi-granularity Consistency Framework for Robust End-to-end ASR [0.0]
We introduce the Multi-Granularity Soft Consistency framework, a model-agnostic, plug-and-play module that enforces internal self-consistency. Crucially, our work is the first to uncover a powerful synergy between these two consistency granularities. Our work demonstrates that enforcing internal consistency is a crucial step towards building more robust and trustworthy AI.
arXiv Detail & Related papers (2025-08-20T09:51:49Z) - I$^3$-MRec: Invariant Learning with Information Bottleneck for Incomplete Modality Recommendation [56.55935146424585]
We introduce I$^3$-MRec, which learns with the Information bottleneck principle for Incomplete Modality Recommendation. By treating each modality as a distinct semantic environment, I$^3$-MRec employs invariant risk minimization (IRM) to learn preference-oriented representations. I$^3$-MRec consistently outperforms existing state-of-the-art MRS methods across various modality-missing scenarios.
arXiv Detail & Related papers (2025-08-06T09:29:50Z) - Online Robust Multi-Agent Reinforcement Learning under Model Uncertainties [10.054572105379425]
Well-trained multi-agent systems can fail when deployed in real-world environments. Distributionally robust Markov games (DRMGs) enhance system resilience by optimizing for worst-case performance over a defined set of environmental uncertainties. This paper pioneers the study of online learning in DRMGs, where agents learn directly from environmental interactions without prior data.
arXiv Detail & Related papers (2025-08-04T23:14:32Z) - Robust and Computation-Aware Gaussian Processes [20.948688720498644]
We introduce Robust Computation-aware Gaussian Process (RCaGP), a novel GP model that combines a principled treatment of approximation-induced uncertainty with robust generalized Bayesian updating. Our model ensures more conservative and reliable uncertainty estimates, a property we rigorously demonstrate. Empirical results confirm that solving these challenges jointly leads to superior performance across both clean and outlier-contaminated settings.
arXiv Detail & Related papers (2025-05-27T12:49:14Z) - A Stochastic Approach to Bi-Level Optimization for Hyperparameter Optimization and Meta Learning [74.80956524812714]
We tackle the general differentiable meta learning problem that is ubiquitous in modern deep learning.
These problems are often formalized as Bi-Level Optimizations (BLO).
We introduce a novel perspective by turning a given BLO problem into a stochastic optimization, where the inner loss function becomes a smooth distribution, and the outer loss becomes an expected loss over the inner distribution.
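The reformulation described above (the inner variable is replaced by a sampling distribution and the outer loss by an expectation under it) can be sketched on a toy problem. The quadratic inner and outer losses, the Gaussian smoothing, and all function names here are illustrative assumptions, not the paper's actual formulation.

```python
# Hedged sketch: a bi-level problem where the outer loss becomes an
# expectation over a smooth distribution around the inner minimizer.
import random

def inner_loss(theta, lam):
    """Toy inner problem: regularized quadratic in theta."""
    return (theta - 1.0) ** 2 + lam * theta ** 2

def smoothed_outer_loss(lam, sigma=0.1, n_samples=256, seed=0):
    """Monte Carlo estimate of E[outer(theta)] with
    theta ~ N(theta*(lam), sigma^2), where theta*(lam) minimizes
    the toy inner loss (here available in closed form)."""
    theta_star = 1.0 / (1.0 + lam)            # closed-form inner minimizer
    outer = lambda theta: (theta - 0.5) ** 2  # toy outer (validation) loss
    rng = random.Random(seed)
    samples = [outer(theta_star + rng.gauss(0.0, sigma))
               for _ in range(n_samples)]
    return sum(samples) / n_samples
```

Because the smoothed outer loss is an expectation, it can be estimated and differentiated with standard stochastic-optimization tools even when the exact inner solution map is non-smooth.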
arXiv Detail & Related papers (2024-10-14T12:10:06Z) - The Risk of Federated Learning to Skew Fine-Tuning Features and Underperform Out-of-Distribution Robustness [50.52507648690234]
Federated learning has the risk of skewing fine-tuning features and compromising the robustness of the model.
We introduce three robustness indicators and conduct experiments across diverse robust datasets.
Our approach markedly enhances the robustness across diverse scenarios, encompassing various parameter-efficient fine-tuning methods.
arXiv Detail & Related papers (2024-01-25T09:18:51Z) - Improve Noise Tolerance of Robust Loss via Noise-Awareness [60.34670515595074]
We propose a meta-learning method capable of adaptively learning a hyperparameter prediction function, called Noise-Aware-Robust-Loss-Adjuster (NARL-Adjuster for brevity).
We integrate four SOTA robust loss functions with our algorithm, and comprehensive experiments substantiate the general applicability and effectiveness of the proposed method in both noise tolerance and performance.
arXiv Detail & Related papers (2023-01-18T04:54:58Z) - Robustness and Accuracy Could Be Reconcilable by (Proper) Definition [109.62614226793833]
The trade-off between robustness and accuracy has been widely studied in the adversarial literature.
We find that it may stem from the improperly defined robust error, which imposes an inductive bias of local invariance.
Our proposed robust error, SCORE, facilitates by definition the reconciliation between robustness and accuracy, while still handling the worst-case uncertainty.
arXiv Detail & Related papers (2022-02-21T10:36:09Z) - Trustworthy Multimodal Regression with Mixture of Normal-inverse Gamma Distributions [91.63716984911278]
We introduce a novel Mixture of Normal-Inverse Gamma distributions (MoNIG) algorithm, which efficiently estimates uncertainty in principle for adaptive integration of different modalities and produces a trustworthy regression result.
Experimental results on both synthetic and different real-world data demonstrate the effectiveness and trustworthiness of our method on various multimodal regression tasks.
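The idea of fusing per-modality predictions by their estimated uncertainty can be illustrated with a deliberately simplified sketch: each modality emits a mean and a variance, and lower-uncertainty modalities dominate the fused estimate. This precision-weighted rule is a stand-in in the spirit of MoNIG, not the paper's exact Normal-Inverse-Gamma summation; the `fuse` function and sample values are assumptions.

```python
# Simplified uncertainty-weighted multimodal fusion (illustrative;
# MoNIG itself combines full NIG distributions, not plain Gaussians).

def fuse(predictions):
    """predictions: list of (mean, variance) pairs, one per modality.
    Returns the precision-weighted fused mean and its variance."""
    precisions = [1.0 / var for _, var in predictions]
    total = sum(precisions)
    mean = sum(p * m for (m, _), p in zip(predictions, precisions)) / total
    return mean, 1.0 / total

# A confident modality (variance 0.1) and a noisy one (variance 10.0):
fused_mean, fused_var = fuse([(2.0, 0.1), (5.0, 10.0)])
```

Here the fused mean stays close to the confident modality's prediction, which is the trustworthiness property the blurb describes: unreliable modalities contribute less to the final regression result.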
arXiv Detail & Related papers (2021-11-11T14:28:12Z) - Distributional Robustness and Regularization in Reinforcement Learning [62.23012916708608]
We introduce a new regularizer for empirical value functions and show that it lower bounds the Wasserstein distributionally robust value function.
It suggests using regularization as a practical tool for dealing with external uncertainty in reinforcement learning.
arXiv Detail & Related papers (2020-03-05T19:56:23Z)