Modeling Behavioral Patterns in News Recommendations Using Fuzzy Neural Networks
- URL: http://arxiv.org/abs/2601.04019v1
- Date: Wed, 07 Jan 2026 15:34:15 GMT
- Title: Modeling Behavioral Patterns in News Recommendations Using Fuzzy Neural Networks
- Authors: Kevin Innerebner, Stephan Bartl, Markus Reiter-Haas, Elisabeth Lex
- Abstract summary: We introduce a transparent recommender system that uses fuzzy neural networks to learn human-readable rules for predicting article clicks. We show that we can accurately predict click behavior compared to several established baselines, while learning human-readable rules.
- Score: 3.2047979871770154
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: News recommender systems are increasingly driven by black-box models, offering little transparency for editorial decision-making. In this work, we introduce a transparent recommender system that uses fuzzy neural networks to learn human-readable rules from behavioral data for predicting article clicks. By extracting the rules at configurable thresholds, we can control rule complexity and thus, the level of interpretability. We evaluate our approach on two publicly available news datasets (i.e., MIND and EB-NeRD) and show that we can accurately predict click behavior compared to several established baselines, while learning human-readable rules. Furthermore, we show that the learned rules reveal news consumption patterns, enabling editors to align content curation goals with target audience behavior.
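To make the abstract's idea concrete, here is a minimal sketch of a fuzzy rule layer with thresholded rule extraction. It assumes a gated product t-norm for fuzzy AND and invented feature names; the gating scheme and all identifiers are illustrative, not the paper's actual architecture.

```python
import numpy as np

def rule_strength(memberships, gates):
    """Fire each rule on one sample.

    memberships: (n_features,) fuzzy membership degrees in [0, 1]
    gates: (n_rules, n_features) learned weights in [0, 1]; ~1 means the
           feature participates in the rule, ~0 means it is ignored.
    Gated product t-norm: ignored features contribute a factor of 1.
    """
    # For each rule r: prod_f (1 - g[r, f] * (1 - m[f]))
    return np.prod(1.0 - gates * (1.0 - memberships), axis=1)

def extract_rules(gates, feature_names, threshold=0.5):
    """Read off a human-readable rule per row by thresholding the gates.

    A higher threshold keeps fewer antecedents per rule, i.e. simpler
    rules -- the interpretability knob the abstract describes.
    """
    rules = []
    for row in gates:
        terms = [feature_names[f] for f in np.where(row >= threshold)[0]]
        rules.append(" AND ".join(terms) if terms else "TRUE")
    return rules

memberships = np.array([0.9, 0.2, 0.8])  # e.g. "recent", "long", "sports"
gates = np.array([[0.95, 0.05, 0.90],    # rule 1: recent AND sports
                  [0.10, 0.85, 0.15]])   # rule 2: long
print(rule_strength(memberships, gates))
print(extract_rules(gates, ["recent", "long", "sports"], threshold=0.5))
```

Because the gated product is differentiable in the gates, such a layer can be trained end-to-end on click logs while still admitting a symbolic readout.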
Related papers
- Learning Interpretable Rules from Neural Networks: Neurosymbolic AI for Radar Hand Gesture Recognition [2.99664686845172]
Rule-based models offer interpretability but struggle with complex data, while deep neural networks excel in performance yet lack transparency.
This work investigates a neuro-symbolic rule-learning neural network named RL-Net that learns interpretable rule lists.
We benchmark RL-Net against a fully transparent rule-based system (MIRA) and an explainable black-box model (XentricAI).
Our results show that RL-Net achieves a favorable trade-off, maintaining strong performance (93.03% F1) while significantly reducing rule complexity.
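A rule list of the kind this abstract describes is an ordered sequence of conditions tried in turn, with the first match deciding the label. The sketch below is a hypothetical illustration of that if-elif structure; the features and rules are invented, not taken from RL-Net.

```python
# Evaluate an ordered rule list: the first rule whose condition fires
# decides the label; a default label covers uncovered samples.

def predict(rule_list, default, sample):
    for condition, label in rule_list:
        if condition(sample):
            return label
    return default

# Invented gesture rules over invented radar-derived features.
rules = [
    (lambda s: s["velocity"] > 0.7 and s["range_delta"] < 0.1, "swipe"),
    (lambda s: s["velocity"] < 0.2, "hold"),
]
print(predict(rules, "unknown", {"velocity": 0.9, "range_delta": 0.05}))  # swipe
```

The ordering matters: earlier rules shadow later ones, so rule-list learners must optimize both the conditions and their sequence.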
arXiv Detail & Related papers (2025-06-11T11:30:48Z)
- Differentiable Fuzzy Neural Networks for Recommender Systems [2.7309692684728613]
We investigate using fuzzy neural networks as a neuro-symbolic approach for recommendations.
Each rule corresponds to a fuzzy logic expression, making the recommender's decision process inherently transparent.
Our results demonstrate that our approach accurately captures user behavior while providing a transparent decision-making process.
arXiv Detail & Related papers (2025-05-09T12:31:56Z)
- RuleAgent: Discovering Rules for Recommendation Denoising with Autonomous Language Agents [36.31706728494194]
RuleAgent mimics real-world data experts to autonomously discover rules for recommendation denoising.
LossEraser, an unlearning strategy, streamlines training without compromising denoising performance.
arXiv Detail & Related papers (2025-03-30T09:19:03Z)
- Rule Based Learning with Dynamic (Graph) Neural Networks [0.8158530638728501]
We present rule based graph neural networks (RuleGNNs) that overcome some limitations of ordinary graph neural networks.
Our experiments show that the predictive performance of RuleGNNs is comparable to state-of-the-art graph classifiers.
We introduce new synthetic benchmark graph datasets to show how to integrate expert knowledge into RuleGNNs.
arXiv Detail & Related papers (2024-06-14T12:01:18Z)
- RecExplainer: Aligning Large Language Models for Explaining Recommendation Models [50.74181089742969]
Large language models (LLMs) have demonstrated remarkable intelligence in understanding, reasoning, and instruction following.
This paper presents the initial exploration of using LLMs as surrogate models to explain black-box recommender models.
To facilitate an effective alignment, we introduce three methods: behavior alignment, intention alignment, and hybrid alignment.
arXiv Detail & Related papers (2023-11-18T03:05:43Z)
- Rule By Example: Harnessing Logical Rules for Explainable Hate Speech Detection [13.772240348963303]
Rule By Example (RBE) is a novel exemplar-based contrastive learning approach for learning from logical rules for the task of textual content moderation.
RBE is capable of providing rule-grounded predictions, allowing for more explainable and customizable predictions compared to typical deep learning-based approaches.
arXiv Detail & Related papers (2023-07-24T16:55:37Z)
- Reinforcement Learning based Path Exploration for Sequential Explainable Recommendation [57.67616822888859]
We propose a novel Temporal Meta-path Guided Explainable Recommendation leveraging Reinforcement Learning (TMER-RL).
TMER-RL models item-item paths between consecutive items with attention mechanisms, sequentially capturing dynamic user-item evolution on a dynamic knowledge graph for explainable recommendation.
Extensive evaluations of TMER on two real-world datasets show state-of-the-art performance compared against recent strong baselines.
arXiv Detail & Related papers (2021-11-24T04:34:26Z)
- Generative Adversarial Reward Learning for Generalized Behavior Tendency Inference [71.11416263370823]
We propose a generative inverse reinforcement learning approach for user behavioral preference modelling.
Our model can automatically learn rewards from users' actions based on a discriminative actor-critic network and a Wasserstein GAN.
arXiv Detail & Related papers (2021-05-03T13:14:25Z)
- NSL: Hybrid Interpretable Learning From Noisy Raw Data [66.15862011405882]
This paper introduces a hybrid neural-symbolic learning framework, called NSL, that learns interpretable rules from labelled unstructured data.
NSL combines pre-trained neural networks for feature extraction with FastLAS, a state-of-the-art ILP system for rule learning under the answer set semantics.
We demonstrate that NSL is able to learn robust rules from MNIST data and achieve comparable or superior accuracy when compared to neural network and random forest baselines.
arXiv Detail & Related papers (2020-12-09T13:02:44Z)
- Rewriting a Deep Generative Model [56.91974064348137]
We introduce a new problem setting: manipulation of specific rules encoded by a deep generative model.
We propose a formulation in which the desired rule is changed by manipulating a layer of a deep network as a linear associative memory.
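The "linear associative memory" view can be sketched concretely: a layer's weight matrix maps keys to values, and a rule edit becomes a rank-one update that forces a chosen key to a new value. The plain least-squares update below is a simplification for illustration, not the paper's exact covariance-constrained formulation.

```python
import numpy as np

def rewrite(W, k_star, v_star):
    """Return W' with W' @ k_star == v_star via a rank-one update.

    residual is what the layer currently gets wrong for k_star; adding
    the outer product residual * k_star^T / ||k_star||^2 fixes exactly
    that key while perturbing the matrix as little as possible in the
    Frobenius norm.
    """
    residual = v_star - W @ k_star
    return W + np.outer(residual, k_star) / (k_star @ k_star)

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))   # layer weights as an associative memory
k = rng.standard_normal(3)        # key: a context the model responds to
v_new = np.ones(4)                # desired new value for that key
W_edit = rewrite(W, k, v_new)
print(np.allclose(W_edit @ k, v_new))  # True
```

Because the update is rank-one, responses to keys orthogonal to k_star are left unchanged, which is what makes such edits feel like changing a single rule.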
We present a user interface to enable users to interactively change the rules of a generative model to achieve desired effects.
arXiv Detail & Related papers (2020-07-30T17:58:16Z)
- Guided Variational Autoencoder for Disentanglement Learning [79.02010588207416]
We propose an algorithm, guided variational autoencoder (Guided-VAE), that is able to learn a controllable generative model by performing latent representation disentanglement learning.
We design an unsupervised strategy and a supervised strategy in Guided-VAE and observe enhanced modeling and controlling capability over the vanilla VAE.
arXiv Detail & Related papers (2020-04-02T20:49:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information (including all listed content) and is not responsible for any consequences.