Differentiable Fuzzy Neural Networks for Recommender Systems
- URL: http://arxiv.org/abs/2505.06000v1
- Date: Fri, 09 May 2025 12:31:56 GMT
- Title: Differentiable Fuzzy Neural Networks for Recommender Systems
- Authors: Stephan Bartl, Kevin Innerebner, Elisabeth Lex
- Abstract summary: We investigate using fuzzy neural networks as a neuro-symbolic approach for recommendations. Each rule corresponds to a fuzzy logic expression, making the recommender's decision process inherently transparent. Our results demonstrate that our approach accurately captures user behavior while providing a transparent decision-making process.
- Score: 2.7309692684728613
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As recommender systems become increasingly complex, transparency is essential to increase user trust, accountability, and regulatory compliance. Neuro-symbolic approaches that integrate symbolic reasoning with sub-symbolic learning offer a promising path toward transparent and user-centric systems. In this work-in-progress, we investigate using fuzzy neural networks (FNNs) as a neuro-symbolic approach for recommendations that learns logic-based rules over predefined, human-readable atoms. Each rule corresponds to a fuzzy logic expression, making the recommender's decision process inherently transparent. In contrast to black-box machine learning methods, our approach reveals the reasoning behind a recommendation while maintaining competitive performance. We evaluate our method on a synthetic dataset and the MovieLens 1M dataset and compare it to state-of-the-art recommendation algorithms. Our results demonstrate that our approach accurately captures user behavior while providing a transparent decision-making process. Finally, the differentiable nature of this approach facilitates an integration with other neural models, enabling the development of hybrid, transparent recommender systems.
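To illustrate the kind of fuzzy rule evaluation the abstract describes, here is a minimal sketch of a differentiable fuzzy rule layer. The atom names, the weighted-product t-norm for AND, and the probabilistic-sum t-conorm for OR are assumptions for illustration; the paper may use different operators and atoms.

```python
import numpy as np

def fuzzy_and(memberships, weights):
    """Soft weighted conjunction via the product t-norm.
    A weight near 1 makes an atom relevant; a weight near 0 effectively
    drops it from the rule (since x ** 0 == 1 is the AND identity)."""
    return np.prod(memberships ** weights)

def fuzzy_or(values):
    """Probabilistic-sum t-conorm: OR(a, b) = 1 - (1 - a)(1 - b)."""
    return 1.0 - np.prod(1.0 - np.asarray(values))

# Hypothetical human-readable atoms for one user/item pair.
m = np.array([0.9, 0.8, 0.3])  # likes_genre, recent_activity, item_popular

# Two rules; in an FNN the weights would be learned by gradient descent.
rule1 = fuzzy_and(m, np.array([1.0, 1.0, 0.0]))  # likes_genre AND recent_activity
rule2 = fuzzy_and(m, np.array([0.0, 0.0, 1.0]))  # item_popular
score = fuzzy_or([rule1, rule2])                 # recommend if any rule fires
```

Because both operators are smooth in the rule weights, gradients flow through the whole expression, which is what allows such rule layers to be trained jointly with other neural components.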
Related papers
- Modeling Behavioral Patterns in News Recommendations Using Fuzzy Neural Networks [3.2047979871770154]
We introduce a transparent recommender system that uses fuzzy neural networks to learn human-readable rules for predicting article clicks. We show that we can accurately predict click behavior compared to several established baselines, while learning human-readable rules.
arXiv Detail & Related papers (2026-01-07T15:34:15Z)
- Feature-Based vs. GAN-Based Learning from Demonstrations: When and Why [50.191655141020505]
This survey provides a comparative analysis of feature-based and GAN-based approaches to learning from demonstrations. We argue that the dichotomy between feature-based and GAN-based methods is increasingly nuanced.
arXiv Detail & Related papers (2025-07-08T11:45:51Z)
- Interpretable Reward Modeling with Active Concept Bottlenecks [54.00085739303773]
We introduce Concept Bottleneck Reward Models (CB-RM), a reward modeling framework that enables interpretable preference learning. Unlike standard RLHF methods that rely on opaque reward functions, CB-RM decomposes reward prediction into human-interpretable concepts. We formalize an active learning strategy that dynamically acquires the most informative concept labels.
arXiv Detail & Related papers (2025-07-07T06:26:04Z)
- Learning Interpretable Rules from Neural Networks: Neurosymbolic AI for Radar Hand Gesture Recognition [2.99664686845172]
Rule-based models offer interpretability but struggle with complex data, while deep neural networks excel in performance yet lack transparency. This work investigates a neuro-symbolic rule learning neural network named RL-Net that learns interpretable rule lists. We benchmark RL-Net against a fully transparent rule-based system (MIRA) and an explainable black-box model (XentricAI). Our results show that RL-Net achieves a favorable trade-off, maintaining strong performance (93.03% F1) while significantly reducing rule complexity.
arXiv Detail & Related papers (2025-06-11T11:30:48Z)
- Certified Neural Approximations of Nonlinear Dynamics [52.79163248326912]
In safety-critical contexts, the use of neural approximations requires formal bounds on their closeness to the underlying system. We propose a novel, adaptive, and parallelizable verification method based on certified first-order models.
arXiv Detail & Related papers (2025-05-21T13:22:20Z)
- Hybrid Personalization Using Declarative and Procedural Memory Modules of the Cognitive Architecture ACT-R [9.73847865216389]
We propose a hybrid user modeling framework based on the cognitive architecture ACT-R. We aim to provide more transparent recommendations, enable rule-based explanations, and facilitate the modeling of cognitive biases.
arXiv Detail & Related papers (2025-05-08T09:32:04Z)
- Why am I seeing this? Towards recognizing social media recommender systems with missing recommendations [4.242821809663174]
We introduce a method for Automatic Recommender Recognition using Graph Neural Networks (GNNs). Our approach enables accurate detection of hidden recommenders and their influence on user behavior. This study provides insights into how recommenders shape behavior, aiding efforts to reduce polarization and misinformation.
arXiv Detail & Related papers (2025-04-15T09:16:17Z)
- Neural network interpretability with layer-wise relevance propagation: novel techniques for neuron selection and visualization [0.49478969093606673]
We present a novel approach that improves the parsing of selected neurons during LRP backward propagation, using the Visual Geometry Group 16 (VGG16) architecture as a case study. Our approach enhances interpretability and supports the development of more transparent artificial intelligence (AI) systems for computer vision applications.
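For readers unfamiliar with LRP, a minimal sketch of one backward relevance step through a linear layer may help; this shows the standard epsilon rule on a toy layer, not the neuron-selection technique of the paper itself.

```python
import numpy as np

def lrp_epsilon(a, W, R_next, eps=1e-6):
    """One epsilon-LRP backward step through a linear layer.
    a: input activations [m]; W: weights [m, n]; R_next: relevance [n]."""
    z = a @ W                                        # z_k = sum_j a_j * w_jk
    z_stab = z + eps * np.where(z >= 0, 1.0, -1.0)   # stabilize near-zero denominators
    s = R_next / z_stab                              # relevance per unit pre-activation
    return a * (W @ s)                               # R_j = a_j * sum_k w_jk * s_k

# Toy layer: total relevance is conserved (sum in == sum out, up to eps).
a = np.array([1.0, 2.0])
W = np.array([[1.0, 0.0],
              [1.0, 1.0]])
R = lrp_epsilon(a, W, np.array([1.0, 1.0]))
```

The conservation property (relevance neither created nor destroyed as it flows backward) is what makes the resulting heatmaps comparable across layers.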
arXiv Detail & Related papers (2024-12-07T15:49:14Z)
- Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z)
- Representation Engineering: A Top-Down Approach to AI Transparency [130.33981757928166]
We identify and characterize the emerging area of representation engineering (RepE). RepE places population-level representations, rather than neurons or circuits, at the center of analysis. We showcase how these methods can provide traction on a wide range of safety-relevant problems.
arXiv Detail & Related papers (2023-10-02T17:59:07Z)
- Generalizable Neural Fields as Partially Observed Neural Processes [16.202109517569145]
We propose a new paradigm that views the large-scale training of neural representations as a part of a partially-observed neural process framework.
We demonstrate that this approach outperforms both state-of-the-art gradient-based meta-learning approaches and hypernetwork approaches.
arXiv Detail & Related papers (2023-09-13T01:22:16Z)
- Large-Scale Sequential Learning for Recommender and Engineering Systems [91.3755431537592]
In this thesis, we focus on the design of automatic algorithms that provide personalized ranking by adapting to the current conditions.
For the former, we propose a novel algorithm called SAROS that takes both kinds of feedback into account for learning over the sequence of interactions.
The proposed idea of taking neighbouring lines into account shows statistically significant improvements over the initial approach for fault detection in power grids.
arXiv Detail & Related papers (2022-05-13T21:09:41Z)
- An Interactive Explanatory AI System for Industrial Quality Control [0.8889304968879161]
We aim to extend the defect detection task towards an interactive human-in-the-loop approach.
We propose an approach for an interactive support system for classifications in an industrial quality control setting.
arXiv Detail & Related papers (2022-03-17T09:04:46Z)
- Intent Contrastive Learning for Sequential Recommendation [86.54439927038968]
We introduce a latent variable to represent users' intents and learn the distribution function of the latent variable via clustering.
We propose to leverage the learned intents in SR models via contrastive SSL, which maximizes the agreement between a view of a sequence and its corresponding intent.
Experiments conducted on four real-world datasets demonstrate the superiority of the proposed learning paradigm.
arXiv Detail & Related papers (2022-02-05T09:24:13Z)
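The "maximize agreement between a sequence view and its intent" objective in the entry above is typically an InfoNCE-style contrastive loss. The following is a minimal sketch under that assumption; the temperature value and the use of in-batch negatives are illustrative choices, not details taken from the paper.

```python
import numpy as np

def info_nce(seq_emb, intent_emb, temperature=0.1):
    """InfoNCE-style objective: the i-th sequence view should match the
    i-th intent (positive) against all other intents in the batch (negatives)."""
    s = seq_emb / np.linalg.norm(seq_emb, axis=1, keepdims=True)
    t = intent_emb / np.linalg.norm(intent_emb, axis=1, keepdims=True)
    logits = (s @ t.T) / temperature              # scaled cosine similarities
    # row-wise log-softmax; the loss is the negative log-prob of the diagonal
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
views = rng.normal(size=(4, 8))
aligned = info_nce(views, views)         # positives match: low loss
shuffled = info_nce(views, views[::-1])  # positives mismatched: high loss
```

Minimizing this loss pulls each sequence representation toward its own intent prototype while pushing it away from the other intents in the batch.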
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.