Neural Representations in Hybrid Recommender Systems: Prediction versus
Regularization
- URL: http://arxiv.org/abs/2010.06070v1
- Date: Mon, 12 Oct 2020 23:12:49 GMT
- Title: Neural Representations in Hybrid Recommender Systems: Prediction versus
Regularization
- Authors: Ramin Raziperchikolaei, Tianyu Li, Young-joo Chung
- Abstract summary: We define the neural representation for prediction (NRP) framework and apply it to the autoencoder-based recommendation systems.
We also apply the NRP framework to a direct neural network structure which predicts the ratings without reconstructing the user and item information.
The results confirm that neural representations are better for prediction than regularization and show that the NRP framework, combined with the direct neural network structure, outperforms the state-of-the-art methods in the prediction task.
- Score: 8.384351067134999
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autoencoder-based hybrid recommender systems have become popular recently
because of their ability to learn user and item representations by
reconstructing various information sources, including users' feedback on items
(e.g., ratings) and side information of users and items (e.g., users'
occupation and items' title). However, existing systems still use
representations learned by matrix factorization (MF) to predict the rating,
while using representations learned by neural networks as the regularizer. In
this paper, we define the neural representation for prediction (NRP) framework
and apply it to the autoencoder-based recommendation systems. We theoretically
analyze how our objective function is related to the previous MF and
autoencoder-based methods and explain what it means to use neural
representations as the regularizer. We also apply the NRP framework to a direct
neural network structure which predicts the ratings without reconstructing the
user and item information. We conduct extensive experiments on two MovieLens
datasets and two real-world e-commerce datasets. The results confirm that
neural representations are better for prediction than regularization and show
that the NRP framework, combined with the direct neural network structure,
outperforms the state-of-the-art methods in the prediction task, with less
training time and memory.
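The core contrast in the abstract, between using neural representations as a regularizer on matrix-factorization embeddings and using them directly for prediction, can be sketched with two toy loss functions. This is a minimal illustration, not the paper's exact architecture: the one-layer tanh encoders, the weight matrices `Wu`/`Wi`, and the penalty weight `lam` are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_users, n_items, d_side, d_emb = 4, 5, 3, 2
R = rng.random((n_users, n_items))    # observed user-item ratings
Xu = rng.random((n_users, d_side))    # user side information
Xi = rng.random((n_items, d_side))    # item side information

# Hypothetical one-layer encoders producing neural representations
# (a stand-in for the paper's autoencoder/network encoders).
Wu = rng.normal(size=(d_side, d_emb))
Wi = rng.normal(size=(d_side, d_emb))
f = lambda X, W: np.tanh(X @ W)

# (a) Prior hybrid systems: free MF embeddings U, V predict the rating,
#     while the neural representations only regularize U and V toward them.
U = rng.normal(size=(n_users, d_emb))
V = rng.normal(size=(n_items, d_emb))
lam = 0.1
loss_regularizer = (np.sum((R - U @ V.T) ** 2)
                    + lam * np.sum((U - f(Xu, Wu)) ** 2)
                    + lam * np.sum((V - f(Xi, Wi)) ** 2))

# (b) NRP framework: the neural representations themselves predict the
#     rating; no separate MF embeddings are needed.
pred_nrp = f(Xu, Wu) @ f(Xi, Wi).T
loss_nrp = np.sum((R - pred_nrp) ** 2)

print(pred_nrp.shape)  # one predicted rating per user-item pair
```

Dropping the free MF embeddings in variant (b) is also why the paper can report lower training time and memory: only the encoder weights remain as parameters.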
Related papers
- Linear-Time Graph Neural Networks for Scalable Recommendations [50.45612795600707]
The key task of recommender systems is to forecast users' future behaviors based on previous user-item interactions.
Recent years have witnessed a rising interest in leveraging Graph Neural Networks (GNNs) to boost the prediction performance of recommender systems.
We propose a Linear-Time Graph Neural Network (LTGNN) to scale up GNN-based recommender systems, achieving scalability comparable to classic matrix factorization (MF) approaches.
arXiv Detail & Related papers (2024-02-21T17:58:10Z) - Neural networks trained with SGD learn distributions of increasing
complexity [78.30235086565388]
We show that neural networks trained using gradient descent initially classify their inputs using lower-order input statistics, exploiting higher-order statistics only later during training.
We discuss the relation of this distributional simplicity bias (DSB) to other simplicity biases and consider its implications for the principle of universality in learning.
arXiv Detail & Related papers (2022-11-21T15:27:22Z) - NAR-Former: Neural Architecture Representation Learning towards Holistic
Attributes Prediction [37.357949900603295]
We propose a neural architecture representation model that can be used to estimate attributes holistically.
Experiment results show that our proposed framework can be used to predict the latency and accuracy attributes of both cell architectures and whole deep neural networks.
arXiv Detail & Related papers (2022-11-15T10:15:21Z) - Improving Prediction Confidence in Learning-Enabled Autonomous Systems [2.66512000865131]
We utilize a feedback loop between learning-enabled components used for classification and the sensors of an autonomous system in order to improve the confidence of the predictions.
We design a classifier using Inductive Conformal Prediction (ICP) based on a triplet network architecture in order to learn representations that can be used to quantify the similarity between test and training examples.
A feedback loop that queries the sensors for a new input is used to further refine the predictions and increase the classification accuracy.
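The inductive conformal prediction step described above can be sketched in a few lines: a nonconformity score measures how far a test embedding lies from the training embeddings, and calibration scores turn that distance into a p-value that drives the query-the-sensors decision. The 2-D Gaussian embeddings and nearest-neighbor nonconformity measure here are simplifying assumptions standing in for the triplet network's learned representations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-D embeddings (stand-ins for a triplet network's output).
train_emb = rng.normal(size=(20, 2)) + np.array([2.0, 0.0])
calib_emb = rng.normal(size=(10, 2)) + np.array([2.0, 0.0])  # calibration set

def nonconformity(z, reference):
    """Distance to the nearest reference embedding (a simple NC measure)."""
    return np.min(np.linalg.norm(reference - z, axis=1))

calib_scores = np.array([nonconformity(z, train_emb) for z in calib_emb])

def p_value(z):
    """ICP p-value: fraction of calibration scores at least as large."""
    a = nonconformity(z, train_emb)
    return (np.sum(calib_scores >= a) + 1) / (len(calib_scores) + 1)

# An input near the training cluster gets a high p-value (confident);
# a far-away input gets a low one, signalling the feedback loop to
# query the sensors for a new input.
near, far = np.array([2.0, 0.0]), np.array([10.0, 10.0])
print(p_value(near) > p_value(far))  # True
```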
arXiv Detail & Related papers (2021-10-07T00:40:34Z) - FF-NSL: Feed-Forward Neural-Symbolic Learner [70.978007919101]
This paper introduces a neural-symbolic learning framework, called Feed-Forward Neural-Symbolic Learner (FF-NSL)
FF-NSL integrates state-of-the-art ILP systems based on Answer Set semantics with neural networks in order to learn interpretable hypotheses from labelled unstructured data.
arXiv Detail & Related papers (2021-06-24T15:38:34Z) - Initialization Matters: Regularizing Manifold-informed Initialization
for Neural Recommendation Systems [47.49065927541129]
We propose a new scheme for user embeddings called Laplacian Eigenmaps with Popularity-based Regularization for Isolated Data (LEPORID)
LEPORID endows the embeddings with information regarding multi-scale neighborhood structures on the data manifold and performs adaptive regularization to compensate for high embedding variance on the tail of the data distribution.
We show that existing neural systems with LEPORID often perform on par with or better than KNN.
arXiv Detail & Related papers (2021-06-09T11:26:18Z) - PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive
Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z) - NSL: Hybrid Interpretable Learning From Noisy Raw Data [66.15862011405882]
This paper introduces a hybrid neural-symbolic learning framework, called NSL, that learns interpretable rules from labelled unstructured data.
NSL combines pre-trained neural networks for feature extraction with FastLAS, a state-of-the-art ILP system for rule learning under the answer set semantics.
We demonstrate that NSL is able to learn robust rules from MNIST data and achieve comparable or superior accuracy when compared to neural network and random forest baselines.
arXiv Detail & Related papers (2020-12-09T13:02:44Z) - Representation Extraction and Deep Neural Recommendation for
Collaborative Filtering [9.367612782346207]
This paper investigates the usage of novel representation learning algorithms to extract users and items representations from rating matrix.
We propose a modular algorithm consisting of two main phases: REpresentation eXtraction and a deep neural NETwork (RexNet).
RexNet does not depend on unstructured auxiliary data such as visual and textual information; instead, it uses only the user-item rating matrix as its input.
arXiv Detail & Related papers (2020-12-09T11:15:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.