Simultaneous Learning of the Inputs and Parameters in Neural
Collaborative Filtering
- URL: http://arxiv.org/abs/2203.07463v1
- Date: Mon, 14 Mar 2022 19:47:38 GMT
- Title: Simultaneous Learning of the Inputs and Parameters in Neural
Collaborative Filtering
- Authors: Ramin Raziperchikolaei and Young-joo Chung
- Abstract summary: We show that the non-zero elements of the inputs are learnable parameters that determine the weights in combining the user/item embeddings.
We propose to learn the value of the non-zero elements of the inputs jointly with the neural network parameters.
- Score: 5.076419064097734
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural network-based collaborative filtering systems focus on designing
network architectures to learn better representations while fixing the input to
the user/item interaction vectors and/or ID. In this paper, we first show that
the non-zero elements of the inputs are learnable parameters that determine the
weights in combining the user/item embeddings, and fixing them limits the power
of the models in learning the representations. Then, we propose to learn the
value of the non-zero elements of the inputs jointly with the neural network
parameters. We analyze the model complexity and the empirical risk of our
approach and prove that learning the input leads to a better generalization
bound. Our experiments on several real-world datasets show that our method
outperforms state-of-the-art methods, even when using shallow network
structures with fewer layers and parameters.
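The core mechanism can be made concrete with a short sketch. Below is a minimal PyTorch illustration, assuming a dot-product scorer and one learnable scalar per observed user-item interaction; the class and variable names are ours, and this is not the authors' reference implementation.

```python
import torch
import torch.nn as nn

class LearnableInputNCF(nn.Module):
    """Sketch: the user representation is W^T x, where x is the user's
    interaction vector. Instead of fixing its non-zero entries to 1, we
    register them as trainable parameters (initialized to 1, so training
    starts from the usual fixed-input baseline)."""

    def __init__(self, num_items, dim, user_histories):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, dim)   # rows of W
        self.histories = user_histories  # histories[u]: LongTensor of item ids
        # one learnable value per non-zero input entry
        self.input_vals = nn.ParameterList(
            [nn.Parameter(torch.ones(len(h))) for h in user_histories]
        )

    def user_repr(self, u):
        items = self.histories[u]        # (k,) items the user interacted with
        w = self.input_vals[u]           # (k,) learned non-zero input values
        # weighted combination of item embeddings instead of a plain sum
        return w @ self.item_emb(items)  # (dim,)

    def score(self, u, i):
        return self.user_repr(u) @ self.item_emb.weight[i]

# toy usage: gradients flow to the input values as well as the embeddings
histories = [torch.tensor([0, 2]), torch.tensor([1])]
model = LearnableInputNCF(num_items=3, dim=8, user_histories=histories)
loss = (model.score(0, 1) - 1.0) ** 2
loss.backward()
assert model.input_vals[0].grad is not None
```

With fixed binary inputs, the user representation is an unweighted sum of the embeddings of interacted items; making the non-zero values trainable lets the model reweight each past interaction, which is the extra expressive power the paper analyzes.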
Related papers
- Steinmetz Neural Networks for Complex-Valued Data [23.80312814400945]
We introduce a new approach to processing complex-valued data using DNNs consisting of parallel real-valued networks with coupled outputs.
Our proposed class of architectures, referred to as Steinmetz Neural Networks, leverages multi-view learning to construct more interpretable representations within the latent space.
Our numerical experiments demonstrate the improved performance and robustness to additive noise afforded by these networks on benchmark datasets and synthetic examples.
arXiv Detail & Related papers (2024-09-16T08:26:06Z) - Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z) - Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z) - Complexity of Representations in Deep Learning [2.0219767626075438]
We analyze the effectiveness of the learned representations in separating the classes from a data complexity perspective.
We show how the data complexity evolves through the network, how it changes during training, and how it is impacted by the network design and the availability of training samples.
arXiv Detail & Related papers (2022-09-01T15:20:21Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Conditionally Parameterized, Discretization-Aware Neural Networks for
Mesh-Based Modeling of Physical Systems [0.0]
We generalize the idea of conditional parameterization, which uses trainable functions of input parameters.
We show that conditionally parameterized networks provide superior performance compared to their traditional counterparts.
A network architecture named CP-GNet is also proposed as the first deep learning model capable of standalone prediction of reacting flows on meshes.
arXiv Detail & Related papers (2021-09-15T20:21:13Z) - Anomaly Detection on Attributed Networks via Contrastive Self-Supervised
Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embeddings from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z) - Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
arXiv Detail & Related papers (2020-12-15T16:29:49Z) - Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks (a minimal sketch follows after this list).
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
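The edge-weight idea in the connectivity paper above can also be sketched briefly. The following minimal PyTorch example treats the network as a complete DAG whose edges carry sigmoid-gated learnable weights; the gating rule and module names are our assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class LearnableConnectivity(nn.Module):
    """Sketch: each node aggregates all earlier outputs through learnable
    edge weights, so the connectivity pattern is trained end to end."""

    def __init__(self, num_nodes, dim):
        super().__init__()
        self.nodes = nn.ModuleList(
            [nn.Linear(dim, dim) for _ in range(num_nodes)]
        )
        # one learnable weight per edge (i -> j) of the complete graph,
        # reflecting the magnitude of that connection
        self.edge = nn.Parameter(torch.zeros(num_nodes, num_nodes))

    def forward(self, x):
        outs = [x]  # outs[0] is the input; outs[i + 1] is the output of node i
        for j, node in enumerate(self.nodes):
            # differentiable gate in (0, 1) for every incoming connection
            gates = torch.sigmoid(self.edge[j, : j + 1])
            agg = sum(g * o for g, o in zip(gates, outs))
            outs.append(torch.relu(node(agg)))
        return outs[-1]

net = LearnableConnectivity(num_nodes=3, dim=16)
y = net(torch.randn(4, 16))   # edge weights receive gradients like any weight
```

Because the gates are smooth functions of trainable parameters, the search over connection patterns reduces to ordinary gradient descent, which is what makes the process compatible with existing networks.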
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.