LT-OCF: Learnable-Time ODE-based Collaborative Filtering
- URL: http://arxiv.org/abs/2108.06208v3
- Date: Wed, 18 Aug 2021 05:15:58 GMT
- Title: LT-OCF: Learnable-Time ODE-based Collaborative Filtering
- Authors: Jeongwhan Choi, Jinsung Jeon, Noseong Park
- Abstract summary: Collaborative filtering (CF) is a long-standing problem of recommender systems.
We present Learnable-Time ODE-based Collaborative Filtering (LT-OCF).
Our method consistently shows better accuracy than existing methods.
- Score: 4.5100819863628825
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Collaborative filtering (CF) is a long-standing problem of recommender
systems. Many novel methods have been proposed, ranging from classical matrix
factorization to recent graph convolutional network-based approaches. After
recent fierce debates, researchers started to focus on linear graph
convolutional networks (GCNs) with a layer combination, which show
state-of-the-art accuracy in many datasets. In this work, we extend them based
on neural ordinary differential equations (NODEs), because the linear GCN
concept can be interpreted as a differential equation, and present the method
of Learnable-Time ODE-based Collaborative Filtering (LT-OCF). The main novelty
in our method is that after redesigning linear GCNs on top of the NODE regime,
i) we learn the optimal architecture rather than relying on manually designed
ones, ii) we learn smooth ODE solutions that are considered suitable for CF,
and iii) we test with various ODE solvers that internally build a diverse set
of neural network connections. We also present a novel training method
specialized to our method. In our experiments with three benchmark datasets,
Gowalla, Yelp2018, and Amazon-Book, our method consistently shows better
accuracy than existing methods, e.g., a recall of 0.0411 by LightGCN vs. 0.0442
by LT-OCF and an NDCG of 0.0315 by LightGCN vs. 0.0341 by LT-OCF in
Amazon-Book. One further finding worth noting is that our best accuracy was
achieved by dense connections rather than linear ones.
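The paper's core idea, reinterpreting linear GCN layer propagation as a continuous-time ODE over the user/item embeddings, can be illustrated with a minimal sketch. This is not the authors' code: the normalized adjacency `A_norm`, the terminal time, and the fixed-step Euler solver are illustrative assumptions.

```python
# Illustrative sketch: linear GCN propagation E_{k+1} = A_norm @ E_k viewed as
# the ODE dE/dt = (A_norm - I) E, integrated with a fixed-step Euler solver.
import numpy as np

def euler_ode_gcn(E0, A_norm, t_end=2.0, n_steps=20):
    """Integrate dE/dt = (A_norm - I) E from t=0 to t=t_end with Euler steps.

    E0     : initial user/item embedding matrix (nodes x dims)
    A_norm : symmetrically normalized adjacency, D^{-1/2} A D^{-1/2}
    """
    E, dt = E0.copy(), t_end / n_steps
    for _ in range(n_steps):
        # Each Euler step is one linear GCN layer with a residual connection.
        E = E + dt * (A_norm @ E - E)
    return E
```

Each Euler step corresponds to one linear GCN layer with a residual connection; per the abstract, LT-OCF goes further by learning the intermediate time points at which embeddings are extracted and combined, rather than fixing the layer architecture by hand.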
Related papers
- Graph Neural Controlled Differential Equations For Collaborative Filtering [37.98767924798175]
We introduce a new method called Graph Neural Controlled Differential Equations for Collaborative Filtering (CDE-CF).
Our method improves the performance of the Graph ODE-based method by incorporating weight control in a continuous manner.
arXiv Detail & Related papers (2025-01-23T18:37:12Z)
- GP-FL: Model-Based Hessian Estimation for Second-Order Over-the-Air Federated Learning [52.295563400314094]
Second-order methods are widely adopted to improve the convergence rate of learning algorithms.
This paper introduces a novel second-order FL framework tailored for wireless channels.
arXiv Detail & Related papers (2024-12-05T04:27:41Z)
- Ensemble Quadratic Assignment Network for Graph Matching [52.20001802006391]
Graph matching is a commonly used technique in computer vision and pattern recognition.
Recent data-driven approaches have improved the graph matching accuracy remarkably.
We propose a graph neural network (GNN) based approach to combine the advantages of data-driven and traditional methods.
arXiv Detail & Related papers (2024-03-11T06:34:05Z)
- Graph Neural Ordinary Differential Equations-based method for Collaborative Filtering [40.39806741673175]
We propose a Graph Neural Ordinary Differential Equation-based method for Collaborative Filtering (GODE-CF).
This method estimates the final embedding by utilizing the information captured by one or two GCN layers.
We show that our proposed GODE-CF model has several advantages over traditional GCN-based models.
arXiv Detail & Related papers (2023-11-21T03:42:15Z)
- Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on the momentum-based variance reduction technique in cross-silo FL.
arXiv Detail & Related papers (2022-12-02T05:07:50Z)
- Learning to Optimize Permutation Flow Shop Scheduling via Graph-based Imitation Learning [70.65666982566655]
Permutation flow shop scheduling (PFSS) is widely used in manufacturing systems.
We propose to train the model via expert-driven imitation learning, which accelerates convergence more stably and accurately.
Our model's network parameters are reduced to only 37% of theirs, and the solution gap of our model towards the expert solutions decreases from 6.8% to 1.3% on average.
arXiv Detail & Related papers (2022-10-31T09:46:26Z)
- Accelerated Linearized Laplace Approximation for Bayesian Deep Learning [34.81292720605279]
We develop a Nystrom approximation to neural tangent kernels (NTKs) to accelerate LLA.
Our method benefits from the capability of popular deep learning libraries for forward mode automatic differentiation.
Our method can even scale up to architectures like vision transformers.
arXiv Detail & Related papers (2022-10-23T07:49:03Z)
- TCT: Convexifying Federated Learning using Bootstrapped Neural Tangent Kernels [141.29156234353133]
State-of-the-art federated learning methods can perform far worse than their centralized counterparts when clients have dissimilar data distributions.
We show this disparity can largely be attributed to challenges presented by nonconvexity.
We propose a Train-Convexify neural network (TCT) procedure to sidestep this issue.
arXiv Detail & Related papers (2022-07-13T16:58:22Z)
- How Powerful is Graph Convolution for Recommendation? [21.850817998277158]
Graph convolutional networks (GCNs) have recently enabled a popular class of algorithms for collaborative filtering (CF).
In this paper, we endeavor to obtain a better understanding of GCN-based CF methods via the lens of graph signal processing.
arXiv Detail & Related papers (2021-08-17T11:38:18Z)
- Physics-Based Deep Learning for Fiber-Optic Communication Systems [10.630021520220653]
We propose a new machine-learning approach for fiber-optic communication systems governed by the nonlinear Schrödinger equation (NLSE).
Our main observation is that the popular split-step method (SSM) for numerically solving the NLSE has essentially the same functional form as a deep multi-layer neural network.
We exploit this connection by parameterizing the SSM and viewing the linear steps as general linear functions, similar to the weight matrices in a neural network.
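The observation above, that each split step is a linear (dispersion) filter followed by a pointwise nonlinearity, mirroring a neural-network layer, can be made concrete with a small sketch. This is illustrative only, not that paper's model; `beta2`, `gamma`, `dz`, and the frequency-domain filter `H` are assumed parameters (in the paper, the linear steps are the quantities made learnable).

```python
# Hypothetical sketch of the split-step method (SSM) for the NLSE viewed as a
# stack of identical layers: linear frequency-domain filter, then a
# pointwise nonlinear phase rotation. All parameter values are illustrative.
import numpy as np

def ssm_step(u, H, gamma, dz):
    """One SSM 'layer': linear filter H (the would-be weight matrix), then
    the Kerr nonlinearity acting as a pointwise activation."""
    u = np.fft.ifft(H * np.fft.fft(u))                  # linear step
    return u * np.exp(1j * gamma * np.abs(u) ** 2 * dz)  # nonlinear step

def propagate(u0, n_steps, beta2, gamma, dz):
    """Propagate the field u0 over n_steps segments of length dz."""
    n = u0.size
    w = 2 * np.pi * np.fft.fftfreq(n)            # angular frequencies
    H = np.exp(0.5j * beta2 * w ** 2 * dz)       # dispersion filter, |H| = 1
    u = u0
    for _ in range(n_steps):
        u = ssm_step(u, H, gamma, dz)
    return u
```

Because both steps are unitary (the filter and the nonlinear rotation are pure phase factors), this sketch conserves signal power, which is a quick sanity check on the implementation.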
arXiv Detail & Related papers (2020-10-27T12:55:23Z)
- Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
arXiv Detail & Related papers (2019-10-12T22:07:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.