Performance of Hyperbolic Geometry Models on Top-N Recommendation Tasks
- URL: http://arxiv.org/abs/2008.06716v1
- Date: Sat, 15 Aug 2020 13:21:10 GMT
- Title: Performance of Hyperbolic Geometry Models on Top-N Recommendation Tasks
- Authors: Leyla Mirvakhabova, Evgeny Frolov, Valentin Khrulkov, Ivan Oseledets,
Alexander Tuzhilin
- Abstract summary: We introduce a simple autoencoder based on hyperbolic geometry for solving standard collaborative filtering problem.
In contrast to many modern deep learning techniques, we build our solution using only a single hidden layer.
- Score: 72.62702932371148
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a simple autoencoder based on hyperbolic geometry for solving
standard collaborative filtering problem. In contrast to many modern deep
learning techniques, we build our solution using only a single hidden layer.
Remarkably, even with such a minimalistic approach, we not only outperform the
Euclidean counterpart but also achieve a competitive performance with respect
to the current state-of-the-art. We additionally explore the effects of space
curvature on the quality of hyperbolic models and propose an efficient
data-driven method for estimating its optimal value.
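As a rough sketch of the idea (assuming a Poincaré-ball model; the class and function names below are illustrative, not the authors' code), a single-hidden-layer autoencoder can place its latent codes in hyperbolic space via the exponential map at the origin, with the curvature c exposed as the tunable parameter the abstract refers to:

```python
import torch
import torch.nn as nn

def expmap0(v, c=1.0, eps=1e-9):
    # Exponential map at the origin of the Poincare ball with curvature -c:
    # lifts a Euclidean (tangent) vector onto the hyperbolic manifold.
    sqrt_c, norm = c ** 0.5, v.norm(dim=-1, keepdim=True).clamp_min(eps)
    return torch.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

def logmap0(y, c=1.0, eps=1e-9):
    # Inverse map: takes a point on the ball back to the tangent space at 0.
    sqrt_c, norm = c ** 0.5, y.norm(dim=-1, keepdim=True).clamp_min(eps)
    scaled = (sqrt_c * norm).clamp(max=1 - 1e-5)
    return torch.atanh(scaled) * y / (sqrt_c * norm)

class HyperbolicAE(nn.Module):
    """Single-hidden-layer autoencoder with a hyperbolic latent space."""
    def __init__(self, n_items, dim=64, c=1.0):
        super().__init__()
        self.c = c
        self.encoder = nn.Linear(n_items, dim)
        self.decoder = nn.Linear(dim, n_items)

    def forward(self, x):                     # x: batch of user-item vectors
        z = expmap0(self.encoder(x), self.c)  # latent code on the ball
        return self.decoder(logmap0(z, self.c))
```

Training would proceed as for an ordinary autoencoder over implicit-feedback vectors, with top-N recommendations read off the decoder output.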
Related papers
- Convex Distillation: Efficient Compression of Deep Networks via Convex Optimization [46.18363767705346]
Deploying large and complex deep networks on resource-constrained devices poses significant challenges due to their computational demands.
We introduce a novel distillation technique that efficiently compresses the model via convex optimization.
Our approach achieves performance comparable to the original model without requiring any post-processing.
arXiv Detail & Related papers (2024-10-09T06:04:52Z)
- Coupling Fairness and Pruning in a Single Run: a Bi-level Optimization Perspective [17.394732703591462]
We propose a framework to jointly optimize the pruning mask and weight update processes with fairness constraints.
The framework is engineered to compress models in a single execution while maintaining performance and ensuring fairness.
Our empirical analysis contrasts our framework with several mainstream pruning strategies, emphasizing our method's superiority in maintaining model fairness, performance, and efficiency.
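The abstract does not spell the algorithm out; one plausible reading, sketched below purely as an assumption (not the paper's exact bi-level scheme), alternates weight updates under a fairness-gap penalty with a magnitude-based mask update. The function name, the two-group loss gap, and `lam` are all hypothetical:

```python
import torch

def fair_prune_epoch(model, loader, g0_loader, g1_loader, opt, loss_fn,
                     sparsity=0.5, lam=0.1):
    """Hypothetical single-run alternation: train weights with a fairness
    penalty, then re-apply a magnitude-based pruning mask."""
    for (x, y), (x0, y0), (x1, y1) in zip(loader, g0_loader, g1_loader):
        opt.zero_grad()
        task = loss_fn(model(x), y)
        # Fairness proxy: absolute loss gap between two protected groups.
        gap = (loss_fn(model(x0), y0) - loss_fn(model(x1), y1)).abs()
        (task + lam * gap).backward()
        opt.step()
    # Mask update: zero out the smallest-magnitude weights globally.
    with torch.no_grad():
        flat = torch.cat([p.abs().flatten() for p in model.parameters()])
        thresh = flat.kthvalue(int(sparsity * flat.numel())).values
        for p in model.parameters():
            p.mul_((p.abs() > thresh).float())
```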
arXiv Detail & Related papers (2023-12-15T20:08:53Z)
- Refined Coreset Selection: Towards Minimal Coreset Size under Model Performance Constraints [69.27190330994635]
Coreset selection is powerful in reducing computational costs and accelerating data processing for deep learning algorithms.
We propose an innovative method that enforces an explicit optimization priority order between model performance and coreset size.
Empirically, extensive experiments confirm its superiority, often yielding better model performance with smaller coreset sizes.
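Read literally, that priority order means the coreset is shrunk only while a performance constraint still holds; a naive greedy sketch of the idea (hypothetical interfaces, not the paper's method):

```python
import numpy as np

def shrink_coreset(scores, eval_fn, min_quality):
    """Greedy sketch: drop the lowest-utility examples while model quality
    stays above `min_quality`. `scores` (per-example utilities) and
    `eval_fn(kept_indices) -> quality` are assumed interfaces."""
    keep = np.ones(len(scores), dtype=bool)
    for i in np.argsort(scores):            # least useful examples first
        keep[i] = False                     # tentatively drop example i
        if eval_fn(np.flatnonzero(keep)) < min_quality:
            keep[i] = True                  # performance takes priority
            break
    return np.flatnonzero(keep)
```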
arXiv Detail & Related papers (2023-11-15T03:43:04Z)
- Riemannian Low-Rank Model Compression for Federated Learning with Over-the-Air Aggregation [2.741266294612776]
Low-rank model compression is a widely used technique for reducing the computational load when training machine learning models.
Existing compression techniques are not directly applicable to efficient over-the-air (OTA) aggregation in federated learning systems.
We propose a novel manifold optimization formulation for low-rank model compression in FL that does not relax the low-rank constraint.
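A standard way to keep a hard (unrelaxed) rank constraint is Riemannian optimization on the fixed-rank manifold, retracting each step with a truncated SVD; the generic pattern looks like the sketch below (the paper's OTA-specific formulation is not reproduced here):

```python
import numpy as np

def retract_rank(W, r):
    # Retraction onto the manifold of rank-r matrices via truncated SVD.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def riemannian_step(W, grad, lr=0.1, r=4):
    # A gradient step followed by retraction: W stays exactly rank-r,
    # so the low-rank constraint is never relaxed.
    return retract_rank(W - lr * grad, r)
```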
arXiv Detail & Related papers (2023-06-04T18:32:50Z)
- SING: A Plug-and-Play DNN Learning Technique [25.563053353709627]
We propose SING (StabIlized and Normalized Gradient), a plug-and-play technique that improves the stability and robustness of Adam(W).
SING is straightforward to implement and has minimal computational overhead, requiring only a layer-wise standardization of the gradients fed to Adam(W).
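Taking that description at face value, a minimal version of the layer-wise standardization could look like this (the exact statistics SING uses may differ):

```python
import torch

def standardize_gradients(model, eps=1e-8):
    # Layer-wise gradient standardization in the spirit of SING: rescale
    # each parameter tensor's gradient to zero mean and unit variance
    # before handing it to Adam(W).
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p.grad = (p.grad - p.grad.mean()) / (p.grad.std() + eps)

# Assumed usage: loss.backward(); standardize_gradients(model); opt.step()
```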
arXiv Detail & Related papers (2023-05-25T12:39:45Z)
- Robust Model-Based Optimization for Challenging Fitness Landscapes [96.63655543085258]
Protein design involves optimization on a fitness landscape.
Leading methods are challenged by sparsity of high-fitness samples in the training set.
We show that this problem of "separation" in the design space is a significant bottleneck in existing model-based optimization tools.
We propose a new approach that uses a novel VAE as its search model to overcome the problem.
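The abstract leaves the VAE's role abstract; the usual "VAE as search model" pattern is to optimize designs in latent space through the decoder, as in this generic sketch (hypothetical `decoder` and surrogate `fitness` callables, not this paper's specific variant):

```python
import torch

def latent_search(decoder, fitness, z_dim=32, steps=200, lr=0.05):
    # Generic latent-space search: ascend a surrogate fitness function
    # through a trained VAE decoder.
    z = torch.zeros(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        (-fitness(decoder(z))).backward()   # gradient ascent on fitness
        opt.step()
    return decoder(z).detach()
```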
arXiv Detail & Related papers (2023-05-23T03:47:32Z)
- Exploiting Temporal Structures of Cyclostationary Signals for Data-Driven Single-Channel Source Separation [98.95383921866096]
We study the problem of single-channel source separation (SCSS).
We focus on cyclostationary signals, which are particularly suitable in a variety of application domains.
We propose a deep learning approach using a U-Net architecture, which is competitive with the minimum MSE estimator.
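For orientation only, a deliberately tiny 1-D U-Net with a single downsampling stage and one skip connection is sketched below; the authors' actual architecture and input representation are not specified here:

```python
import torch
import torch.nn as nn

class TinyUNet1d(nn.Module):
    """Minimal 1-D U-Net sketch: one down stage, one up stage, one skip."""
    def __init__(self, ch=16, n_sources=2):
        super().__init__()
        self.down = nn.Conv1d(1, ch, kernel_size=8, stride=4, padding=2)
        self.mid = nn.Conv1d(ch, ch, kernel_size=3, padding=1)
        self.up = nn.ConvTranspose1d(2 * ch, n_sources, kernel_size=8,
                                     stride=4, padding=2)

    def forward(self, x):                      # x: (batch, 1, time)
        d = torch.relu(self.down(x))
        m = torch.relu(self.mid(d))
        return self.up(torch.cat([d, m], 1))   # skip connection via concat
```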
arXiv Detail & Related papers (2022-08-22T14:04:56Z)
- EBJR: Energy-Based Joint Reasoning for Adaptive Inference [10.447353952054492]
State-of-the-art deep learning models have achieved significant performance levels on various benchmarks.
Light-weight architectures, on the other hand, achieve moderate accuracies, but at a much more desirable latency.
This paper presents a new method of jointly using the large accurate models together with the small fast ones.
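The generic pattern behind such joint reasoning is an energy-gated cascade: answer with the small model when it is confident, otherwise defer to the large one. The threshold rule below is an illustration only (EBJR's actual energy score is not reproduced):

```python
import torch

def cascade_predict(small, large, x, tau=1.0):
    """Adaptive inference sketch: route inputs the small model finds hard
    (high energy) to the large model. Energy is taken as -logsumexp of
    the logits, one common definition."""
    logits = small(x)
    energy = -torch.logsumexp(logits, dim=-1)
    easy = energy < tau                      # low energy -> confident
    out = logits.clone()
    if (~easy).any():
        out[~easy] = large(x[~easy])         # fall back to the big model
    return out
```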
arXiv Detail & Related papers (2021-10-20T02:33:31Z)
- GELATO: Geometrically Enriched Latent Model for Offline Reinforcement Learning [54.291331971813364]
Offline reinforcement learning approaches can be divided into proximal and uncertainty-aware methods.
In this work, we demonstrate the benefit of combining the two in a latent variational model.
Our proposed metrics measure both the quality of out of distribution samples as well as the discrepancy of examples in the data.
arXiv Detail & Related papers (2021-02-22T19:42:40Z)
- Deep Magnification-Flexible Upsampling over 3D Point Clouds [103.09504572409449]
We propose a novel end-to-end learning-based framework to generate dense point clouds.
We first formulate the problem explicitly, which boils down to determining the weights and high-order approximation errors.
Then, we design a lightweight neural network to adaptively learn unified and sorted weights as well as the high-order refinements.
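Schematically, each upsampled point is a weighted combination of its neighbors plus a learned refinement; a shape-annotated sketch of that interpolation step (hypothetical tensor layout, not the paper's network):

```python
import torch

def interpolate_points(neighbors, weights, refinement):
    """Schematic upsampling step: each new point is a convex combination
    of its k neighbors plus a learned high-order refinement offset.
    neighbors: (B, N, k, 3), weights: (B, N, k), refinement: (B, N, 3)."""
    w = torch.softmax(weights, dim=-1)               # normalize weights
    base = (w.unsqueeze(-1) * neighbors).sum(dim=2)  # weighted average
    return base + refinement                         # high-order correction
```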
arXiv Detail & Related papers (2020-11-25T14:00:18Z)