Tile Networks: Learning Optimal Geometric Layout for Whole-page Recommendation
- URL: http://arxiv.org/abs/2303.01671v1
- Date: Fri, 3 Mar 2023 02:18:55 GMT
- Title: Tile Networks: Learning Optimal Geometric Layout for Whole-page Recommendation
- Authors: Shuai Xiao, Zaifan Jiang, Shuang Yang
- Abstract summary: We show it is possible to solve configuration optimization problems for whole-page recommendation using reinforcement learning.
The proposed Tile Networks is a neural architecture that optimizes 2D geometric configurations by arranging items in proper positions.
- Score: 14.951408879079272
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Finding optimal configurations in a geometric space is a key challenge in
many technological disciplines. Current approaches either rely heavily on human
domain expertise or are difficult to scale. In this paper we show it is
possible to solve configuration optimization problems for whole-page
recommendation using reinforcement learning. The proposed Tile
Networks is a neural architecture that optimizes 2D geometric configurations
by arranging items in proper positions. Empirical results on a real dataset
demonstrate its superior performance compared to traditional learning-to-rank
approaches and recent deep models.
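The abstract describes the mechanism only at a high level: a neural policy places candidate items into 2D page slots and is trained with reinforcement learning against a page-level reward. As a minimal, purely illustrative sketch of that idea (all names, dimensions, and the reward function below are placeholder assumptions, not the authors' architecture), a slot-by-slot layout policy trained with REINFORCE might look like this:

```python
# Minimal sketch of a Tile-Network-style layout policy (NOT the authors' code).
# Items are embedded, each page slot scores the remaining items, and the whole
# policy is trained with REINFORCE against a page-level reward. All dimensions,
# names, and the reward function are placeholder assumptions.
import torch
import torch.nn as nn

class TileLayoutPolicy(nn.Module):
    def __init__(self, item_dim: int, n_slots: int, hidden: int = 64):
        super().__init__()
        self.item_enc = nn.Linear(item_dim, hidden)
        self.slot_emb = nn.Embedding(n_slots, hidden)  # one query per 2D position
        self.n_slots = n_slots

    def forward(self, items: torch.Tensor):
        """items: (n_items, item_dim), n_items >= n_slots.
        Returns a slot -> item assignment and its total log-probability."""
        h = self.item_enc(items)                           # (n_items, hidden)
        placed = torch.zeros(len(items), dtype=torch.bool)
        log_probs, assignment = [], []
        for s in range(self.n_slots):
            query = self.slot_emb(torch.tensor(s))         # (hidden,)
            scores = (h @ query).masked_fill(placed, float("-inf"))
            dist = torch.distributions.Categorical(logits=scores)
            pick = dist.sample()                           # stochastic item choice
            log_probs.append(dist.log_prob(pick))
            assignment.append(pick.item())
            placed[pick] = True                            # each item used at most once
        return assignment, torch.stack(log_probs).sum()

def reinforce_step(policy, optimizer, items, reward_fn):
    """One policy-gradient update; reward_fn scores a whole-page layout
    (e.g., simulated page click-through) and is a placeholder here."""
    assignment, log_prob = policy(items)
    reward = reward_fn(assignment)                         # scalar page-level reward
    loss = -reward * log_prob                              # REINFORCE objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward
```

Masking already-placed items enforces the layout constraint that each item occupies at most one tile; at serving time the sampling step would typically be replaced by an argmax to produce a deterministic page.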
Related papers
- Equivariant Deep Weight Space Alignment [54.65847470115314]
We propose a novel framework aimed at learning to solve the weight alignment problem.
We first prove that weight alignment adheres to two fundamental symmetries and then propose a deep architecture that respects these symmetries.
arXiv Detail & Related papers (2023-10-20T10:12:06Z)
- Optimization Methods in Deep Learning: A Comprehensive Overview [0.0]
Deep learning has achieved remarkable success in various fields such as image recognition, natural language processing, and speech recognition.
The effectiveness of deep learning largely depends on the optimization methods used to train deep neural networks.
We provide an overview of first-order optimization methods such as Gradient Descent, Adagrad, Adadelta, and RMSprop, as well as recent momentum-based and adaptive gradient methods such as Nesterov accelerated gradient, Adam, Nadam, AdaMax, and AMSGrad (a textbook sketch of the Adam update appears after this list).
arXiv Detail & Related papers (2023-02-19T13:01:53Z)
- A Survey of Geometric Optimization for Deep Learning: From Euclidean Space to Riemannian Manifold [7.737713458418288]
Deep Learning (DL) has achieved success in complex Artificial Intelligence (AI) tasks, but it suffers from various notorious problems.
This article presents a comprehensive survey of applying geometric optimization in DL.
It investigates the application of geometric optimization in different DL networks in various AI tasks, e.g., convolutional neural networks, recurrent neural networks, transfer learning, and optimal transport.
arXiv Detail & Related papers (2023-02-16T10:50:15Z)
- Model-Based Deep Learning: On the Intersection of Deep Learning and Optimization [101.32332941117271]
Decision-making algorithms are used in a multitude of different applications.
Deep learning approaches that use highly parametric architectures tuned from data without relying on mathematical models are becoming increasingly popular.
Model-based optimization and data-centric deep learning are often considered to be distinct disciplines.
arXiv Detail & Related papers (2022-05-05T13:40:08Z)
- Neural Combinatorial Optimization: a New Player in the Field [69.23334811890919]
This paper presents a critical analysis of the incorporation of algorithms based on neural networks into the classical optimization framework.
A comprehensive study is carried out to analyse the fundamental aspects of such algorithms, including performance, transferability, computational cost, and generalization to larger-sized instances.
arXiv Detail & Related papers (2022-05-03T07:54:56Z)
- iDARTS: Differentiable Architecture Search with Stochastic Implicit Gradients [75.41173109807735]
Differentiable ARchiTecture Search (DARTS) has recently become the mainstream of neural architecture search (NAS).
We tackle the hypergradient computation in DARTS based on the implicit function theorem (the underlying identity is written out after this list).
We show that the architecture optimisation with the proposed method, named iDARTS, is expected to converge to a stationary point.
arXiv Detail & Related papers (2021-06-21T00:44:11Z)
- TSO: Curriculum Generation using continuous optimization [0.0]
We present a simple and efficient technique based on continuous optimization.
An encoder network embeds the training sequence into a continuous space.
A predictor network takes the continuous representation of a strategy as input and predicts the accuracy for a fixed network architecture.
arXiv Detail & Related papers (2021-06-16T06:32:21Z)
- A Design Space Study for LISTA and Beyond [79.76740811464597]
In recent years, great success has been witnessed in building problem-specific deep networks by unrolling iterative algorithms.
This paper revisits the role of unrolling as a design approach for deep networks, asking to what extent the resulting special architecture is superior and whether we can find better ones.
Using LISTA for sparse recovery as a representative example, we conduct the first thorough design-space study of the unrolled models (a forward-pass sketch of LISTA appears after this list).
arXiv Detail & Related papers (2021-04-08T23:01:52Z)
- Physics-consistent deep learning for structural topology optimization [8.391633158275692]
Topology optimization has emerged as a popular approach to refine a component's design and increase its performance.
Current state-of-the-art topology optimization frameworks are compute-intensive.
In this paper, we explore a deep learning-based framework for performing topology optimization for three-dimensional geometries with a reasonably fine (high) resolution.
arXiv Detail & Related papers (2020-12-09T23:05:55Z)
- Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates Deep Neural Network (DNN) with Finite Element Method (FEM) calculations.
Our algorithm was tested by four types of problems including compliance minimization, fluid-structure optimization, heat transfer enhancement and truss optimization.
It reduced the computational time by 2 to 5 orders of magnitude compared with directly using heuristic methods, and outperformed all state-of-the-art algorithms tested in our experiments.
arXiv Detail & Related papers (2020-02-04T20:00:28Z)
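As background for the optimization-methods survey listed above, the adaptive-gradient family it covers centers on updates of the Adam form (Kingma & Ba, 2015). This is the textbook rule in NumPy, included for reference only; it is not code from any of the listed papers:

```python
# Textbook Adam update (Kingma & Ba, 2015); a reference sketch, not paper code.
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update; m, v are running moment estimates, t counts steps from 1."""
    m = b1 * m + (1 - b1) * grad            # biased first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2       # biased second-moment estimate
    m_hat = m / (1 - b1 ** t)               # bias corrections (state starts at zero)
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

Adagrad, Adadelta, RMSprop, Nadam, AdaMax, and AMSGrad differ mainly in how these two moment terms are accumulated and combined.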
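The iDARTS entry above turns on a single identity. In generic bilevel notation (validation loss L_val, training loss L_train, network weights w, architecture parameters alpha; the notation here is generic, not necessarily the paper's), the implicit-function-theorem hypergradient being approximated is:

```latex
% Hypergradient of the bilevel NAS objective via the implicit function theorem
\frac{\mathrm{d}L_{\mathrm{val}}}{\mathrm{d}\alpha}
  = \nabla_{\alpha} L_{\mathrm{val}}
    - \nabla_{\alpha,w}^{2} L_{\mathrm{train}}
      \left( \nabla_{w,w}^{2} L_{\mathrm{train}} \right)^{-1}
      \nabla_{w} L_{\mathrm{val}}
```

The inverse Hessian is the intractable term; methods in this line approximate the corresponding vector product (e.g., with a truncated Neumann series) rather than forming it explicitly.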
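Finally, the LISTA design-space study listed above builds on unrolling ISTA so that each iteration becomes a network layer with learned parameters. A forward-pass sketch in NumPy follows (Gregor & LeCun's LISTA form; the weights below are random placeholders, whereas in practice W, S, and the thresholds are trained):

```python
# Forward pass of an unrolled LISTA network (Gregor & LeCun, 2010).
import numpy as np

def soft_threshold(x, theta):
    """Proximal operator of the l1 norm, the per-layer nonlinearity in LISTA."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def lista_forward(y, W, S, thetas):
    """y: (m,) measurement; W: (n, m); S: (n, n); thetas: one threshold per layer."""
    x = soft_threshold(W @ y, thetas[0])        # first layer has no recurrent input
    for theta in thetas[1:]:                    # each unrolled iteration = one layer
        x = soft_threshold(W @ y + S @ x, theta)
    return x

# Usage with random placeholder weights (untrained, for shape-checking only):
m, n, depth = 20, 50, 8
rng = np.random.default_rng(0)
W = rng.normal(size=(n, m)) / m
S = rng.normal(size=(n, n)) / n
x_hat = lista_forward(rng.normal(size=m), W, S, thetas=[0.1] * depth)
```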