Evolving Deep Neural Networks for Collaborative Filtering
- URL: http://arxiv.org/abs/2111.07758v1
- Date: Mon, 15 Nov 2021 13:57:31 GMT
- Title: Evolving Deep Neural Networks for Collaborative Filtering
- Authors: Yuhan Fang, Yuqiao Liu and Yanan Sun
- Abstract summary: Collaborative Filtering (CF) is widely used in recommender systems to model user-item interactions.
We introduce the genetic algorithm into the process of designing Deep Neural Networks (DNNs).
- Score: 3.302151868255641
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Collaborative Filtering (CF) is widely used in recommender systems to model user-item interactions. Following the great success of Deep Neural Networks (DNNs) in various fields, recent works have proposed several DNN-based models for CF and shown them to be effective. However, these neural networks are all designed manually, which requires designers to have expertise in both CF and DNNs; this limits both the application of deep learning methods in CF and the accuracy of the recommended results. In this paper, we introduce a genetic algorithm into the process of designing DNNs. By means of genetic operations such as crossover, mutation, and an environmental selection strategy, both the architectures and the initialization of the connection weights of the DNNs can be designed automatically. We conduct extensive experiments on two benchmark datasets, and the results demonstrate that the proposed algorithm outperforms several manually designed state-of-the-art neural networks.
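Since the abstract describes the evolutionary pipeline only at a high level, here is a minimal, hypothetical Python sketch of such a loop: each individual encodes a DNN architecture (hidden-layer widths) plus a weight-initialization scale, and the population evolves through crossover, mutation, and environmental selection. The genome encoding, operators, hyperparameters, and stand-in fitness below are illustrative assumptions, not the paper's exact design; in practice, fitness would be the validation performance of the decoded DNN trained on a CF dataset.

```python
# Hypothetical sketch of the evolutionary design loop described in the
# abstract. An individual encodes hidden-layer widths plus a weight-init
# scale; crossover, mutation, and environmental selection evolve the
# population. All names and hyperparameters here are illustrative.
import random

random.seed(0)
WIDTHS = [16, 32, 64, 128]

def random_individual():
    depth = random.randint(1, 4)
    return {"layers": [random.choice(WIDTHS) for _ in range(depth)],
            "init_std": random.uniform(0.01, 0.5)}

def crossover(a, b):
    # One-point crossover on the layer lists; average the init scales.
    cut_a = random.randint(0, len(a["layers"]))
    cut_b = random.randint(0, len(b["layers"]))
    layers = a["layers"][:cut_a] + b["layers"][cut_b:]
    return {"layers": layers or [32],
            "init_std": 0.5 * (a["init_std"] + b["init_std"])}

def mutate(ind, rate=0.3):
    if random.random() < rate:  # resample one layer width
        i = random.randrange(len(ind["layers"]))
        ind["layers"][i] = random.choice(WIDTHS)
    if random.random() < rate:  # rescale the weight-init std
        ind["init_std"] = min(0.5, max(0.01,
                              ind["init_std"] * random.uniform(0.5, 2.0)))
    return ind

def fitness(ind):
    # Stand-in objective. In the paper's setting this would be the
    # validation performance of the decoded DNN trained on a CF dataset.
    return -abs(sum(ind["layers"]) - 160) - 10 * abs(ind["init_std"] - 0.1)

population = [random_individual() for _ in range(20)]
for generation in range(30):
    offspring = [mutate(crossover(*random.sample(population, 2)))
                 for _ in range(20)]
    # Environmental selection: keep the best of parents + offspring.
    population = sorted(population + offspring, key=fitness, reverse=True)[:20]

print("best genome found:", population[0])
```

Elitist environmental selection over the merged parent and offspring pools keeps the best architectures found so far, mirroring the selection strategy named in the abstract.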
Related papers
- Cartesian Genetic Programming Approach for Designing Convolutional Neural Networks [0.0]
A crucial aspect of designing artificial neural networks is the search for novel neural architectures.
This work uses a pure Genetic Programming approach to design CNNs, employing only one genetic operation (a toy version of such a loop is sketched after this list).
Preliminary experiments show that the methodology yields promising results.
arXiv Detail & Related papers (2024-09-30T18:10:06Z)
- Unveiling the Unseen: Identifiable Clusters in Trained Depthwise Convolutional Kernels [56.69755544814834]
Recent advances in depthwise-separable convolutional neural networks (DS-CNNs) have led to novel architectures.
This paper reveals another striking property of DS-CNN architectures: discernible and explainable patterns emerge in their trained depthwise convolutional kernels in all layers.
arXiv Detail & Related papers (2024-01-25T19:05:53Z)
- From Alexnet to Transformers: Measuring the Non-linearity of Deep Neural Networks with Affine Optimal Transport [32.39176908225668]
We introduce the concept of the non-linearity signature of a DNN, the first theoretically sound solution for measuring the non-linearity of deep neural networks.
We provide extensive experimental results that highlight the practical usefulness of the proposed non-linearity signature.
arXiv Detail & Related papers (2023-10-17T17:50:22Z)
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are built on homogeneous neurons that use a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- Linear Leaky-Integrate-and-Fire Neuron Model Based Spiking Neural Networks and Its Mapping Relationship to Deep Neural Networks [7.840247953745616]
Spiking neural networks (SNNs) are brain-inspired machine learning algorithms with merits such as biological plausibility and unsupervised learning capability.
This paper establishes a precise mathematical mapping between the biological parameters of the Linear Leaky-Integrate-and-Fire (LIF) model/SNNs and the parameters of ReLU-AN/Deep Neural Networks (DNNs); a numerical illustration of this rate-coding correspondence appears after this list.
arXiv Detail & Related papers (2022-05-31T17:02:26Z)
- Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by a simple filtration on "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z)
- Explore the Knowledge contained in Network Weights to Obtain Sparse Neural Networks [2.649890751459017]
This paper proposes a novel learning approach to obtain sparse fully connected layers in neural networks (NNs) automatically.
We design a switcher neural network (SNN) to optimize the structure of the task neural network (TNN).
arXiv Detail & Related papers (2021-03-26T11:29:40Z)
- Optimizing Deep Neural Networks through Neuroevolution with Stochastic Gradient Descent [18.70093247050813]
Stochastic gradient descent (SGD) is the dominant method for training deep neural networks (DNNs).
Neuroevolution is more in line with an evolutionary process and provides some key capabilities that are often unavailable in SGD.
A hierarchical cluster-based suppression algorithm is also developed to counteract similar weight updates among individuals and thereby improve population diversity.
arXiv Detail & Related papers (2020-12-21T08:54:14Z)
- Exploiting Heterogeneity in Operational Neural Networks by Synaptic Plasticity [87.32169414230822]
The recently proposed Operational Neural Networks (ONNs) generalize conventional Convolutional Neural Networks (CNNs).
This study focuses on searching for the best possible operator set(s) for the hidden neurons of the network, based on the Synaptic Plasticity paradigm, the essential learning theory of biological neurons.
Experimental results on highly challenging problems demonstrate that elite ONNs, even with few neurons and layers, achieve superior learning performance compared to GIS-based ONNs.
arXiv Detail & Related papers (2020-08-21T19:03:23Z)
- Binarizing MobileNet via Evolution-based Searching [66.94247681870125]
We propose the use of evolutionary search to facilitate the construction and training scheme when binarizing MobileNet.
Inspired by one-shot architecture search frameworks, we adapt the idea of group convolution to design efficient 1-Bit Convolutional Neural Networks (CNNs).
Our objective is to come up with a tiny yet efficient binary neural architecture by exploring the best candidates of the group convolution.
arXiv Detail & Related papers (2020-05-13T13:25:51Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power, event-driven neuromorphic implementations.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity, and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
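To make the "only one genetic operation" idea in the Cartesian Genetic Programming entry above concrete, here is a toy (1+lambda)-style loop that relies solely on point mutation, the scheme commonly paired with CGP. The bit-string genome and target fitness are placeholders, not that paper's CNN encoding.

```python
# Toy (1+lambda)-style neuroevolution loop using a single genetic
# operation (point mutation), as in mutation-only CGP setups. The
# bit-string genome and target are placeholders, not a CNN encoding.
import random

random.seed(1)
TARGET = [0, 1, 1, 0, 1, 0, 0, 1]  # hypothetical target phenotype

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.2):
    # Point mutation is the only genetic operator: flip genes at random.
    return [1 - g if random.random() < rate else g for g in genome]

parent = [random.randint(0, 1) for _ in range(len(TARGET))]
for generation in range(100):
    offspring = [mutate(parent) for _ in range(4)]      # lambda = 4
    parent = max(offspring + [parent], key=fitness)     # elitist selection
    if fitness(parent) == len(TARGET):
        break
print("solved at generation", generation, "->", parent)
```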
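Similarly, to illustrate the SNN-to-DNN correspondence mentioned in the Linear Leaky-Integrate-and-Fire entry, the toy simulation below shows the standard rate-coding relationship: a non-leaky integrate-and-fire neuron with reset-by-subtraction fires at a rate of roughly ReLU(input)/threshold. This demonstrates the general correspondence only; the exact parameter mapping for the leaky model in that paper is not reproduced here.

```python
# Toy rate-coding simulation: a non-leaky integrate-and-fire neuron with
# reset-by-subtraction fires at roughly ReLU(current)/threshold, which is
# the general SNN-to-DNN correspondence (not that paper's exact mapping).

def if_firing_rate(current, threshold=1.0, steps=1000):
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += current          # integrate a constant input current
        if v >= threshold:
            v -= threshold    # reset by subtraction
            spikes += 1
    return spikes / steps

def relu(x):
    return max(0.0, x)

for current in (-0.5, 0.0, 0.2, 0.5, 0.9):
    print(f"I={current:+.1f}  rate={if_firing_rate(current):.3f}  "
          f"ReLU(I)={relu(current):.3f}")
```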