Multi-task recommendation system for scientific papers with high-way networks
- URL: http://arxiv.org/abs/2204.09930v1
- Date: Thu, 21 Apr 2022 07:40:47 GMT
- Title: Multi-task recommendation system for scientific papers with high-way networks
- Authors: Aram Karimi, Simon Dobnik
- Abstract summary: We present a multi-task recommendation system (RS) that predicts a paper recommendation and generates its meta-data, such as keywords.
The motivation behind this approach is that a paper's topics, expressed as keywords, are a useful predictor of researchers' preferences.
Our application uses Highway networks to train a very deep system, combining the benefits of RNNs and CNNs to identify the most important factors and build a latent representation.
- Score: 1.5229257192293197
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Finding and selecting the most relevant scientific papers from the
large number of papers written in a research community is one of the key
challenges for researchers today. Much of the information about scholars' and
academics' research interests lies in the papers they read, so analyzing and
extracting contextual features from these papers can help us suggest the most
relevant papers to them. In this paper, we present a multi-task recommendation
system (RS) that predicts a paper recommendation and generates its meta-data,
such as keywords. The system is implemented as a three-stage deep neural
network encoder that maps longer sequences of text to an embedding vector and
simultaneously learns to predict the recommendation rate for a particular user
and the paper's keywords. The motivation behind this approach is that a
paper's topics, expressed as keywords, are a useful predictor of researchers'
preferences. To achieve this goal, we combine RNNs, Highway networks, and
Convolutional Neural Networks to train a context-aware collaborative matrix
end-to-end. Our application uses Highway networks to train a very deep system,
combining the benefits of RNNs and CNNs to identify the most important factors
and build a latent representation. Highway networks allow us to enhance the
traditional RNN and CNN pipeline by learning more sophisticated semantic
structural representations. Using this method we can also overcome the
cold-start problem and learn latent features over long sequences of text.
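The key mechanism the abstract relies on, gated "carry" connections that let a very deep encoder train stably, can be sketched as a single highway layer. This is a minimal NumPy illustration of the general highway formulation, not the paper's implementation; the weight names (`W_h`, `W_t`) and the tanh transform are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def highway_layer(x, W_h, b_h, W_t, b_t):
    """One highway layer: y = t * H(x) + (1 - t) * x.

    H is a nonlinear transform and t is a learned transform gate;
    because (1 - t) carries the input through unchanged, gradients can
    flow through many stacked layers, which is what allows the encoder
    to be trained very deep.
    """
    h = np.tanh(x @ W_h + b_h)      # candidate transform H(x)
    t = sigmoid(x @ W_t + b_t)      # transform gate in (0, 1)
    return t * h + (1.0 - t) * x    # gated mix of transform and carry

# Toy check: with a strongly negative gate bias, the gate stays near 0
# and the layer approximates the identity (pure carry behaviour).
rng = np.random.default_rng(0)
d = 8
x = rng.normal(size=(2, d))
W_h = rng.normal(scale=0.1, size=(d, d))
W_t = rng.normal(scale=0.1, size=(d, d))
y = highway_layer(x, W_h, np.zeros(d), W_t, np.full(d, -10.0))
print(np.allclose(y, x, atol=1e-3))  # True: gate ~ 0, so output ~ input
```

In practice the gate bias is often initialized negative on purpose, so early in training the stack behaves like the identity and depth is added gradually as the gates open.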
Related papers
- Deep networks for system identification: a Survey [56.34005280792013]
System identification learns mathematical descriptions of dynamic systems from input-output data.
The main aim of the identified model is to predict new data from previous observations.
We discuss architectures commonly adopted in the literature, like feedforward, convolutional, and recurrent networks.
arXiv Detail & Related papers (2023-01-30T12:38:31Z)
- Deep Learning Architecture for Automatic Essay Scoring [0.0]
We propose a novel architecture based on recurrent neural networks (RNN) and convolutional neural networks (CNN).
In the proposed architecture, the multichannel convolutional layer learns and captures the contextual features of the word n-gram from the word embedding vectors.
Our proposed system achieves significantly higher grading accuracy than other deep learning-based AES systems.
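The multichannel convolution described above can be sketched as multiple 1-D kernels of different widths sliding over word embeddings, each acting as an n-gram detector. This is a generic NumPy illustration of the idea; the kernel widths and max-over-time pooling are common choices assumed here, not details taken from the paper.

```python
import numpy as np

def ngram_conv_features(embeddings, filters):
    """Multichannel 1-D convolution over word embeddings.

    embeddings: (seq_len, emb_dim) matrix, one row per word.
    filters: list of (width, emb_dim) kernels; a kernel of width n acts
    as an n-gram detector. Each channel is max-pooled over time, so the
    output is one feature per filter regardless of sequence length.
    """
    seq_len, _ = embeddings.shape
    features = []
    for kernel in filters:
        width = kernel.shape[0]
        # Slide the kernel over every window of `width` consecutive words.
        responses = [
            np.sum(embeddings[i:i + width] * kernel)
            for i in range(seq_len - width + 1)
        ]
        features.append(max(responses))  # max-over-time pooling
    return np.array(features)

rng = np.random.default_rng(1)
emb = rng.normal(size=(10, 4))                           # 10 words, 4-dim embeddings
kernels = [rng.normal(size=(n, 4)) for n in (2, 3, 4)]   # bi-/tri-/4-gram channels
feats = ngram_conv_features(emb, kernels)
print(feats.shape)  # (3,): one pooled feature per n-gram channel
```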
arXiv Detail & Related papers (2022-06-16T14:56:24Z)
- TeKo: Text-Rich Graph Neural Networks with External Knowledge [75.91477450060808]
We propose a novel text-rich graph neural network with external knowledge (TeKo).
We first present a flexible heterogeneous semantic network that incorporates high-quality entities.
We then introduce two types of external knowledge: structured triplets and unstructured entity descriptions.
arXiv Detail & Related papers (2022-06-15T02:33:10Z)
- How Can Graph Neural Networks Help Document Retrieval: A Case Study on CORD19 with Concept Map Generation [14.722791874800617]
Graph neural networks (GNNs) are powerful tools for representation learning on irregular data.
With unstructured texts represented as concept maps, GNNs can be exploited for tasks like document retrieval.
We conduct an empirical study on the large-scale multi-discipline dataset CORD-19.
Results show that our proposed semantics-oriented graph functions achieve better and more stable performance based on the BM25 retrieved candidates.
arXiv Detail & Related papers (2022-01-12T19:52:29Z)
- Tell Me How to Survey: Literature Review Made Simple with Automatic Reading Path Generation [16.07200776251764]
Gleaning papers worth reading from the massive literature, whether to do a quick survey or keep up with the latest advances on a specific research topic, has become a challenging task.
Existing academic search engines such as Google Scholar return relevant papers by individually calculating the relevance between each paper and query.
We introduce Reading Path Generation (RPG) which aims at automatically producing a path of papers to read for a given query.
arXiv Detail & Related papers (2021-10-12T20:58:46Z)
- Applications of Recurrent Neural Network for Biometric Authentication & Anomaly Detection [0.0]
Recurrent Neural Networks are powerful machine learning frameworks that allow for data to be saved and referenced in a temporal sequence.
This paper explores current RNN research in four important areas: biometric authentication, expression recognition, anomaly detection, and applications to aircraft.
arXiv Detail & Related papers (2021-09-13T04:37:18Z)
- From Symbols to Embeddings: A Tale of Two Representations in Computational Social Science [77.5409807529667]
The study of Computational Social Science (CSS) is data-driven and significantly benefits from the availability of online user-generated contents and social networks.
We give a thorough review of data representations in CSS for both text and networks.
We present the applications of the above representations based on the investigation of more than 400 research articles from 6 top venues involved with CSS.
arXiv Detail & Related papers (2021-06-27T11:04:44Z)
- Recent Advances in Large Margin Learning [63.982279380483526]
This paper serves as a survey of recent advances in large margin training and its theoretical foundations, mostly for (nonlinear) deep neural networks (DNNs).
We generalize the formulation of classification margins from classical research to latest DNNs, summarize theoretical connections between the margin, network generalization, and robustness, and introduce recent efforts in enlarging the margins for DNNs comprehensively.
arXiv Detail & Related papers (2021-03-25T04:12:00Z)
- Be More with Less: Hypergraph Attention Networks for Inductive Text Classification [56.98218530073927]
Graph neural networks (GNNs) have received increasing attention in the research community and demonstrated promising results on the canonical task of text classification.
Despite this success, their performance can be largely jeopardized in practice since they are unable to capture high-order interactions between words.
We propose a principled model -- hypergraph attention networks (HyperGAT) which can obtain more expressive power with less computational consumption for text representation learning.
arXiv Detail & Related papers (2020-11-01T00:21:59Z)
- NAS-Navigator: Visual Steering for Explainable One-Shot Deep Neural Network Synthesis [53.106414896248246]
We present a framework that allows analysts to effectively build the solution sub-graph space and guide the network search by injecting their domain knowledge.
Applying this technique in an iterative manner allows analysts to converge to the best performing neural network architecture for a given application.
arXiv Detail & Related papers (2020-09-28T01:48:45Z)
- Large Scale Subject Category Classification of Scholarly Papers with Deep Attentive Neural Networks [15.241086410108512]
We propose a deep attentive neural network (DANN) that classifies scholarly papers using only their abstracts.
The proposed network consists of two bi-directional recurrent neural networks followed by an attention layer.
Our best model achieves a micro-F1 of 0.76, with F1 scores for individual subject categories ranging from 0.50 to 0.95.
arXiv Detail & Related papers (2020-07-27T19:42:42Z)
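The bi-RNN-plus-attention design described above can be sketched as attention pooling over a sequence of hidden states: each state gets a relevance score, the scores are softmax-normalized, and the pooled representation is their weighted sum. This is an illustrative NumPy sketch; the simple dot-product scoring vector `w` is an assumption, not the exact attention layer used in the paper.

```python
import numpy as np

def attention_pool(H, w):
    """Attention pooling over hidden states H of shape (T, d).

    scores = H @ w; alpha = softmax(scores); output = sum_t alpha_t * H_t.
    The weights alpha let a classifier focus on the most informative
    positions of an abstract instead of relying only on the final state.
    """
    scores = H @ w                                  # (T,) score per time step
    scores = scores - scores.max()                  # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()   # softmax weights
    return alpha @ H, alpha                         # weighted sum, weights

rng = np.random.default_rng(2)
H = rng.normal(size=(6, 5))   # 6 time steps of 5-dim (bi-)RNN states
w = rng.normal(size=5)
context, alpha = attention_pool(H, w)
print(context.shape, round(alpha.sum(), 6))  # (5,) 1.0
```

The pooled `context` vector has a fixed size regardless of sequence length, which is what makes it a convenient input for the final classification layer.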
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.