SimCPSR: Simple Contrastive Learning for Paper Submission Recommendation System
- URL: http://arxiv.org/abs/2205.05940v1
- Date: Thu, 12 May 2022 08:08:22 GMT
- Title: SimCPSR: Simple Contrastive Learning for Paper Submission Recommendation System
- Authors: Duc H. Le, Tram T. Doan, Son T. Huynh, and Binh T. Nguyen
- Abstract summary: This study proposes a transformer-based model using transfer learning as an efficient approach for the paper submission recommendation system.
By combining essential information (such as the title, the abstract, and the list of keywords) with the aims and scopes of journals, the model can recommend the Top K journals that maximize the acceptance of the paper.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recommendation systems play a vital role in many areas, especially
academic fields, where they support researchers in selecting a conference or
journal and increasing the chances that their work is accepted. This study
proposes a transformer-based model using transfer learning as an efficient
approach for the paper submission recommendation system. By combining
essential information (such as the title, the abstract, and the list of
keywords) with the aims and scopes of journals, the model can recommend the Top
K journals that maximize the acceptance of the paper. Our model was developed
in two stages: (i) fine-tuning the pre-trained language model (LM) with a
simple contrastive learning framework, using a supervised contrastive
objective over all parameters to encourage the LM to learn effective document
representations; and (ii) training the fine-tuned LM on different combinations
of the features for the downstream task. Compared to previous approaches, this
study offers a more effective method for paper submission recommendation: with
the title, abstract, and keywords as input features, the model achieves Top 1,
3, 5, and 10 accuracies of 0.5173, 0.8097, 0.8862, and 0.9496 on the test set.
Incorporating the journals' aims and scopes improves these results to 0.5194,
0.8112, 0.8866, and 0.9496, respectively.
Related papers
- STAR: A Simple Training-free Approach for Recommendations using Large Language Models
Recent progress in large language models (LLMs) offers promising new approaches for recommendation system (RecSys) tasks.
We propose a framework that utilizes LLMs and can be applied to various recommendation tasks without the need for fine-tuning.
Our method achieves Hits@10 performance of +23.8% on Beauty, +37.5% on Toys and Games, and -1.8% on Sports and Outdoors.
arXiv Detail & Related papers (2024-10-21T19:34:40Z)
- Uncertainty-Aware Explainable Recommendation with Large Language Models
We develop a model that utilizes the ID vectors of user and item inputs as prompts for GPT-2.
We employ a joint training mechanism within a multi-task learning framework to optimize both the recommendation task and explanation task.
Our method achieves 1.59 DIV, 0.57 USR, and 0.41 FCR on the Yelp, TripAdvisor, and Amazon datasets, respectively.
arXiv Detail & Related papers (2024-01-31T14:06:26Z)
- Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers
We introduce a novel instruction distillation method to rank documents.
We first rank documents using the effective pairwise approach with complex instructions, and then distill the teacher predictions to the pointwise approach with simpler instructions.
Our approach surpasses the performance of existing supervised methods like monoT5 and is on par with the state-of-the-art zero-shot methods.
arXiv Detail & Related papers (2023-11-02T19:16:21Z)
- A Survey on Large Language Models for Recommendation
Large Language Models (LLMs) have emerged as powerful tools in the field of Natural Language Processing (NLP).
This survey presents a taxonomy that categorizes these models into two major paradigms: Discriminative LLM for Recommendation (DLLM4Rec) and Generative LLM for Recommendation (GLLM4Rec).
arXiv Detail & Related papers (2023-05-31T13:51:26Z)
- MIReAD: Simple Method for Learning High-quality Representations from Scientific Documents
We propose MIReAD, a simple method that learns high-quality representations of scientific papers.
We train MIReAD on more than 500,000 PubMed and arXiv abstracts across over 2,000 journal classes.
arXiv Detail & Related papers (2023-05-07T03:29:55Z)
- Large Language Models in the Workplace: A Case Study on Prompt Engineering for Job Type Classification
This case study investigates the task of job classification in a real-world setting.
The goal is to determine whether an English-language job posting is appropriate for a graduate or entry-level position.
arXiv Detail & Related papers (2023-03-13T14:09:53Z)
- Towards Universal Sequence Representation Learning for Recommender Systems
We present a novel universal sequence representation learning approach, named UniSRec.
The proposed approach utilizes the associated description text of items to learn transferable representations across different recommendation scenarios.
Our approach can be effectively transferred to new recommendation domains or platforms in a parameter-efficient way.
arXiv Detail & Related papers (2022-06-13T07:21:56Z)
- FPSRS: A Fusion Approach for Paper Submission Recommendation System
This paper presents two new approaches for recommending scientific articles.
The first approach employs RNN structures in addition to Conv1D.
We also introduce a new method, DistilBertAims, which uses DistilBERT on both uppercase and lowercase words to vectorize features such as Title, Abstract, and Keywords.
The experimental results show that the second approach performs better, reaching 62.46%, which is 12.44% higher than the best result of the previous study.
arXiv Detail & Related papers (2022-05-12T09:06:56Z)
- Integrating Semantics and Neighborhood Information with Graph-Driven Generative Models for Document Retrieval
In this paper, we encode the neighborhood information with a graph-induced Gaussian distribution, and propose to integrate the two types of information with a graph-driven generative model.
Under this approximation, we prove that the training objective can be decomposed into terms involving only singleton or pairwise documents, enabling the model to be trained as efficiently as uncorrelated models.
arXiv Detail & Related papers (2021-05-27T11:29:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.