Tag Prediction of Competitive Programming Problems using Deep Learning Techniques
- URL: http://arxiv.org/abs/2308.01863v1
- Date: Thu, 3 Aug 2023 16:39:02 GMT
- Title: Tag Prediction of Competitive Programming Problems using Deep Learning Techniques
- Authors: Taha Lokat, Divyam Prajapati, Shubhada Labde
- Abstract summary: Competitive programming is a popular method for developing programming abilities.
It can be tough for novices and even veteran programmers to navigate the wide collection of questions.
Automated tagging of the questions using text classification can help programmers find suitable problems.
- Score: 0.0
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: In the past decade, the amount of research in machine learning and
deep learning, predominantly in the area of natural language processing (NLP),
has risen dramatically. Competitive programming is a popular method for
developing programming abilities such as logic building and problem solving.
Due to the massive number of accessible questions and the variety of topics
and difficulty levels they cover, it can be tough for novices and even veteran
programmers to navigate the collection. To help programmers find questions
appropriate to their knowledge and interests, an automated method is needed.
This can be done by automatically tagging the questions using text
classification, one of the most widely researched tasks in NLP. In this paper,
we present a way to use text classification techniques to determine the domain
of a competitive programming problem. A variety of models are implemented,
including LSTM, GRU, and MLP. The dataset was scraped from Codeforces, a major
competitive programming website; a total of 2400 problems were scraped and
preprocessed for training and testing the models. The maximum accuracy reached
is 78.0%, achieved by the MLP (Multi-Layer Perceptron).
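
The abstract names the models but not the feature pipeline. The sketch below is a minimal, hypothetical version of such a setup: TF-IDF features from problem statements feeding a scikit-learn MLP. The vectorizer settings, network sizes, and toy data are illustrative assumptions, not the authors' configuration.

```python
# A minimal sketch of a tag-prediction pipeline in the spirit of the paper:
# TF-IDF features from problem statements feeding a Multi-Layer Perceptron.
# The settings and toy data below are assumptions, not the authors' setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Toy stand-ins for scraped Codeforces problem statements and their tags.
statements = [
    "find the shortest path between two nodes in a weighted graph",
    "count subarrays whose sum is divisible by k",
    "compute the longest common subsequence of two strings",
    "answer range minimum queries on a static array",
]
tags = ["graphs", "math", "dp", "data structures"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    MLPClassifier(hidden_layer_sizes=(256, 64), max_iter=500, random_state=0),
)
model.fit(statements, tags)

# Predict the tag of an unseen problem statement.
print(model.predict(["minimum spanning tree of a weighted graph"]))
```

Note that Codeforces problems often carry several tags at once, so a multi-label variant (per-tag binary outputs) may be closer to what was actually trained; the single-label form above is kept only for brevity.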
Related papers
- Estimating Difficulty Levels of Programming Problems with Pre-trained Model [18.92661958433282]
The difficulty level of each programming problem serves as an essential reference for guiding students' adaptive learning.
We formulate the problem of automatically estimating the difficulty level of each programming problem, given its textual description and an example code solution.
For tackling this problem, we propose to couple two pre-trained models, one for text modality and the other for code modality, into a unified model.
arXiv Detail & Related papers (2024-06-13T05:38:20Z)
- Probeable Problems for Beginner-level Programming-with-AI Contests [0.0]
We conduct a 2-hour programming contest for undergraduate Computer Science students from multiple institutions.
Students were permitted to work individually or in groups, and were free to use AI tools.
We analyze the extent to which the code submitted by these groups identifies missing details and identify ways in which Probeable Problems can support learning in formal and informal CS educational contexts.
arXiv Detail & Related papers (2024-05-24T00:39:32Z)
- Comprehensive Implementation of TextCNN for Enhanced Collaboration between Natural Language Processing and System Recommendation [1.7692743931394748]
This paper analyzes the state of deep learning applications across three core NLP tasks.
It takes into account the challenges posed by adversarial techniques in text generation, text classification, and semantic parsing.
An empirical study on text classification tasks demonstrates the effectiveness of interactive integration training (a minimal TextCNN sketch appears after this list).
arXiv Detail & Related papers (2024-03-12T07:25:53Z)
- Problem-Solving Guide: Predicting the Algorithm Tags and Difficulty for Competitive Programming Problems [7.955313479061445]
Most tech companies, including Google, Meta, and Amazon, require the ability to solve algorithm problems.
Our study addresses the task of predicting the algorithm tag as a useful tool for engineers and developers.
We also consider predicting the difficulty levels of algorithm problems, which can serve as guidance for estimating the time required to solve a problem.
arXiv Detail & Related papers (2023-10-09T15:26:07Z)
- Competition-Level Code Generation with AlphaCode [74.87216298566942]
We introduce AlphaCode, a system for code generation that can create novel solutions to problems that require deeper reasoning.
In simulated evaluations on recent programming competitions on the Codeforces platform, AlphaCode achieved an average ranking in the top 54.3%.
arXiv Detail & Related papers (2022-02-08T23:16:31Z)
- Detecting Requirements Smells With Deep Learning: Experiences, Challenges and Future Work [9.44316959798363]
This work aims to improve on previous work by creating a manually labeled dataset and using ensemble learning, Deep Learning (DL), and techniques such as word embeddings and transfer learning to overcome the generalization problem.
The current findings show that the dataset is imbalanced and indicate which classes need more examples.
arXiv Detail & Related papers (2021-08-06T12:45:15Z)
- Measuring Coding Challenge Competence With APPS [54.22600767666257]
We introduce APPS, a benchmark for code generation.
Our benchmark includes 10,000 problems, which range from having simple one-line solutions to being substantial algorithmic challenges.
Recent models such as GPT-Neo can pass approximately 15% of the test cases of introductory problems.
arXiv Detail & Related papers (2021-05-20T17:58:42Z)
- Few-Shot Complex Knowledge Base Question Answering via Meta Reinforcement Learning [55.08037694027792]
Complex question-answering (CQA) involves answering complex natural-language questions on a knowledge base (KB).
The conventional neural program induction (NPI) approach exhibits uneven performance when the questions have different types.
This paper proposes a meta-reinforcement learning approach to program induction in CQA to tackle the potential distributional bias in questions.
arXiv Detail & Related papers (2020-10-29T18:34:55Z)
- Retrieve, Program, Repeat: Complex Knowledge Base Question Answering via Alternate Meta-learning [56.771557756836906]
We present a novel method that automatically learns a retrieval model alternately with the programmer from weak supervision.
Our system leads to state-of-the-art performance on a large-scale task for complex question answering over knowledge bases.
arXiv Detail & Related papers (2020-10-29T18:28:16Z)
- Understanding Unnatural Questions Improves Reasoning over Text [54.235828149899625]
Complex question answering (CQA) over raw text is a challenging task.
Learning an effective CQA model requires large amounts of human-annotated data.
We address the challenge of learning a high-quality programmer (parser) by projecting natural human-generated questions into unnatural machine-generated questions.
arXiv Detail & Related papers (2020-10-19T10:22:16Z)
- Inquisitive Question Generation for High Level Text Comprehension [60.21497846332531]
We introduce INQUISITIVE, a dataset of 19K questions that are elicited while a person is reading through a document.
We show that readers engage in a series of pragmatic strategies to seek information.
We evaluate question generation models based on GPT-2 and show that our model is able to generate reasonable questions.
arXiv Detail & Related papers (2020-10-04T19:03:39Z)
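
The last entry above evaluates question generation models based on GPT-2. As a rough, hedged illustration of that family of models, the sketch below prompts the stock gpt2 checkpoint via Hugging Face transformers; the prompt format is an assumption, and the paper's models are fine-tuned rather than used off the shelf.

```python
# A hedged sketch of GPT-2-based question generation: prompt the stock
# gpt2 checkpoint with a passage and sample a continuation. The prompt
# format is an assumption; the paper fine-tunes its generation models.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

passage = "The city council approved the new budget after months of debate."
prompt = f"Passage: {passage}\nQuestion:"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=20,      # keep the generated question short
    do_sample=True,         # sample rather than greedy-decode
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```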
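
As referenced in the TextCNN entry above: a minimal Kim-style TextCNN classifier in Keras, given here as a generic reference architecture. The vocabulary size, sequence length, filter windows, and tag count are illustrative assumptions; the paper's "interactive integration training" procedure is not reproduced.

```python
# A minimal Kim-style TextCNN sketch in Keras: parallel 1-D convolutions
# with different window sizes over word embeddings, max-pooled and
# concatenated. All dimensions below are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE, SEQ_LEN, EMBED_DIM, NUM_TAGS = 20000, 200, 128, 10

inputs = layers.Input(shape=(SEQ_LEN,), dtype="int32")
x = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(inputs)

# Each window size captures n-gram features of a different length.
pooled = []
for window in (3, 4, 5):
    conv = layers.Conv1D(100, window, activation="relu")(x)
    pooled.append(layers.GlobalMaxPooling1D()(conv))

x = layers.Concatenate()(pooled)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(NUM_TAGS, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```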