Studying and Recommending Information Highlighting in Stack Overflow Answers
- URL: http://arxiv.org/abs/2401.01472v3
- Date: Thu, 25 Apr 2024 22:18:27 GMT
- Title: Studying and Recommending Information Highlighting in Stack Overflow Answers
- Authors: Shahla Shaan Ahmed, Shaowei Wang, Yuan Tian, Tse-Hsun Chen, Haoxiang Zhang
- Abstract summary: We studied 31,169,429 answers from Stack Overflow.
For training recommendation models, we choose CNN-based and BERT-based models for each type of formatting.
Our models achieve a precision ranging from 0.50 to 0.72 for different formats.
- Score: 47.98908661334215
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Context: Navigating the knowledge of Stack Overflow (SO) remains challenging. To make posts vivid to users, SO allows users to write and edit posts with Markdown or HTML so that they can leverage various formatting styles (e.g., bold, italic, and code) to highlight important information. Nonetheless, there have been limited studies on the highlighted information. Objective: We carried out the first large-scale exploratory study on the information highlighted in SO answers in our recent study. To extend our previous study, we develop approaches to automatically recommend highlighted content with formatting styles using neural network architectures initially designed for the Named Entity Recognition task. Method: In this paper, we studied 31,169,429 answers from Stack Overflow. For training recommendation models, we choose CNN-based and BERT-based models for each type of formatting (i.e., Bold, Italic, Code, and Heading) using the information highlighting dataset we collected from SO answers. Results: Our models achieve a precision ranging from 0.50 to 0.72 for different formatting types. It is easier to build a model to recommend Code than other types. Models for text formatting types (i.e., Heading, Bold, and Italic) suffer from low recall. Our analysis of failure cases indicates that the majority of them are due to missed identifications. One explanation is that the models easily learn frequently highlighted words while struggling to learn less frequent ones (i.e., long-tail knowledge). Conclusion: Our findings suggest that it is possible to develop recommendation models for highlighting information in answers with different formatting styles on Stack Overflow.
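The abstract frames highlighting recommendation as a token-level tagging task in the style of Named Entity Recognition. A minimal sketch of how a Markdown answer could be converted into BIO training labels for one formatting type (Code); the backtick-based tokenization and label scheme here are illustrative assumptions, not the authors' exact preprocessing pipeline:

```python
import re

def bio_labels_for_code(answer: str):
    """Convert a Markdown answer into (token, BIO-label) pairs for the
    Code formatting type. Inline code spans (`...`) become B-CODE/I-CODE;
    all other tokens are tagged O. Illustrative sketch only."""
    pairs = []
    # The capturing group keeps the code spans in re.split's output.
    for part in re.split(r"(`[^`]*`)", answer):
        if part.startswith("`") and part.endswith("`") and len(part) > 1:
            for i, tok in enumerate(part.strip("`").split()):
                pairs.append((tok, "B-CODE" if i == 0 else "I-CODE"))
        else:
            pairs.extend((tok, "O") for tok in part.split())
    return pairs

pairs = bio_labels_for_code("Use `pip install numpy` before importing.")
# Each token is paired with its BIO tag, ready to feed a CNN or
# BERT token-classification model.
```

Sequences labeled this way can then be fed to any standard sequence tagger, which is what makes NER architectures a natural fit for the task.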
Related papers
- Assisting in Writing Wikipedia-like Articles From Scratch with Large Language Models [11.597314728459573]
We study how to apply large language models to write grounded and organized long-form articles from scratch, with comparable breadth and depth to Wikipedia pages.
We propose STORM, a writing system for the Synthesis of Topic Outlines through Retrieval and Multi-perspective Question Asking.
arXiv Detail & Related papers (2024-02-22T01:20:17Z) - EIGEN: Expert-Informed Joint Learning Aggregation for High-Fidelity Information Extraction from Document Images [27.36816896426097]
Information Extraction from document images is challenging due to the high variability of layout formats.
We propose a novel approach, EIGEN, which combines rule-based methods with deep learning models using data programming approaches.
We empirically show that our EIGEN framework can significantly improve the performance of state-of-the-art deep models with the availability of very few labeled data instances.
arXiv Detail & Related papers (2023-11-23T13:20:42Z) - Large Language Models Meet Knowledge Graphs to Answer Factoid Questions [57.47634017738877]
We propose a method for exploring pre-trained Text-to-Text Language Models enriched with additional information from Knowledge Graphs.
We procure easily interpreted information with Transformer-based models through the linearization of the extracted subgraphs.
Final re-ranking of the answer candidates with the extracted information boosts Hits@1 scores of the pre-trained text-to-text language models by 4-6%.
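Hits@1 here is the fraction of questions whose top-ranked candidate is a correct answer. A minimal sketch of the standard metric (not that paper's evaluation code; the candidate lists and gold sets below are invented examples):

```python
def hits_at_k(ranked_candidates, gold_answers, k=1):
    """Fraction of questions for which a gold answer appears among the
    top-k ranked candidates. Illustrative implementation of the
    standard Hits@k metric."""
    hits = sum(
        1 for cands, gold in zip(ranked_candidates, gold_answers)
        if any(c in gold for c in cands[:k])
    )
    return hits / len(gold_answers)

score = hits_at_k(
    [["Paris", "Lyon"], ["Berlin", "Munich"], ["Rome", "Milan"]],
    [{"Paris"}, {"Munich"}, {"Rome"}],
)
# → 2/3: the first and third questions rank a gold answer at position 1.
```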
arXiv Detail & Related papers (2023-10-03T15:57:00Z) - Representation Learning for Stack Overflow Posts: How Far are We? [14.520780251680586]
State-of-the-art Stack Overflow post representation models are Post2Vec and BERTOverflow.
Despite their promising results, these representation methods have not been evaluated in the same experimental setting.
We propose SOBERT, which employs a simple-yet-effective strategy to improve the best-performing model.
arXiv Detail & Related papers (2023-03-13T04:49:06Z) - Ensemble Transfer Learning for Multilingual Coreference Resolution [60.409789753164944]
A problem that frequently occurs when working with a non-English language is the scarcity of annotated training data.
We design a simple but effective ensemble-based framework that combines various transfer learning techniques.
We also propose a low-cost TL method that bootstraps coreference resolution models by utilizing Wikipedia anchor texts.
arXiv Detail & Related papers (2023-01-22T18:22:55Z) - DapStep: Deep Assignee Prediction for Stack Trace Error rePresentation [61.99379022383108]
We propose new deep learning models to solve the bug triage problem.
The models are based on a bidirectional recurrent neural network with attention and on a convolutional neural network.
To improve the quality of ranking, we propose using additional information from version control system annotations.
arXiv Detail & Related papers (2022-01-14T00:16:57Z) - Topic Adaptation and Prototype Encoding for Few-Shot Visual Storytelling [81.33107307509718]
We propose a topic adaptive storyteller to model the ability of inter-topic generalization.
We also propose a prototype encoding structure to model the ability of intra-topic derivation.
Experimental results show that topic adaptation and prototype encoding structure mutually bring benefit to the few-shot model.
arXiv Detail & Related papers (2020-08-11T03:55:11Z) - ENT-DESC: Entity Description Generation by Exploring Knowledge Graph [53.03778194567752]
In practice, the input knowledge could be more than enough, since the output description may only cover the most significant knowledge.
We introduce a large-scale and challenging dataset to facilitate the study of such a practical scenario in KG-to-text.
We propose a multi-graph structure that is able to represent the original graph information more comprehensively.
arXiv Detail & Related papers (2020-04-30T14:16:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.