RITA: A Tool for Automated Requirements Classification and Specification from Online User Feedback
- URL: http://arxiv.org/abs/2601.11362v1
- Date: Fri, 16 Jan 2026 15:18:33 GMT
- Title: RITA: A Tool for Automated Requirements Classification and Specification from Online User Feedback
- Authors: Manjeshwar Aniruddh Mallya, Alessio Ferrari, Mohammad Amin Zadenoori, Jacek Dąbrowski
- Abstract summary: RITA is a tool that integrates lightweight open-source large language models into a unified workflow for feedback-driven RE. RITA supports automated request classification, non-functional requirement identification, and natural-language requirements specification generation.
- Score: 0.777471208829183
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Context and motivation. Online user feedback is a valuable resource for requirements engineering, but its volume and noise make analysis difficult. Existing tools support individual feedback analysis tasks, but their capabilities are rarely integrated into end-to-end support. Problem. The lack of end-to-end integration limits the practical adoption of existing RE tools and makes it difficult to assess their real-world usefulness. Solution. To address this challenge, we present RITA, a tool that integrates lightweight open-source large language models into a unified workflow for feedback-driven RE. RITA supports automated request classification, non-functional requirement identification, and natural-language requirements specification generation from online feedback via a user-friendly interface, and integrates with Jira for seamless transfer of requirements specifications to development tools. Results and conclusions. RITA exploits previously evaluated LLM-based RE techniques to efficiently transform raw user feedback into requirements artefacts, helping bridge the gap between research and practice. A demonstration is available at: https://youtu.be/8meCLpwQWV8.
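The abstract describes a three-stage workflow: classify each feedback item by request type, identify non-functional requirements, then generate a natural-language specification. A minimal sketch of such a pipeline is shown below; the label taxonomy, function names, and keyword-based stubs are assumptions for illustration and stand in for the lightweight open-source LLMs the tool actually uses (RITA's internal API is not described in the source).

```python
# Hypothetical sketch of a feedback-driven RE pipeline in the spirit of RITA.
# The keyword heuristics below are placeholders; the real tool prompts
# lightweight local LLMs for each stage.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    source_feedback: str
    request_type: str                       # e.g. "bug report", "feature request"
    nfr_categories: list = field(default_factory=list)
    specification: str = ""

def classify_request(feedback: str) -> str:
    """Stub request classifier (assumed taxonomy, not RITA's)."""
    text = feedback.lower()
    if any(w in text for w in ("crash", "error", "broken")):
        return "bug report"
    if any(w in text for w in ("add", "wish", "please support")):
        return "feature request"
    return "other"

def identify_nfrs(feedback: str) -> list:
    """Stub NFR detector over a few common quality attributes."""
    keywords = {
        "usability": ("confusing", "hard to use"),
        "performance": ("slow", "lag"),
        "security": ("password", "leak"),
    }
    text = feedback.lower()
    return [nfr for nfr, ws in keywords.items() if any(w in text for w in ws)]

def specify(feedback: str, request_type: str) -> str:
    """Stub NL specification; RITA generates this text with an LLM."""
    return f"The system shall address the {request_type}: {feedback!r}"

def process(feedback: str) -> Requirement:
    """Run all three stages and return a requirements artefact,
    which could then be pushed to an issue tracker such as Jira."""
    rtype = classify_request(feedback)
    return Requirement(feedback, rtype, identify_nfrs(feedback),
                       specify(feedback, rtype))

req = process("The app crashes and is really slow on startup")
print(req.request_type)    # bug report
print(req.nfr_categories)  # ['performance']
```

The point of the sketch is the staged structure, not the stubs: each stage is an independent function with a plain-text input, so an LLM-backed implementation can replace any stub without changing the pipeline.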
Related papers
- ABC-Bench: Benchmarking Agentic Backend Coding in Real-World Development [72.4729759618632]
We introduce ABC-Bench, a benchmark to evaluate agentic backend coding within a realistic, executable workflow. We curated 224 practical tasks spanning 8 languages and 19 frameworks from open-source repositories. Our evaluation reveals that even state-of-the-art models struggle to deliver reliable performance on these holistic tasks.
arXiv Detail & Related papers (2026-01-16T08:23:52Z) - GUISpector: An MLLM Agent Framework for Automated Verification of Natural Language Requirements in GUI Prototypes [58.197090145723735]
We introduce a novel framework that leverages a multi-modal (M)LLM-based agent for the automated verification of NL requirements in GUI prototypes. GUISpector extracts detailed NL feedback from the agent's verification process, providing developers with actionable insights. We present an integrated tool that unifies these capabilities, offering an interface for supervising verification runs, inspecting agent rationales, and managing the end-to-end requirements verification process.
arXiv Detail & Related papers (2025-10-06T13:15:24Z) - Towards an Efficient, Customizable, and Accessible AI Tutor [5.225254533678075]
We propose an offline Retrieval-Augmented Generation (RAG) pipeline that pairs a small language model (SLM) with a robust retrieval mechanism. We evaluate the efficacy of this pipeline using domain-specific educational content, focusing on biology coursework.
arXiv Detail & Related papers (2025-10-04T13:33:40Z) - Online-Optimized RAG for Tool Use and Function Calling [10.294181998196555]
Retrieval-augmented generation (RAG) drives tool use and function calling by matching embedded user queries against pre-specified tool/function descriptions. Online-optimized RAG adapts retrieval embeddings from live interactions using minimal feedback.
arXiv Detail & Related papers (2025-09-24T09:08:46Z) - Feedback-Driven Tool-Use Improvements in Large Language Models via Automated Build Environments [70.42705564227548]
We propose an automated environment construction pipeline for large language models (LLMs). This enables the creation of high-quality training environments that provide detailed and measurable feedback without relying on external tools. We also introduce a verifiable reward mechanism that evaluates both the precision of tool use and the completeness of task execution.
arXiv Detail & Related papers (2025-08-12T09:45:19Z) - GUI-ReRank: Enhancing GUI Retrieval with Multi-Modal LLM-based Reranking [55.762798168494726]
GUI-ReRank is a novel framework that integrates rapid embedding-based constrained retrieval models with highly effective MLLM-based reranking techniques. We evaluated our approach on an established NL-based GUI retrieval benchmark.
arXiv Detail & Related papers (2025-08-05T10:17:38Z) - FamilyTool: A Multi-hop Personalized Tool Use Benchmark [93.80355496575281]
FamilyTool is a benchmark grounded in a family-based knowledge graph (KG) that simulates personalized, multi-hop tool use scenarios. Experiments reveal significant performance gaps in state-of-the-art Large Language Models (LLMs). FamilyTool serves as a critical resource for evaluating and advancing LLM agents' reasoning, adaptability, and scalability in complex, dynamic environments.
arXiv Detail & Related papers (2025-04-09T10:42:36Z) - Adaptive Tool Use in Large Language Models with Meta-Cognition Trigger [49.81945268343162]
We propose MeCo, an adaptive decision-making strategy for external tool use. MeCo quantifies metacognitive scores by capturing high-level cognitive signals in the representation space. MeCo is fine-tuning-free and incurs minimal cost.
arXiv Detail & Related papers (2025-02-18T15:45:01Z) - FREYR: A Framework for Recognizing and Executing Your Requests [2.4797200957733576]
This paper introduces FREYR, a streamlined framework that modularizes the tool usage process into separate steps. We show that FREYR achieves superior performance compared to conventional tool usage methods. We evaluate FREYR on a set of real-world test cases specific to video game design and compare it against traditional tool usage as provided by the Ollama API.
arXiv Detail & Related papers (2025-01-21T11:08:18Z) - Data-Efficient Massive Tool Retrieval: A Reinforcement Learning Approach for Query-Tool Alignment with Language Models [28.67532617021655]
Large language models (LLMs) integrated with external tools and APIs have successfully addressed complex tasks by using in-context learning or fine-tuning.
Despite this progress, the vast scale of tool retrieval remains challenging due to stringent input length constraints.
We propose a pre-retrieval strategy from an extensive repository, effectively framing the problem as the massive tool retrieval (MTR) task.
arXiv Detail & Related papers (2024-10-04T07:58:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.