Connecting the Dots of Knowledge in Agile Software Development
- URL: http://arxiv.org/abs/2306.05742v1
- Date: Fri, 9 Jun 2023 08:19:07 GMT
- Title: Connecting the Dots of Knowledge in Agile Software Development
- Authors: Raquel Ouriques, Tony Gorschek, Daniel Mendez, Fabian Fagerholm
- Abstract summary: This article discusses the importance of managing knowledge as a resource due to its great potential to create economic value.
We detail the types of knowledge resources, the challenges associated with their management, and potential solutions to maximise their utility.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This article discusses the importance of managing knowledge as a resource due to its great potential to create economic value. We detail the types of knowledge resources, the challenges associated with their management, and potential solutions to maximise their utility. Our contribution is based on empirical studies performed in an industry context.
Related papers
- Reliability Across Parametric and External Knowledge: Understanding Knowledge Handling in LLMs [11.860265967829884]
Large Language Models (LLMs) enhance their problem-solving capability by leveraging both parametric and external knowledge.
We introduce a framework for analyzing knowledge-handling based on two key dimensions: the presence of parametric knowledge and the informativeness of external knowledge.
We demonstrate that training on data constructed based on the knowledge-handling scenarios improves LLMs' reliability in integrating and utilizing knowledge.
arXiv Detail & Related papers (2025-02-19T11:49:23Z)
- Large Language Models are Limited in Out-of-Context Knowledge Reasoning [65.72847298578071]
Large Language Models (LLMs) possess extensive knowledge and strong capabilities in performing in-context reasoning.
This paper focuses on a significant aspect of out-of-context reasoning: Out-of-Context Knowledge Reasoning (OCKR), which combines multiple pieces of knowledge to infer new knowledge.
arXiv Detail & Related papers (2024-06-11T15:58:59Z)
- Towards Knowledge-Grounded Natural Language Understanding and Generation [1.450405446885067]
This thesis investigates how natural language understanding and generation with transformer models can benefit from grounding the models with knowledge representations.
Studies in this thesis find that incorporating relevant and up-to-date knowledge of entities benefits fake news detection.
It is established that other general forms of knowledge, such as parametric and distilled knowledge, enhance multimodal and multilingual knowledge-intensive tasks.
arXiv Detail & Related papers (2024-03-22T17:32:43Z)
- InfuserKI: Enhancing Large Language Models with Knowledge Graphs via Infuser-Guided Knowledge Integration [58.61492157691623]
Knowledge-integration methods have been developed that augment LLMs with domain-specific knowledge graphs through external modules.
Our research focuses on a novel problem: efficiently integrating unknown knowledge into LLMs without unnecessary overlap of known knowledge.
A risk of introducing new knowledge is the potential forgetting of existing knowledge.
arXiv Detail & Related papers (2024-02-18T03:36:26Z)
- Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators [78.63553017938911]
Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks.
However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge.
We introduce CONNER, designed to evaluate generated knowledge from six important perspectives.
arXiv Detail & Related papers (2023-10-11T08:22:37Z)
- UNTER: A Unified Knowledge Interface for Enhancing Pre-trained Language Models [100.4659557650775]
We propose a UNified knowledge inTERface, UNTER, to provide a unified perspective to exploit both structured knowledge and unstructured knowledge.
With both forms of knowledge injected, UNTER gains continuous improvements on a series of knowledge-driven NLP tasks.
arXiv Detail & Related papers (2023-05-02T17:33:28Z)
- Materialized Knowledge Bases from Commonsense Transformers [8.678138390075077]
No materialized resource of commonsense knowledge generated this way is publicly available.
This paper fills that gap and uses the materialized resources to perform a detailed analysis of the precision and recall of this approach.
We identify common problem cases, and outline use cases enabled by materialized resources.
arXiv Detail & Related papers (2021-12-29T20:22:05Z)
- Contextualized Knowledge-aware Attentive Neural Network: Enhancing Answer Selection with Knowledge [77.77684299758494]
We extensively investigate approaches to enhancing the answer selection model with external knowledge from a knowledge graph (KG).
First, we present a context-knowledge interaction learning framework, Knowledge-aware Neural Network (KNN), which learns the QA sentence representations by considering a tight interaction with the external knowledge from KG and the textual information.
To handle the diversity and complexity of KG information, we propose a Contextualized Knowledge-aware Attentive Neural Network (CKANN), which improves knowledge representation learning with structure information via a customized Graph Convolutional Network (GCN) and comprehensively learns context-based and knowledge-based sentence representations.
arXiv Detail & Related papers (2021-04-12T05:52:20Z)
- Dimensions of Commonsense Knowledge [60.49243784752026]
We survey a wide range of popular commonsense sources with a special focus on their relations.
We consolidate these relations into 13 knowledge dimensions, each abstracting over more specific relations found in sources.
arXiv Detail & Related papers (2021-01-12T17:52:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.